1. Petersen SE, Seitzman BA, Nelson SM, Wig GS, Gordon EM. Principles of cortical areas and their implications for neuroimaging. Neuron 2024; 112:2837-2853. [PMID: 38834069] [DOI: 10.1016/j.neuron.2024.05.008]
Abstract
Cortical organization should constrain the study of how the brain performs behavior and cognition. A fundamental concept in cortical organization is that of arealization: that the cortex is parceled into discrete areas. In part one of this report, we review how non-human animal studies have illuminated principles of cortical arealization by revealing: (1) what defines a cortical area, (2) how cortical areas are formed, (3) how cortical areas interact with one another, and (4) what "computations" or "functions" areas perform. In part two, we discuss how these principles apply to neuroimaging research. In doing so, we highlight several examples where the commonly accepted interpretation of neuroimaging observations requires assumptions that violate the principles of arealization, including nonstationary areas that move on short time scales, large-scale gradients as organizing features, and cortical areas with singular functionality that perfectly map psychological constructs. Our belief is that principles of neurobiology should strongly guide the nature of computational explanations.
Affiliation(s)
- Steven E Petersen
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA; Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA; Department of Pediatrics, Washington University School of Medicine, St. Louis, MO 63110, USA
- Benjamin A Seitzman
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Steven M Nelson
- Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN 55455, USA; Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Gagan S Wig
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX 75235, USA; Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Evan M Gordon
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
2. Mischler G, Keshishian M, Bickel S, Mehta AD, Mesgarani N. Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. Neuroimage 2023; 266:119819. [PMID: 36529203] [PMCID: PMC10510744] [DOI: 10.1016/j.neuroimage.2022.119819]
Abstract
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
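As background for the STRF baseline mentioned above, the sketch below shows a standard linear STRF fit and prediction: the response is modeled as a ridge-regression weighting of the spectrogram's recent time-frequency history. The array sizes, lag count, and regularization value are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def lagged_design(spectrogram, n_lag):
    """Stack lagged copies of a (freq x time) spectrogram into a
    (time x freq*lag) design matrix for a causal linear STRF."""
    n_freq, n_time = spectrogram.shape
    X = np.zeros((n_time, n_freq * n_lag))
    for lag in range(n_lag):
        X[lag:, lag * n_freq:(lag + 1) * n_freq] = spectrogram[:, :n_time - lag].T
    return X

def fit_strf(spectrogram, response, n_lag=20, alpha=1.0):
    """Ridge-regression estimate of STRF weights from paired stimulus/response data."""
    X = lagged_design(spectrogram, n_lag)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ response)
    return w  # flattened (lag x freq) weights; reshape to inspect the receptive field

# Toy usage: random arrays stand in for a stimulus spectrogram and a recording.
rng = np.random.default_rng(0)
spec = rng.standard_normal((32, 500))      # 32 frequency channels, 500 time bins
resp = rng.standard_normal(500)            # measured neural response
w = fit_strf(spec, resp)
prediction = lagged_design(spec, 20) @ w   # linear STRF prediction of the response
```

A DNN replaces this single fixed linear filter with stimulus-dependent, nonlinear filtering, which is what lets it capture the gain and receptive-field shape changes described in the abstract.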
Affiliation(s)
- Gavin Mischler
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Menoua Keshishian
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Ashesh D Mehta
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States.
3. Wingfield C, Zhang C, Devereux B, Fonteneau E, Thwaites A, Liu X, Woodland P, Marslen-Wilson W, Su L. On the similarities of representations in artificial and brain neural networks for speech recognition. Front Comput Neurosci 2022; 16:1057439. [PMID: 36618270] [PMCID: PMC9811675] [DOI: 10.3389/fncom.2022.1057439]
Abstract
Introduction: In recent years, machines powered by deep learning have achieved near-human levels of performance in speech recognition. The fields of artificial intelligence and cognitive neuroscience have finally reached a similar level of performance, despite their huge differences in implementation, and so deep learning models can, in principle, serve as candidates for mechanistic models of the human auditory system.
Methods: Utilizing high-performance automatic speech recognition systems, and advanced non-invasive human neuroimaging technology such as magnetoencephalography and multivariate pattern-information analysis, the current study aimed to relate machine-learned representations of speech to recorded human brain representations of the same speech.
Results: In one direction, we found a quasi-hierarchical functional organization in human auditory cortex qualitatively matched with the hidden layers of deep artificial neural networks trained as part of an automatic speech recognizer. In the reverse direction, we modified the hidden layer organization of the artificial neural network based on neural activation patterns in human brains. The result was a substantial improvement in word recognition accuracy and learned speech representations.
Discussion: We have demonstrated that artificial and brain neural networks can be mutually informative in the domain of speech recognition.
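One common way to formalize the comparison of machine-learned and brain representations described above is representational similarity analysis (RSA), in which the two systems are compared through their stimulus-by-stimulus dissimilarity structure. The sketch below uses random arrays in place of the study's DNN layer activations and MEG response patterns, so it is only a schematic of the logic.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    """Condensed representational dissimilarity matrix: one row of
    `activations` per stimulus, one column per unit/sensor/feature."""
    return pdist(activations, metric="correlation")

# Toy data: 40 speech stimuli represented in a hypothetical DNN hidden layer
# and in a hypothetical set of MEG response patterns (flattened per stimulus).
rng = np.random.default_rng(1)
dnn_layer = rng.standard_normal((40, 256))
brain_resp = rng.standard_normal((40, 1000))

# Second-order comparison: how similar are the two representational geometries?
rho, p = spearmanr(rdm(dnn_layer), rdm(brain_resp))
print(f"layer-to-brain representational similarity: rho={rho:.3f} (p={p:.3f})")
```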
Affiliation(s)
- Cai Wingfield
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Chao Zhang
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Barry Devereux
- School of Electronics, Electrical Engineering and Computer Science, Queens University Belfast, Belfast, United Kingdom
- Elisabeth Fonteneau
- Department of Psychology, University Paul Valéry Montpellier, Montpellier, France
- Andrew Thwaites
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Xunying Liu
- Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Phil Woodland
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Li Su
- Department of Neuroscience, Neuroscience Institute, Insigneo Institute for in silico Medicine, University of Sheffield, Sheffield, United Kingdom; Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
4. Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, Schalk G, Kanwisher N. A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. [PMID: 35196507] [PMCID: PMC9092957] [DOI: 10.1016/j.cub.2022.01.069]
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
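The component analysis referred to above summarizes many electrodes' responses to a sound set as a small number of shared response profiles plus electrode-specific weights. The sketch below uses a plain truncated SVD as a generic stand-in for the authors' statistical decomposition; the matrix sizes and number of components are assumptions, not values from the study.

```python
import numpy as np

# Toy data matrix: responses of 120 electrodes to 165 natural sounds.
rng = np.random.default_rng(2)
data = rng.standard_normal((165, 120))        # sounds x electrodes

# Truncated SVD: approximate the data as (sounds x k) component response
# profiles times (k x electrodes) weights; k is chosen by cross-validation
# or statistical criteria in practice.
k = 6
U, s, Vt = np.linalg.svd(data, full_matrices=False)
components = U[:, :k] * s[:k]                 # component response profiles
weights = Vt[:k, :]                           # per-electrode component weights

reconstruction = components @ weights
explained = 1 - np.sum((data - reconstruction) ** 2) / np.sum(data ** 2)
print(f"variance explained by {k} components: {explained:.2f}")
```

In the study, a component's response profile across sounds (for example, high for sung music only) is what identifies it as song-selective, while its weights indicate where it is expressed anatomically.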
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Institute, Columbia University, New York, NY, USA; HHMI Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France; Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Jenelle Feather
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
- Dana Boebinger
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, NY, USA; National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
- Anthony Ritaccio
- Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurology, Mayo Clinic, Jacksonville, FL, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
5. Norman-Haignere SV, Long LK, Devinsky O, Doyle W, Irobunda I, Merricks EM, Feldstein NA, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 2022; 6:455-469. [PMID: 35145280] [PMCID: PMC8957490] [DOI: 10.1038/s41562-021-01261-y]
Abstract
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window when stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
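A schematic of the integration-window logic described above: present the same segments in different surrounding contexts and find the shortest segment duration at which the response becomes context-invariant. The simulation below is illustrative only; the durations, threshold, and noise model are assumptions, not the authors' estimator.

```python
import numpy as np

def cross_context_similarity(resp_a, resp_b):
    """Mean correlation between responses to the same segments embedded in
    two different surrounding contexts (one row per segment)."""
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(resp_a, resp_b)])

# Simulate an electrode that integrates over ~200 ms: responses to short
# segments are contaminated by context, responses to long segments are not.
rng = np.random.default_rng(3)
durations_ms = np.array([31, 62, 125, 250, 500, 1000])
window_ms = 200.0
similarity = []
for dur in durations_ms:
    context_frac = max(0.0, 1 - dur / window_ms)  # share of window outside the segment
    shared = rng.standard_normal((20, 50))        # segment-driven response component
    ctx_a = rng.standard_normal((20, 50))         # context-driven component, context A
    ctx_b = rng.standard_normal((20, 50))         # context-driven component, context B
    resp_a = (1 - context_frac) * shared + context_frac * ctx_a
    resp_b = (1 - context_frac) * shared + context_frac * ctx_b
    similarity.append(cross_context_similarity(resp_a, resp_b))

# Estimate the window as the shortest duration where similarity exceeds 0.9.
estimate = durations_ms[np.argmax(np.array(similarity) > 0.9)]
print(dict(zip(durations_ms.tolist(), np.round(similarity, 2).tolist())))
print(f"estimated integration window: <= {estimate} ms")
```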
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; HHMI Postdoctoral Fellow of the Life Sciences Research Foundation
- Laura K. Long
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University
- Orrin Devinsky
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center
- Werner Doyle
- Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Neurosurgery, NYU Langone Medical Center
- Ifeoma Irobunda
- Department of Neurology, Columbia University Irving Medical Center
- Neil A. Feldstein
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Guy M. McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Adeen Flinker
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Biomedical Engineering, NYU Tandon School of Engineering
- Nima Mesgarani
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University; Department of Electrical Engineering, Columbia University
6. Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. [PMID: 34875029] [PMCID: PMC9376871] [DOI: 10.1093/cercor/bhab434]
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G128QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
7. Functionally homologous representation of vocalizations in the auditory cortex of humans and macaques. Curr Biol 2021; 31:4839-4844.e4. [PMID: 34506729] [PMCID: PMC8585503] [DOI: 10.1016/j.cub.2021.08.043]
Abstract
How the evolution of speech has transformed the human auditory cortex compared to other primates remains largely unknown. While primary auditory cortex is organized largely similarly in humans and macaques [1], the picture is much less clear at higher levels of the anterior auditory pathway [2], particularly regarding the processing of conspecific vocalizations (CVs). A "voice region" similar to the human voice-selective areas [3,4] has been identified in the macaque right anterior temporal lobe with functional MRI [5]; however, its anatomical localization, seemingly inconsistent with that of the human temporal voice areas (TVAs), has suggested a "repositioning of the voice area" in recent human evolution [6]. Here we report a functional homology in the cerebral processing of vocalizations by macaques and humans, using comparative fMRI and a condition-rich auditory stimulation paradigm. We find that the anterior temporal lobe of both species possesses cortical voice areas that are bilateral and not only prefer conspecific vocalizations but also implement a representational geometry categorizing them apart from all other sounds in a species-specific but homologous manner. These results reveal a more similar functional organization of higher-level auditory cortex in macaques and humans than currently known.
8. Russ BE, Petkov CI, Kwok SC, Zhu Q, Belin P, Vanduffel W, Hamed SB. Common functional localizers to enhance NHP & cross-species neuroscience imaging research. Neuroimage 2021; 237:118203. [PMID: 34048898] [PMCID: PMC8529529] [DOI: 10.1016/j.neuroimage.2021.118203]
Abstract
Functional localizers are invaluable as they can help define regions of interest, provide cross-study comparisons, and, most importantly, allow for the aggregation and meta-analyses of data across studies and laboratories. To achieve these goals within the non-human primate (NHP) imaging community, there is a pressing need for the use of standardized and validated localizers that can be readily implemented across different groups. The goal of this paper is to provide an overview of the value of localizer protocols to imaging research, to describe a number of commonly used or novel localizers within NHPs, and to outline the keys to implementing them across studies. As has been shown with the aggregation of resting-state imaging data in the original PRIME-DE submissions, we believe that the field is ready to apply the same initiative to task-based functional localizers in NHP imaging. By coming together to collect large datasets across research groups, implementing the same functional localizers, and sharing the localizers and data via PRIME-DE, it is now possible to fully test their robustness, selectivity, and specificity. To this end, we reviewed a number of common localizers and created a repository of well-established localizers that are easily accessible and implemented through the PRIME-RE platform.
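For readers outside the imaging field, the sketch below shows the core of how any block-design localizer is analyzed: boxcar regressors convolved with a hemodynamic response function, a per-voxel GLM fit, and a contrast between conditions. The design timing, HRF shape, and condition labels are generic assumptions, not a PRIME-DE specification.

```python
import numpy as np
from scipy.stats import gamma

# Toy block design: two conditions (e.g., faces vs. objects) in alternating
# 20 s blocks, sampled at TR = 2 s, analyzed for a single voxel.
tr, n_vols = 2.0, 150
time = np.arange(n_vols) * tr
cond_a = ((time // 20) % 2 == 0).astype(float)     # boxcar for condition A
cond_b = 1.0 - cond_a                              # boxcar for condition B

# Approximate double-gamma HRF (peak near 5 s, small undershoot near 15 s).
t = np.arange(0, 30, tr)
hrf = gamma.pdf(t, a=6) - gamma.pdf(t, a=16) / 6.0
hrf /= hrf.max()

X = np.column_stack([np.convolve(c, hrf)[:n_vols] for c in (cond_a, cond_b)])
X = np.column_stack([X, np.ones(n_vols)])          # add an intercept column

# Simulated voxel that responds more strongly to condition A, plus noise.
rng = np.random.default_rng(4)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n_vols)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"betas: {np.round(beta, 2)}, localizer contrast (A - B): {beta[0] - beta[1]:.2f}")
```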
Affiliation(s)
- Brian E Russ
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, United States; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York City, NY, United States; Department of Psychiatry, New York University at Langone, New York City, NY, United States.
- Christopher I Petkov
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, United Kingdom
- Sze Chai Kwok
- Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics Ministry of Education, Shanghai Key Laboratory of Magnetic Resonance, Affiliated Mental Health Center (ECNU), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Division of Natural and Applied Sciences, Duke Kunshan University, Kunshan, Jiangsu, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Qi Zhu
- Cognitive Neuroimaging Unit, INSERM, CEA, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Laboratory for Neuro-and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, 3000, Belgium
- Pascal Belin
- Institut de Neurosciences de La Timone, Aix-Marseille Université et CNRS, Marseille, 13005, France
- Wim Vanduffel
- Laboratory for Neuro-and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, 3000, Belgium; Leuven Brain Institute, KU Leuven, Leuven, 3000, Belgium; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA 02144, United States.
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, Université de Lyon - CNRS, France.
9. Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021; 226:2019-2039. [PMID: 34100151] [DOI: 10.1007/s00429-021-02313-9]
Abstract
Many aging adults experience some form of hearing problems that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity and the auditory perceptual difficulties that may result from it, and we outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Blake E Butler
- Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada; National Centre for Audiology, University of Western Ontario, London, ON, Canada
10. Neuronal figure-ground responses in primate primary auditory cortex. Cell Rep 2021; 35:109242. [PMID: 34133935] [PMCID: PMC8220257] [DOI: 10.1016/j.celrep.2021.109242]
Abstract
Figure-ground segregation, the brain's ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.

Highlights:
- Neuronal figure-ground modulation in primary auditory cortex
- A rate code is used to signal the presence of auditory figures
- Anteriorly located recording sites encode perceptual saliency
- Figure-ground modulation is present without perceptual detection
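The "coherent chord sequences" above are typically built as stochastic figure-ground (SFG) stimuli: sequences of random tone chords in which a fixed subset of frequencies repeats from chord to chord to form the figure. The generator below is a schematic version with assumed parameter values (frequency range, chord duration, coherence level), not the study's stimulus code.

```python
import numpy as np

def sfg_stimulus(duration_s=2.0, chord_s=0.05, fs=44100, n_background=10,
                 coherence=4, figure_onset_s=1.0, rng=None):
    """Stochastic figure-ground stimulus: random chords throughout, plus a
    'figure' of `coherence` fixed frequencies repeated from `figure_onset_s`."""
    rng = np.random.default_rng() if rng is None else rng
    freq_pool = np.geomspace(179.0, 7246.0, 128)       # log-spaced candidate tones
    n_chords = int(duration_s / chord_s)
    n_samp = int(chord_s * fs)
    t = np.arange(n_samp) / fs
    figure_freqs = rng.choice(freq_pool, coherence, replace=False)
    chords = []
    for i in range(n_chords):
        freqs = list(rng.choice(freq_pool, n_background, replace=False))
        if i * chord_s >= figure_onset_s:              # figure components repeat
            freqs += list(figure_freqs)
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        chords.append(chord * np.hanning(n_samp))      # smooth chord edges
    x = np.concatenate(chords)
    return x / np.max(np.abs(x))

stimulus = sfg_stimulus(rng=np.random.default_rng(5))  # 2 s waveform at 44.1 kHz
```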
11. Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. [PMID: 33596723] [PMCID: PMC8285655] [DOI: 10.1152/jn.00588.2020]
Abstract
Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.

New & Noteworthy: We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
12. Hajizadeh A, Matysiak A, Brechmann A, König R, May PJC. Why do humans have unique auditory event-related fields? Evidence from computational modeling and MEG experiments. Psychophysiology 2021; 58:e13769. [PMID: 33475173] [DOI: 10.1111/psyp.13769]
Abstract
Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, although certain characteristic deflections (e.g., P1m, N1m, and P2m) can be identified in most subjects. Here, we explore the reason for this subject-specificity through a combination of MEG measurements and computational modeling of auditory cortex. We test whether ERF subject-specificity can predominantly be explained in terms of each subject having an individual cortical gross anatomy, which modulates the MEG signal, or whether individual cortical dynamics is also at play. To our knowledge, this is the first time that tools to address this question are being presented. The effects of anatomical and dynamical variation on the MEG signal are simulated in a model describing the core-belt-parabelt structure of the auditory cortex, with the dynamics based on the leaky-integrator neuron model. The experimental and simulated ERFs are characterized in terms of the N1m amplitude, latency, and width. Also, we examine the waveform grand-averaged across subjects, and the standard deviation of this grand average. The results show that the intersubject variability of the ERF arises out of both the anatomy and the dynamics of auditory cortex being specific to each subject. Moreover, our results suggest that the latency variation of the N1m is largely related to subject-specific dynamics. The findings are discussed in terms of how learning, plasticity, and sound detection are reflected in the auditory ERFs. The notion of the grand-averaged ERF is critically evaluated.
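For orientation, a leaky-integrator rate model of the kind used above can be written in the generic form below, where $u_i$ is the state of cortical column $i$, $W_{ij}$ the synaptic coupling, $g$ a sigmoidal firing-rate function, and $I_i$ the afferent input; the notation is ours, and the study's full model additionally specifies the core-belt-parabelt connectivity and the MEG forward model.

```latex
\tau \frac{du_i(t)}{dt} = -u_i(t) + \sum_j W_{ij}\, g\bigl(u_j(t)\bigr) + I_i(t),
\qquad g(u) = \frac{1}{1 + e^{-u}}
```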
Affiliation(s)
- Aida Hajizadeh
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- Artur Matysiak
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- André Brechmann
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Magdeburg, Germany
- Reinhard König
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- Patrick J C May
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany; Department of Psychology, Lancaster University, Lancaster, UK
13. Griffiths TD, Lad M, Kumar S, Holmes E, McMurray B, Maguire EA, Billig AJ, Sedley W. How Can Hearing Loss Cause Dementia? Neuron 2020; 108:401-412. [PMID: 32871106] [PMCID: PMC7664986] [DOI: 10.1016/j.neuron.2020.08.003]
Abstract
Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Affiliation(s)
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA.
- Meher Lad
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Sukhbinder Kumar
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, IA 52242, USA
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- William Sedley
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
14. Sohoglu E, Kumar S, Chait M, Griffiths TD. Multivoxel codes for representing and integrating acoustic features in human cortex. Neuroimage 2020; 217:116661. [PMID: 32081785] [PMCID: PMC7339141] [DOI: 10.1016/j.neuroimage.2020.116661]
Abstract
Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.
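A standard test for the independent (feature-invariant) multivoxel code described above is cross-generalized decoding: train a classifier to discriminate frequency at one AM rate and test it at the other, with above-chance transfer indicating frequency coding that is invariant to AM rate. The sketch below runs this on simulated voxel patterns with scikit-learn; it is not the study's analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n_trials, n_voxels = 60, 200

# Fixed multivoxel signatures for two frequencies and two AM rates; here the
# frequency signature is built to be invariant to AM rate (independent code).
freq_effect = {0: rng.standard_normal(n_voxels), 1: rng.standard_normal(n_voxels)}
am_effect = {0: rng.standard_normal(n_voxels), 1: rng.standard_normal(n_voxels)}

def patterns(freq_label, am_label):
    """Simulated trial-by-voxel patterns: frequency + AM signatures + noise."""
    noise = 0.8 * rng.standard_normal((n_trials, n_voxels))
    return freq_effect[freq_label] + am_effect[am_label] + noise

# Train the frequency classifier at AM rate 0, test it at AM rate 1.
X_train = np.vstack([patterns(0, 0), patterns(1, 0)])
X_test = np.vstack([patterns(0, 1), patterns(1, 1)])
y = np.array([0] * n_trials + [1] * n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print(f"cross-AM frequency decoding accuracy: {clf.score(X_test, y):.2f}")
```

An integrated code, by contrast, would show good decoding within each AM rate but poor transfer across rates.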
Affiliation(s)
- Ediz Sohoglu
- School of Psychology, University of Sussex, Brighton, BN1 9QH, United Kingdom.
- Sukhbinder Kumar
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
- Maria Chait
- Ear Institute, University College London, London, United Kingdom
- Timothy D Griffiths
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
15. Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:e55300. [PMID: 32618270] [PMCID: PMC7410487] [DOI: 10.7554/elife.55300]
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners’ auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.

It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to ‘see’ and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 and 78 were asked to listen to speech in a noisy setting while they underwent fMRI. The researchers then tested a computer model using the data. In the older individuals, the brain’s tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person’s specific needs.
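Broader temporal-rate tuning of the kind reported above can be quantified by fitting a Gaussian tuning curve over log modulation rate to each voxel's responses and comparing the fitted width. The toy fit below (scipy curve_fit on simulated responses) illustrates the idea; the study's encoding model and features are more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_tuning(log_rate, amp, center, width, baseline):
    """Gaussian tuning curve over log2 temporal modulation rate."""
    return baseline + amp * np.exp(-0.5 * ((log_rate - center) / width) ** 2)

rates_hz = np.array([1., 2., 4., 8., 16., 32.])
log_rates = np.log2(rates_hz)
rng = np.random.default_rng(7)

def simulated_voxel(true_width):
    """Voxel tuned to 4 Hz with a given tuning width (in octaves), plus noise."""
    resp = gaussian_tuning(log_rates, 1.0, np.log2(4), true_width, 0.1)
    return resp + 0.05 * rng.standard_normal(rates_hz.size)

for label, true_width in [("narrowly tuned voxel", 0.6), ("broadly tuned voxel", 1.5)]:
    popt, _ = curve_fit(gaussian_tuning, log_rates, simulated_voxel(true_width),
                        p0=[1.0, np.log2(4), 1.0, 0.0])
    print(f"{label}: fitted tuning width = {abs(popt[2]):.2f} octaves")
```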
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
16. Besle J, Mougin O, Sánchez-Panchuelo RM, Lanting C, Gowland P, Bowtell R, Francis S, Krumbholz K. Is Human Auditory Cortex Organization Compatible With the Monkey Model? Contrary Evidence From Ultra-High-Field Functional and Structural MRI. Cereb Cortex 2020; 29:410-428. [PMID: 30357410] [PMCID: PMC6294415] [DOI: 10.1093/cercor/bhy267]
Abstract
It is commonly assumed that the human auditory cortex is organized similarly to that of macaque monkeys, where the primary region, or "core," is elongated parallel to the tonotopic axis (main direction of tonotopic gradients), and subdivided across this axis into up to 3 distinct areas (A1, R, and RT), with separate, mirror-symmetric tonotopic gradients. This assumption, however, has not been tested until now. Here, we used high-resolution ultra-high-field (7 T) magnetic resonance imaging (MRI) to delineate the human core and map tonotopy in 24 individual hemispheres. In each hemisphere, we assessed tonotopic gradients using principled, quantitative analysis methods, and delineated the core using 2 independent (functional and structural) MRI criteria. Our results indicate that, contrary to macaques, the human core is elongated perpendicular rather than parallel to the main tonotopic axis, and that this axis contains no more than 2 mirror-reversed gradients within the core region. Previously suggested homologies between these gradients and areas A1 and R in macaques were not supported. Our findings suggest fundamental differences in auditory cortex organization between humans and macaques.
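The "main tonotopic axis" analysis above boils down to assigning each vertex a best frequency and estimating the local direction of the resulting gradient; a flattened 2-D toy version is sketched below (real analyses operate on the folded cortical surface and use the authors' principled gradient statistics).

```python
import numpy as np

# Toy flattened cortical patch: a 40 x 40 grid of vertices responding to five
# tone frequencies, with a built-in low-to-high frequency gradient along x.
freqs_hz = np.array([250., 500., 1000., 2000., 4000.])
ny, nx = 40, 40
x = np.linspace(0.0, 1.0, nx)
rng = np.random.default_rng(8)

preferred = 250.0 * 16.0 ** x                        # preferred frequency per column
responses = np.empty((freqs_hz.size, ny, nx))
for i, f in enumerate(freqs_hz):
    tuning = np.exp(-0.5 * np.log2(f / preferred) ** 2)   # 1-octave Gaussian tuning
    responses[i] = tuning + 0.1 * rng.standard_normal((ny, nx))

# Best frequency per vertex (in octaves), then the local tonotopic gradient.
best_freq = np.log2(freqs_hz)[np.argmax(responses, axis=0)]
gy, gx = np.gradient(best_freq)
mean_angle = np.degrees(np.arctan2(gy.mean(), gx.mean()))
print(f"mean tonotopic gradient direction: {mean_angle:.1f} degrees from the x-axis")
```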
Affiliation(s)
- Julien Besle
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Psychology, American University of Beirut, Riad El-Solh, Beirut, Lebanon
- Olivier Mougin
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Rosa-María Sánchez-Panchuelo
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Cornelis Lanting
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Otorhinolaryngology, Radboud University Medical Center, University of Nijmegen, Nijmegen, Netherlands
- Penny Gowland
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Richard Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Susan Francis
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Katrin Krumbholz
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK
17. Direct electrophysiological mapping of human pitch-related processing in auditory cortex. Neuroimage 2019; 202:116076. [PMID: 31401239] [DOI: 10.1016/j.neuroimage.2019.116076]
Abstract
This work sought correlates of pitch perception, defined by neural activity above the lower limit of pitch (LLP), in auditory cortical neural ensembles, and examined their topographical distribution. Local field potentials (LFPs) were recorded in eight patients undergoing invasive recordings for pharmaco-resistant epilepsy. Stimuli consisted of bursts of broadband noise followed by regular interval noise (RIN). RIN was presented at rates below and above the LLP to distinguish responses related to the regularity of the stimulus and the presence of pitch itself. LFPs were recorded from human cortical homologues of auditory core, belt, and parabelt regions using multicontact depth electrodes implanted in Heschl's gyrus (HG) and Planum Temporale (PT), and subdural grid electrodes implanted over lateral superior temporal gyrus (STG). Evoked responses corresponding to the temporal regularity of the stimulus were assessed using autocorrelation of the evoked responses, and occurred for stimuli below and above the LLP. Induced responses throughout the high gamma range (60-200 Hz) were present for pitch values above the LLP, with onset latencies of approximately 70 ms. Mapping of the induced responses onto a common brain space demonstrated variability in the topographical distribution of high gamma responses across subjects. Induced responses were present throughout the length of HG and on PT, which is consistent with previous functional neuroimaging studies. Moreover, in each subject, a region within lateral STG showed robust induced responses at pitch-evoking stimulus rates. This work suggests a distributed representation of pitch processing in neural ensembles in human homologues of core and non-core auditory cortex.
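Regular interval noise (RIN), the pitch-evoking stimulus above, is conventionally generated by repeatedly adding a delayed copy of broadband noise back onto itself, which introduces temporal regularity at the delay (pitch roughly equals 1/delay). The generator and autocorrelation check below use assumed parameters, not the study's stimulus settings.

```python
import numpy as np

def regular_interval_noise(f0_hz=125.0, n_iter=16, duration_s=0.2, fs=44100, rng=None):
    """Delay-and-add construction of regular interval noise: a copy delayed by
    1/f0 seconds is added back n_iter times, building regularity at that lag."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(int(duration_s * fs))
    delay = int(round(fs / f0_hz))
    for _ in range(n_iter):
        delayed = np.zeros_like(x)
        delayed[delay:] = x[:-delay]
        x = x + delayed
    return x / np.max(np.abs(x))

fs = 44100
rin = regular_interval_noise(f0_hz=125.0, rng=np.random.default_rng(9))
acf = np.correlate(rin, rin, mode="full")[rin.size - 1:]   # autocorrelation, lags >= 0
peak_lag = np.argmax(acf[100:2000]) + 100                  # skip the zero-lag peak
print(f"autocorrelation peak at ~{fs / peak_lag:.1f} Hz (target 125 Hz)")
```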
18. Kikuchi Y, Kumar S, Baumann S, Overath T, Gander PE, Sedley W, Patterson RD, Petkov CI, Griffiths TD. The distribution and nature of responses to broadband sounds associated with pitch in the macaque auditory cortex. Cortex 2019; 120:340-352. [PMID: 31401401] [DOI: 10.1016/j.cortex.2019.07.005]
Abstract
The organisation of pitch-perception mechanisms in the primate cortex is controversial, in that divergent results have been obtained, ranging from a single circumscribed 'pitch centre' to systems widely distributed across auditory cortex. Possible reasons for such discrepancies include different species, recording techniques, pitch stimuli, sampling of auditory fields, and the neural metrics recorded. In the present study, we sought to bridge some of these divisions by examining activity related to pitch in both neurons and neuronal ensembles within the auditory cortex of the rhesus macaque, a primate species with similar pitch perception and auditory cortical organisation to humans. We demonstrate similar responses, in primary and non-primary auditory cortex, to two different types of broadband pitch above the macaque lower limit in both neurons and local field potential (LFP) gamma oscillations. The majority of broadband pitch responses in neurons and LFP sites did not show equivalent tuning for sine tones.
Affiliation(s)
- Yukiko Kikuchi
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK.
- Sukhbinder Kumar
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Trust Centre for Neuroimaging, University College London, UK
- Simon Baumann
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- William Sedley
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK
- Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Trust Centre for Neuroimaging, University College London, UK; Department of Neurosurgery, University of Iowa, Iowa City, USA.
19. Norman-Haignere SV, Kanwisher N, McDermott JH, Conway BR. Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones. Nat Neurosci 2019; 22:1057-1060. [PMID: 31182868] [PMCID: PMC6592717] [DOI: 10.1038/s41593-019-0410-7]
Abstract
We report a difference between humans and macaque monkeys in the functional organization of cortical regions implicated in pitch perception: humans but not macaques showed regions with a strong preference for harmonic sounds compared to noise, measured with both synthetic tones and macaque vocalizations. In contrast, frequency-selective tonotopic maps were similar between the two species. This species difference may be driven by the unique demands of speech and music perception in humans.
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Institute for Mind, Brain and Behavior, Columbia University, New York, NY, USA; Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; HHMI Postdoctoral Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, CNRS, Paris, France
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; McGovern Institute for Brain Research, Cambridge, MA, USA; Center for Minds, Brains and Machines, Cambridge, MA, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; McGovern Institute for Brain Research, Cambridge, MA, USA; Center for Minds, Brains and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Bevil R Conway
- Laboratory of Sensorimotor Research, NEI, NIH, Bethesda, MD, USA; National Institute of Mental Health, NIH, Bethesda, MD, USA; National Institute of Neurological Disease and Stroke, NIH, Bethesda, MD, USA
20. Hajizadeh A, Matysiak A, May PJC, König R. Explaining event-related fields by a mechanistic model encapsulating the anatomical structure of auditory cortex. Biol Cybern 2019; 113:321-345. [PMID: 30820663] [PMCID: PMC6510841] [DOI: 10.1007/s00422-019-00795-9]
Abstract
Event-related fields of the magnetoencephalogram are triggered by sensory stimuli and appear as a series of waves extending hundreds of milliseconds after stimulus onset. They reflect the processing of the stimulus in cortex and have a highly subject-specific morphology. However, we still have an incomplete picture of how event-related fields are generated, what the various waves signify, and why they are so subject-specific. Here, we focus on this problem through the lens of a computational model which describes auditory cortex in terms of interconnected cortical columns as part of hierarchically placed fields of the core, belt, and parabelt areas. We develop an analytical approach arriving at solutions to the system dynamics in terms of normal modes: damped harmonic oscillators emerging out of the coupled excitation and inhibition in the system. Each normal mode is a global feature which depends on the anatomical structure of the entire auditory cortex. Further, normal modes are fundamental dynamical building blocks, in that the activity of each cortical column represents a combination of all normal modes. This approach allows us to replicate a typical auditory event-related response as a weighted sum of the single-column activities. Our work offers an alternative to the view that the event-related field arises out of spatially discrete, local generators. Rather, there is only a single generator process distributed over the entire network of the auditory cortex. We present predictions for testing to what degree subject-specificity is due to cross-subject variations in dynamical parameters rather than in the cortical surface morphology.
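The normal-mode picture above can be made concrete for any linearized network: the eigenvectors of the effective coupling matrix define the modes, each eigenvalue sets a mode's decay rate and oscillation frequency, and every column's response is a weighted sum of all modes. The numeric sketch below uses a random stable coupling matrix as a stand-in for the model's core-belt-parabelt connectivity.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 12                                              # number of cortical columns (toy)

# Linearized dynamics du/dt = A u; a random matrix shifted to be stable stands
# in for the model's coupled excitatory/inhibitory connectivity.
A = 0.4 * rng.standard_normal((n, n)) - 2.0 * np.eye(n)

eigvals, eigvecs = np.linalg.eig(A)                 # normal modes of the network
u0 = rng.standard_normal(n)                         # state just after a brief stimulus
coeffs = np.linalg.solve(eigvecs, u0)               # mode amplitudes for this input

t = np.linspace(0.0, 5.0, 500)
modes = np.exp(np.outer(t, eigvals))                # each mode decays/oscillates in time
u_t = np.real((modes * coeffs) @ eigvecs.T)         # (time x column) response traces

print("mode decay rates:", np.round(-eigvals.real, 2))
print("mode frequencies (rad/s):", np.round(np.abs(eigvals.imag), 2))
```

Because the modes depend on the whole coupling matrix, each one is a global property of the network rather than a local generator, which is the point made in the abstract.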
Affiliation(s)
- Aida Hajizadeh
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Artur Matysiak
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Patrick J. C. May
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF UK
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Reinhard König
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
21. Maffei C, Sarubbo S, Jovicich J. A Missing Connection: A Review of the Macrostructural Anatomy and Tractography of the Acoustic Radiation. Front Neuroanat 2019; 13:27. [PMID: 30899216] [PMCID: PMC6416820] [DOI: 10.3389/fnana.2019.00027]
Abstract
The auditory system of mammals is dedicated to encoding, elaborating and transporting acoustic information from the auditory nerve to the auditory cortex. The acoustic radiation (AR) constitutes the thalamo-cortical projection of this system, conveying the auditory signals from the medial geniculate nucleus (MGN) of the thalamus to the transverse temporal gyrus on the superior temporal lobe. While representing one of the major sensory pathways of the primate brain, the currently available anatomical information of this white matter bundle is quite limited in humans, thus constituting a notable omission in clinical and general studies on auditory processing and language perception. Tracing procedures in humans have restricted applications, and the in vivo reconstruction of this bundle using diffusion tractography techniques remains challenging. Hence, a more accurate and reliable reconstruction of the AR is necessary for understanding the neurobiological substrates supporting audition and language processing mechanisms in both health and disease. This review aims to unite available information on the macroscopic anatomy and topography of the AR in humans and non-human primates. Particular attention is brought to the anatomical characteristics that make this bundle difficult to reconstruct using non-invasive techniques, such as diffusion-based tractography. Open questions in the field and possible future research directions are discussed.
Collapse
Affiliation(s)
- Chiara Maffei
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, United States.,Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
| | - Silvio Sarubbo
- Division of Neurosurgery, Structural and Functional Connectivity Lab Project, S. Chiara Hospital, Trento Azienda Provinciale per i Servizi Sanitari (APSS), Trento, Italy
| | - Jorge Jovicich
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy.,Department of Psychology and Cognitive Sciences, University of Trento, Trento, Italy
| |
Collapse
|
22
|
Schneider F, Dheerendra P, Balezeau F, Ortiz-Rios M, Kikuchi Y, Petkov CI, Thiele A, Griffiths TD. Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey. Sci Rep 2018; 8:17948. [PMID: 30560879 PMCID: PMC6298974 DOI: 10.1038/s41598-018-36903-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 11/14/2018] [Indexed: 01/08/2023] Open
Abstract
Segregating the key features of the natural world within crowded visual or sound scenes is a critical aspect of everyday perception. The neurobiological bases for auditory figure-ground segregation are poorly understood. We demonstrate that macaques perceive an acoustic figure-ground stimulus with comparable performance to humans using a neural system that involves high-level auditory cortex, localised to the rostral belt and parabelt.
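The figure-ground stimuli used in this line of work are typically sequences of random multi-tone chords in which a "figure" of frequency components repeats coherently across chords. The sketch below generates a rough stimulus of that kind; chord duration, number of components, and frequency range are hypothetical choices, not the study's exact parameters.
```python
import numpy as np

fs = 44100
chord_dur = 0.05                                   # 50-ms chords (hypothetical value)
n_chords = 40
freq_pool = np.geomspace(200, 7000, 60)            # candidate tone frequencies
rng = np.random.default_rng(1)

def chord(freqs, dur, fs):
    t = np.arange(int(dur * fs)) / fs
    y = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    ramp = int(0.005 * fs)                         # raised-cosine ramps to avoid clicks
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

figure_freqs = rng.choice(freq_pool, size=4, replace=False)   # coherent "figure" components
stim = []
for i in range(n_chords):
    background = rng.choice(freq_pool, size=10, replace=False)
    # first half: background only; second half: figure components added to every chord
    freqs = background if i < n_chords // 2 else np.concatenate([background, figure_freqs])
    stim.append(chord(freqs, chord_dur, fs))
stim = np.concatenate(stim)
stim /= np.max(np.abs(stim))                       # normalize before playback/saving
```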
Collapse
Affiliation(s)
- Felix Schneider
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom.
| | - Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom.
| | - Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| | - Michael Ortiz-Rios
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| | - Yukiko Kikuchi
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| | - Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| | - Alexander Thiele
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| | - Timothy D Griffiths
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
| |
Collapse
|
23
|
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 12/13/2018] [Accepted: 11/08/2018] [Indexed: 11/19/2022] Open
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex, in which spectrogram-like peripheral input is processed by linear spectrotemporal filters, can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
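The model-matching logic is easiest to see for a purely linear model: any stimulus that differs from a natural stimulus only within the null space of the feature map produces an identical model response. The sketch below illustrates that idea with random matrices standing in for spectrotemporal filters; the actual study synthesizes matched sounds iteratively for nonlinear cochlear-model features, which this toy example does not reproduce.
```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n_features, n_dims = 40, 200                      # hypothetical model size / stimulus dimensionality
F = rng.standard_normal((n_features, n_dims))     # stand-in for linear spectrotemporal filters
s_natural = rng.standard_normal(n_dims)           # stand-in for a (vectorized) natural stimulus

# Any perturbation in the null space of F leaves the model response unchanged,
# so s_matched is "model-matched" to s_natural while differing physically.
N = null_space(F)                                 # orthonormal basis of null(F)
perturb = N @ rng.standard_normal(N.shape[1])
s_matched = s_natural + 3.0 * perturb

assert np.allclose(F @ s_natural, F @ s_matched)  # identical model responses
print("stimulus difference (L2):", np.linalg.norm(s_matched - s_natural))
```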
Collapse
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, ENS, PSL University, CNRS, Paris, France
| | - Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
24
|
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2018] [Revised: 08/08/2018] [Accepted: 09/05/2018] [Indexed: 01/01/2023] Open
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferably encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
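The temporal and spectral modulation content the authors model can be quantified, in a simplified way, by taking a 2D Fourier transform of a log-spectrogram. The sketch below does this for a synthetic 3 Hz amplitude-modulated tone; it is not the study's encoding/decoding pipeline, and the signal and spectrogram settings are hypothetical.
```python
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
sound = np.sin(2 * np.pi * 1000 * t) * (1 + 0.8 * np.sin(2 * np.pi * 3 * t))  # 1-kHz tone, 3-Hz AM

# Log-magnitude spectrogram as a rough stand-in for a cochleagram
f, frames, S = spectrogram(sound, fs=fs, nperseg=512, noverlap=384)
logS = np.log(S + 1e-10)

# 2D FFT of the mean-removed spectrogram: joint spectral x temporal modulation content
M = np.abs(np.fft.fftshift(np.fft.fft2(logS - logS.mean())))
temp_mod = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=frames[1] - frames[0]))

# Collapse over spectral modulations to get energy per temporal modulation rate
profile = M.sum(axis=0)
nz = temp_mod != 0                                 # ignore the DC term
peak = abs(temp_mod[nz][np.argmax(profile[nz])])
print(f"dominant temporal modulation rate: {peak:.1f} Hz (expected near 3 Hz)")
```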
Collapse
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
| | | | - Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
| | - Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
| | - Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), MD Maastricht, The Netherlands
| |
Collapse
|
25
|
Sound Frequency Representation in the Auditory Cortex of the Common Marmoset Visualized Using Optical Intrinsic Signal Imaging. eNeuro 2018; 5:eN-NWR-0078-18. [PMID: 29736410 PMCID: PMC5937112 DOI: 10.1523/eneuro.0078-18.2018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Revised: 03/27/2018] [Accepted: 03/29/2018] [Indexed: 11/21/2022] Open
Abstract
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5-16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions.
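A preferred-frequency map of the kind described here can be summarized, at its simplest, by taking the stimulus frequency that evokes the largest response at each pixel. The toy sketch below does exactly that on simulated optical-imaging response maps; the image size, noise level, and underlying gradient are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(14)

# Toy optical-imaging dataset: evoked response maps to band-pass noise centered at
# octave-spaced frequencies, as in a frequency-preference mapping experiment
freqs_khz = np.array([0.5, 1, 2, 4, 8, 16])
h, w = 40, 60                                       # hypothetical image size (pixels)
true_pref = np.linspace(0, len(freqs_khz) - 1, w)   # smooth tonotopic gradient along one axis
responses = np.stack([
    np.exp(-0.5 * (true_pref[None, :] - i) ** 2) + 0.1 * rng.standard_normal((h, w))
    for i in range(len(freqs_khz))
], axis=-1)                                         # shape (h, w, n_conditions)

# Preferred-frequency map: the stimulus frequency evoking the strongest response per pixel
pref_map = freqs_khz[np.argmax(responses, axis=-1)]
print("preferred frequencies along one row of the gradient (kHz):")
print(pref_map[h // 2, ::10])
```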
Collapse
|
26
|
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. [PMID: 29643021 DOI: 10.1016/j.heares.2018.03.027] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 03/21/2018] [Accepted: 03/28/2018] [Indexed: 12/01/2022]
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway in the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and change, as well as Category x Initial Position and Category x Change in Position interactions, while right lateral areas showed a main effect of category and a Category x Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in lateral belt areas and tracks sound objects across space.
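A minimal way to quantify a repetition suppression effect is to contrast block-averaged responses to repeated versus non-repeated sound objects across runs. The sketch below shows that contrast on simulated beta estimates; design sizes and effect sizes are hypothetical, and the real analysis involves full GLM modelling of the interactions described above.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_runs, n_voxels = 12, 500                              # hypothetical design

# Simulated block-averaged responses (e.g., beta estimates) per run and voxel
beta_same = rng.normal(1.0, 0.3, (n_runs, n_voxels))    # repeated sound object within block
beta_diff = rng.normal(1.2, 0.3, (n_runs, n_voxels))    # different sound objects within block

# Repetition suppression: weaker response to repeats than to non-repeats
rs_index = beta_diff.mean(0) - beta_same.mean(0)
t, p = stats.ttest_rel(beta_diff, beta_same, axis=0)    # paired test across runs, per voxel
suppressed = (rs_index > 0) & (p < 0.05)
print(f"{suppressed.sum()} of {n_voxels} voxels show a repetition suppression effect")
```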
Collapse
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
| | - Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
| | - Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
| |
Collapse
|
27
|
Oya H, Gander PE, Petkov CI, Adolphs R, Nourski KV, Kawasaki H, Howard MA, Griffiths TD. Neural phase locking predicts BOLD response in human auditory cortex. Neuroimage 2018; 169:286-301. [PMID: 29274745 PMCID: PMC6139034 DOI: 10.1016/j.neuroimage.2017.12.051] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Revised: 11/22/2017] [Accepted: 12/16/2017] [Indexed: 11/16/2022] Open
Abstract
Natural environments elicit both phase-locked and non-phase-locked neural responses to the stimulus in the brain. The interpretation of the BOLD signal to date has been based on its association with the non-phase-locked power of high-frequency local field potentials (LFPs), or with the related spiking activity in single neurons or groups of neurons. Previous studies have not examined the prediction of the BOLD signal by phase-locked responses. We examined the relationship between the BOLD response and LFPs in the same nine human subjects from multiple corresponding points in the auditory cortex, using amplitude-modulated pure-tone stimuli of a duration sufficient to allow an analysis of phase locking during the sustained time period without contamination from the onset response. The results demonstrate that both phase locking at the modulation frequency and its harmonics, and the oscillatory power in gamma/high-gamma bands are required to predict the BOLD response. Biophysical models of BOLD signal generation in auditory cortex therefore require revision and the incorporation of both phase locking to rhythmic sensory stimuli and power changes in the ensemble neural activity.
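The two predictors discussed here, phase locking at the stimulus modulation frequency and non-phase-locked high-gamma power, can both be computed from trial-wise field potentials and entered into a simple regression against a BOLD-like measure. The sketch below does this on simulated data; the sampling rate, modulation rate, and the simulated BOLD relationship are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(4)
fs, dur, n_sites, n_trials = 1000, 1.0, 20, 60
t = np.arange(int(fs * dur)) / fs
f_mod = 40.0                                     # AM rate of the tone stimuli (hypothetical)

# Simulated LFPs per recording site: phase-locked 40-Hz component plus broadband noise
lfp = rng.standard_normal((n_sites, n_trials, t.size))
lock_strength = rng.uniform(0, 1, n_sites)
lfp += lock_strength[:, None, None] * np.sin(2 * np.pi * f_mod * t)

# Phase locking at the modulation frequency: consistency of the Fourier phase across trials
k = int(round(f_mod * dur))                      # FFT bin of the modulation frequency
X = np.fft.rfft(lfp, axis=-1)[..., k]
plv = np.abs(np.mean(X / np.abs(X), axis=1))     # one phase-locking value per site

# Non-phase-locked high-gamma power (70-150 Hz) of the trial-mean-subtracted signal
induced = lfp - lfp.mean(axis=1, keepdims=True)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 70) & (freqs <= 150)
hg = np.mean(np.abs(np.fft.rfft(induced, axis=-1))[..., band] ** 2, axis=(1, 2))
hg_z = (hg - hg.mean()) / hg.std()

# Does a combination of both measures predict the (simulated) BOLD response per site?
bold = 0.6 * plv + 0.4 * hg_z + 0.1 * rng.standard_normal(n_sites)
design = np.column_stack([np.ones(n_sites), plv, hg_z])
coef, *_ = np.linalg.lstsq(design, bold, rcond=None)
print("regression weights (intercept, PLV, high-gamma):", np.round(coef, 2))
```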
Collapse
Affiliation(s)
- Hiroyuki Oya
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA.
| | - Phillip E Gander
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
| | | | - Ralph Adolphs
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
| | - Kirill V Nourski
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
| | - Matthew A Howard
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
| | - Timothy D Griffiths
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, UK
| |
Collapse
|
28
|
Häkkinen S, Rinne T. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex. Brain Struct Funct 2018; 223:2113-2127. [DOI: 10.1007/s00429-018-1612-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Accepted: 01/14/2018] [Indexed: 12/29/2022]
|
29
|
Scott BH, Leccese PA, Saleem KS, Kikuchi Y, Mullarkey MP, Fukushima M, Mishkin M, Saunders RC. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey. Cereb Cortex 2018; 27:809-840. [PMID: 26620266 DOI: 10.1093/cercor/bhv277] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex.
Collapse
Affiliation(s)
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Paul A Leccese
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Kadharbatcha S Saleem
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Yukiko Kikuchi
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA.,Present address: Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne NE2 4HH, UK
| | - Matthew P Mullarkey
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Makoto Fukushima
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| | - Richard C Saunders
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health (NIMH/NIH), Bethesda, MD 20892, USA
| |
Collapse
|
30
|
Ayala YA, Lehmann A, Merchant H. Monkeys share the neurophysiological basis for encoding sound periodicities captured by the frequency-following response with humans. Sci Rep 2017; 7:16687. [PMID: 29192170 PMCID: PMC5709359 DOI: 10.1038/s41598-017-16774-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2017] [Accepted: 11/17/2017] [Indexed: 11/09/2022] Open
Abstract
The extraction and encoding of acoustical temporal regularities are fundamental for human cognitive auditory abilities such as speech or beat entrainment. Because the comparison of the neural sensitivity to temporal regularities between human and animals is fundamental to relate non-invasive measures of auditory processing to their neuronal basis, here we compared the neural representation of auditory periodicities between human and non-human primates by measuring scalp-recorded frequency-following response (FFR). We found that rhesus monkeys can resolve the spectrotemporal structure of periodic stimuli to a similar extent as humans by exhibiting a homologous FFR potential to the speech syllable /da/. The FFR in both species is robust and phase-locked to the fundamental frequency of the sound, reflecting an effective neural processing of the fast-periodic information of subsyllabic cues. Our results thus reveal a conserved neural ability to track acoustical regularities within the primate order. These findings open the possibility to study the neurophysiology of complex sound temporal processing in the macaque subcortical and cortical areas, as well as the associated experience-dependent plasticity across the auditory pathway in behaving monkeys.
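The frequency-following response is obtained by averaging many trials, which preserves only activity phase-locked to the stimulus, and then reading out spectral amplitude at the fundamental frequency and its harmonics. A toy version with simulated scalp data is sketched below; the sampling rate, fundamental, and trial count are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(5)
fs = 2000                                    # hypothetical EEG/scalp sampling rate
f0 = 100.0                                   # hypothetical fundamental of the vowel portion of /da/
t = np.arange(0, 0.15, 1 / fs)               # 150-ms sustained portion
n_trials = 2000

# Simulated single-trial responses: small phase-locked F0/harmonic components buried in noise
signal = 0.05 * np.sin(2 * np.pi * f0 * t) + 0.02 * np.sin(2 * np.pi * 2 * f0 * t)
trials = signal + rng.standard_normal((n_trials, t.size))

ffr = trials.mean(axis=0)                    # averaging keeps only the phase-locked response
spec = np.abs(np.fft.rfft(ffr)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for target in (f0, 2 * f0):
    idx = np.argmin(np.abs(freqs - target))
    print(f"FFR amplitude at {target:.0f} Hz: {spec[idx]:.4f} (a.u.)")
```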
Collapse
Affiliation(s)
- Yaneri A Ayala
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, Mexico.
| | - Alexandre Lehmann
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, QC, Canada.,International Laboratory for Brain, Music and Sound Research (BRAMS), Center for Research on Brain, Language and Music (CRBLM), Pavillon 1420, Montreal, QC H3C 3J7, Canada.,Department of Psychology, University of Montreal, Montreal, QC, Canada
| | - Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, Mexico.
| |
Collapse
|
31
|
Wingfield C, Su L, Liu X, Zhang C, Woodland P, Thwaites A, Fonteneau E, Marslen-Wilson WD. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem. PLoS Comput Biol 2017; 13:e1005617. [PMID: 28945744 PMCID: PMC5612454 DOI: 10.1371/journal.pcbi.1005617] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2016] [Accepted: 06/12/2017] [Indexed: 01/06/2023] Open
Abstract
There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental 'machine states', generated as the ASR analysis progresses over time, to the incremental 'brain states', measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain.
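Comparing "machine states" with "brain states" can be framed as representational similarity analysis: build a dissimilarity matrix over items for each system and correlate the two. The sketch below shows that comparison on random stand-in data; it is a generic RSA illustration rather than the paper's spatiotemporal searchlight procedure.
```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_items = 40                                  # hypothetical number of speech segments

# Stand-ins for incremental "machine states" (ASR-derived phonetic features)
machine_states = rng.standard_normal((n_items, 25))
# Stand-ins for "brain states" (EMEG response patterns in a temporal-cortex patch),
# partly driven by the same features plus noise
brain_states = 0.5 * machine_states @ rng.standard_normal((25, 100)) \
               + rng.standard_normal((n_items, 100))

# Representational dissimilarity matrices (condensed form) and their rank correlation
rdm_machine = pdist(machine_states, metric="correlation")
rdm_brain = pdist(brain_states, metric="correlation")
rho, p = spearmanr(rdm_machine, rdm_brain)
print(f"machine-brain RDM correlation: rho={rho:.2f}, p={p:.1e}")
```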
Collapse
Affiliation(s)
- Cai Wingfield
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, University of Lancaster, Lancaster, United Kingdom
- * E-mail: (CW); (LS)
| | - Li Su
- China–UK Centre for Cognition and Ageing Research, Faculty of Psychology, Southwest University, Chongqing, China
- Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- * E-mail: (CW); (LS)
| | - Xunying Liu
- Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Chao Zhang
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Phil Woodland
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Andrew Thwaites
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| | - Elisabeth Fonteneau
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| | - William D. Marslen-Wilson
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| |
Collapse
|
32
|
Nourski KV, Banks MI, Steinschneider M, Rhone AE, Kawasaki H, Mueller RN, Todd MM, Howard MA. Electrocorticographic delineation of human auditory cortical fields based on effects of propofol anesthesia. Neuroimage 2017; 152:78-93. [PMID: 28254512 PMCID: PMC5432407 DOI: 10.1016/j.neuroimage.2017.02.061] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Revised: 02/13/2017] [Accepted: 02/21/2017] [Indexed: 12/20/2022] Open
Abstract
The functional organization of human auditory cortex remains incompletely characterized. While the posteromedial two thirds of Heschl's gyrus (HG) is generally considered to be part of core auditory cortex, additional subdivisions of HG remain speculative. To further delineate the hierarchical organization of human auditory cortex, we investigated regional heterogeneity in the modulation of auditory cortical responses under varying depths of anesthesia induced by propofol. Non-invasive studies have shown that propofol differentially affects auditory cortical activity, with a greater impact on non-core areas. Subjects were neurosurgical patients undergoing removal of intracranial electrodes placed to identify epileptic foci. Stimuli were 50Hz click trains, presented continuously during an awake baseline period, and subsequently, while propofol infusion was incrementally titrated to induce general anesthesia. Electrocorticographic recordings were made with depth electrodes implanted in HG and subdural grid electrodes implanted over superior temporal gyrus (STG). Depth of anesthesia was monitored using spectral entropy. Averaged evoked potentials (AEPs), frequency-following responses (FFRs) and high gamma (70-150Hz) event-related band power were used to characterize auditory cortical activity. Based on the changes in AEPs and FFRs during the induction of anesthesia, posteromedial HG could be divided into two subdivisions. In the most posteromedial aspect of the gyrus, the earliest AEP deflections were preserved and FFRs increased during induction. In contrast, the remainder of the posteromedial HG exhibited attenuation of both the AEP and the FFR. The anterolateral HG exhibited weaker activation characterized by broad, low-voltage AEPs and the absence of FFRs. Lateral STG exhibited limited activation by click trains, and FFRs there diminished during induction. Sustained high gamma activity was attenuated in the most posteromedial portion of HG, and was absent in all other regions. These differential patterns of auditory cortical activity during the induction of anesthesia may serve as useful physiological markers for field delineation. In this study, the posteromedial HG could be parcellated into at least two subdivisions. Preservation of the earliest AEP deflections and FFRs in the posteromedial HG likely reflects the persistence of feedforward synaptic activity generated by inputs from subcortical auditory pathways, including the medial geniculate nucleus.
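Event-related band power in the high-gamma range is commonly estimated by band-pass filtering, taking the Hilbert envelope, and expressing power in dB relative to a pre-stimulus baseline. The sketch below implements that generic recipe on simulated trials; the band limits, baseline window, and recording parameters are hypothetical, and the function is not the authors' exact pipeline.
```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_erbp(ecog, fs, band=(70, 150), baseline=(0.0, 0.2)):
    """Event-related band power (dB re. pre-stimulus baseline) of band-limited ECoG.

    ecog : array, shape (n_trials, n_samples)
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, ecog, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2           # instantaneous band power
    b0, b1 = (int(x * fs) for x in baseline)
    base = power[:, b0:b1].mean(axis=-1, keepdims=True)
    return 10 * np.log10(power / base).mean(axis=0)           # average across trials

# Toy usage with simulated trials (hypothetical recording parameters)
rng = np.random.default_rng(7)
fs, n_trials, n_samples = 1000, 50, 1500
ecog = rng.standard_normal((n_trials, n_samples))
ecog[:, 500:900] += 2 * rng.standard_normal((n_trials, 400))  # broadband increase after "onset"
erbp = high_gamma_erbp(ecog, fs)
print("peak high-gamma ERBP (dB):", erbp.max().round(1))
```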
Collapse
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA.
| | - Matthew I Banks
- Department of Anesthesiology, University of Wisconsin - Madison, Madison, WI, USA
| | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Rashmi N Mueller
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA
| | - Michael M Todd
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA; Department of Anesthesiology, University of Minnesota, Minneapolis, MN, USA
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| |
Collapse
|
33
|
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. [PMID: 28472038 PMCID: PMC5417421 DOI: 10.1371/journal.pbio.2001379] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 04/07/2017] [Indexed: 12/25/2022] Open
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
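One conventional way to ask whether a motion-specific process contributes beyond static spatial and spectrotemporal processes is a nested-model F-test: compare a GLM with and without the motion regressor. The sketch below illustrates this on a simulated voxel; the regressors and effect sizes are hypothetical and the published analysis is more involved.
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_vols = 240                                       # hypothetical number of fMRI volumes

# Hypothetical regressors (assumed already convolved with an HRF)
static = rng.standard_normal(n_vols)               # static spatial cues
spectemp = rng.standard_normal(n_vols)             # spectrotemporal energy
interaction = static * spectemp                    # their interaction
motion = rng.standard_normal(n_vols)               # motion-specific predictor

# Simulated voxel in posterior belt: partly driven by the motion-specific process
y = 0.8 * static + 0.5 * spectemp + 0.6 * motion + rng.standard_normal(n_vols)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2), X.shape[1]

X_reduced = np.column_stack([np.ones(n_vols), static, spectemp, interaction])
X_full = np.column_stack([X_reduced, motion])
rss_r, p_r = rss(X_reduced, y)
rss_f, p_f = rss(X_full, y)

# F-test: does the motion regressor explain variance beyond the reduced model?
F = ((rss_r - rss_f) / (p_f - p_r)) / (rss_f / (n_vols - p_f))
p_val = stats.f.sf(F, p_f - p_r, n_vols - p_f)
print(f"F({p_f - p_r},{n_vols - p_f}) = {F:.1f}, p = {p_val:.2e}")
```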
Collapse
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- * E-mail: (CP); (TDG)
| | - Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- * E-mail: (CP); (TDG)
| |
Collapse
|
34
|
High-Resolution fMRI of Auditory Cortical Map Changes in Unilateral Hearing Loss and Tinnitus. Brain Topogr 2017; 30:685-697. [DOI: 10.1007/s10548-017-0547-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Accepted: 01/18/2017] [Indexed: 12/19/2022]
|
35
|
Within brain area tractography suggests local modularity using high resolution connectomics. Sci Rep 2017; 7:39859. [PMID: 28054634 PMCID: PMC5213837 DOI: 10.1038/srep39859] [Citation(s) in RCA: 57] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2016] [Accepted: 11/29/2016] [Indexed: 12/19/2022] Open
Abstract
Previous structural brain connectivity studies have mainly focussed on the macroscopic scale of around 1,000 or fewer brain areas (network nodes). However, it has recently been demonstrated that high resolution structural connectomes of around 50,000 nodes can be generated reproducibly. In this study, we infer high resolution brain connectivity matrices using diffusion imaging data from the Human Connectome Project. With such high resolution we are able to analyse networks within brain areas in a single subject. We show that the global network has a scale invariant topological organisation, which means there is a hierarchical organisation of the modular architecture. Specifically, modules within brain areas are spatially localised. We find that long range connections terminate between specific modules, whilst short range connections via highly curved association fibers terminate within modules. We suggest that spatial locations of white matter modules overlap with cytoarchitecturally distinct grey matter areas and may serve as the structural basis for function specialisation within brain areas. Future studies might elucidate how brain diseases change this modular architecture within brain areas.
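Module detection within an area-level subgraph can be illustrated with standard community-detection tools. The sketch below builds a toy two-module graph and applies greedy modularity optimization with networkx; it is not the authors' high-resolution tractography pipeline, and the graph parameters are hypothetical.
```python
import networkx as nx
from networkx.algorithms import community

# Toy "within-area" connectome: two spatially localized modules, plus sparse
# long-range connections between them (a stochastic block model stand-in)
n_per_module, p_in, p_out = 50, 0.25, 0.02
blocks = [[p_in, p_out], [p_out, p_in]]
G = nx.stochastic_block_model([n_per_module, n_per_module], blocks, seed=1)

communities = community.greedy_modularity_communities(G)
Q = community.modularity(G, communities)
print(f"detected {len(communities)} modules, modularity Q = {Q:.2f}")

# Long-range edges should mostly terminate between modules, short-range ones within
node_to_mod = {n: i for i, c in enumerate(communities) for n in c}
between = sum(node_to_mod[u] != node_to_mod[v] for u, v in G.edges())
print(f"{between} of {G.number_of_edges()} edges run between modules")
```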
Collapse
|
36
|
Tonotopic representation of loudness in the human cortex. Hear Res 2016; 344:244-254. [PMID: 27915027 PMCID: PMC5256480 DOI: 10.1016/j.heares.2016.11.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 11/24/2016] [Accepted: 11/29/2016] [Indexed: 12/25/2022]
Abstract
A prominent feature of the auditory system is that neurons show tuning to audio frequency; each neuron has a characteristic frequency (CF) to which it is most sensitive. Furthermore, there is an orderly mapping of CF to position, which is called tonotopic organization and which is observed at many levels of the auditory system. In a previous study (Thwaites et al., 2016) we examined cortical entrainment to two auditory transforms predicted by a model of loudness, instantaneous loudness and short-term loudness, using speech as the input signal. The model is based on the assumption that neural activity is combined across CFs (i.e. across frequency channels) before the transform to short-term loudness. However, it is also possible that short-term loudness is determined on a channel-specific basis. Here we tested these possibilities by assessing neural entrainment to the overall and channel-specific instantaneous loudness and the overall and channel-specific short-term loudness. The results showed entrainment to channel-specific instantaneous loudness at latencies of 45 and 100 ms (bilaterally, in and around Heschl's gyrus). There was entrainment to overall instantaneous loudness at 165 ms in dorso-lateral sulcus (DLS). Entrainment to overall short-term loudness occurred primarily at 275 ms, bilaterally in DLS and superior temporal sulcus. There was only weak evidence for entrainment to channel-specific short-term loudness.
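Entrainment latency can be estimated, in its simplest form, by correlating a loudness time series with a cortical signal over a range of lags and taking the lag of maximal correlation. The sketch below does this with simulated signals; the sampling rate, delay, and noise level are hypothetical, and the published analysis uses a more elaborate matching procedure.
```python
import numpy as np

rng = np.random.default_rng(10)
fs = 250                                        # hypothetical MEG sampling rate (Hz)
dur = 60.0
n = int(fs * dur)

# Stand-in for the short-term loudness of continuous speech (slowly varying envelope)
loudness = np.convolve(rng.standard_normal(n), np.hanning(fs // 4), mode="same")

# Simulated cortical signal that follows loudness with a 275-ms delay, plus noise
delay = int(0.275 * fs)
meg = np.roll(loudness, delay) + 2.0 * rng.standard_normal(n)

# Correlation between loudness and the cortical signal at each candidate latency (0-500 ms)
lags = np.arange(0, int(0.5 * fs))
r = np.array([np.corrcoef(loudness[: n - lag], meg[lag:])[0, 1] for lag in lags])
best = lags[np.argmax(r)]
print(f"peak entrainment at {1000 * best / fs:.0f} ms (r = {r.max():.2f})")
```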
Collapse
|
37
|
Gardumi A, Ivanov D, Havlicek M, Formisano E, Uludağ K. Tonotopic maps in human auditory cortex using arterial spin labeling. Hum Brain Mapp 2016; 38:1140-1154. [PMID: 27790786 PMCID: PMC5324648 DOI: 10.1002/hbm.23444] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Revised: 09/27/2016] [Accepted: 10/11/2016] [Indexed: 11/08/2022] Open
Abstract
A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo‐continuous arterial spin labeling (pCASL) to map tonotopy and voice selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF‐based tonotopy and found a good agreement with BOLD signal‐based tonotopy, despite the lower contrast‐to‐noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high‐low‐high frequency gradients, co‐located to the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility weighted imaging was employed to investigate the tissue specificity of CBF and BOLD signal and the possible venous bias of BOLD‐based tonotopy. For BOLD only active voxels, we found a higher percentage of vein contamination than for CBF only active voxels. Taken together, we demonstrated that both baseline and stimulus‐induced CBF is an alternative fMRI approach to the standard BOLD signal to study auditory processing and delineate the functional organization of the auditory cortex.
Collapse
Affiliation(s)
- Anna Gardumi
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Dimo Ivanov
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Martin Havlicek
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Kâmil Uludağ
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
38
|
Tuning to Binaural Cues in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:37-53. [PMID: 26466943 DOI: 10.1007/s10162-015-0546-4] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 09/25/2015] [Indexed: 10/22/2022] Open
Abstract
Interaural level and time differences (ILD and ITD), the primary binaural cues for sound localization in azimuth, are known to modulate the tuned responses of neurons in mammalian auditory cortex (AC). The majority of these neurons respond best to cue values that favor the contralateral ear, such that contralateral bias is evident in the overall population response and thereby expected in population-level functional imaging data. Human neuroimaging studies, however, have not consistently found contralaterally biased binaural response patterns. Here, we used functional magnetic resonance imaging (fMRI) to parametrically measure ILD and ITD tuning in human AC. For ILD, contralateral tuning was observed, using both univariate and multivoxel analyses, in posterior superior temporal gyrus (pSTG) in both hemispheres. Response-ILD functions were U-shaped, revealing responsiveness to both contralateral and—to a lesser degree—ipsilateral ILD values, consistent with rate coding by unequal populations of contralaterally and ipsilaterally tuned neurons. In contrast, for ITD, univariate analyses showed modest contralateral tuning only in left pSTG, characterized by a monotonic response-ITD function. A multivoxel classifier, however, revealed ITD coding in both hemispheres. Although sensitivity to ILD and ITD was distributed in similar AC regions, the differently shaped response functions and different response patterns across hemispheres suggest that basic ILD and ITD processes are not fully integrated in human AC. The results support opponent-channel theories of ILD but not necessarily ITD coding, the latter of which may involve multiple types of representation that differ across hemispheres.
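Two of the analyses described here can be caricatured in a few lines: a U-shaped response-ILD function arising from unequal opponent channels, and multivoxel classification of ITD from distributed response patterns. The sketch below implements both on simulated data; the channel shapes, voxel counts, and effect sizes are hypothetical.
```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)

# --- U-shaped response-ILD function from two unequal opponent channels ---
ilds = np.linspace(-20, 20, 9)                        # dB; positive = contralateral-favoring
contra = 1.0 / (1 + np.exp(-(ilds - 10) * 0.5))       # larger contralaterally tuned population
ipsi = 0.4 / (1 + np.exp((ilds + 10) * 0.5))          # smaller ipsilaterally tuned population
response = contra + ipsi                              # summed population (BOLD-like) response
coeffs = np.polyfit(ilds, response, 2)
print("quadratic ILD fit (a, b, c):", np.round(coeffs, 3))   # positive 'a' indicates a U-shape

# --- Multivoxel classification of ITD side from simulated voxel patterns ---
n_trials, n_voxels = 80, 120
itd_side = np.repeat([0, 1], n_trials // 2)           # left- vs right-leading ITD
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[itd_side == 1, :20] += 0.4                   # weak, distributed ITD information
acc = cross_val_score(LinearSVC(dual=False), patterns, itd_side, cv=5)
print(f"ITD decoding accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```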
Collapse
|
39
|
Abstract
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT: In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.
Collapse
|
40
|
Frequency preference and attention effects across cortical depths in the human primary auditory cortex. Proc Natl Acad Sci U S A 2015; 112:16036-41. [PMID: 26668397 DOI: 10.1073/pnas.1507552112] [Citation(s) in RCA: 113] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Collapse
|
41
|
Kumar S, Bonnici HM, Teki S, Agus TR, Pressnitzer D, Maguire EA, Griffiths TD. Representations of specific acoustic patterns in the auditory cortex and hippocampus. Proc Biol Sci 2015; 281:20141000. [PMID: 25100695 PMCID: PMC4132675 DOI: 10.1098/rspb.2014.1000] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
Collapse
Affiliation(s)
- Sukhbinder Kumar
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
| | - Heidi M Bonnici
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
| | - Sundeep Teki
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
| | - Trevor R Agus
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, and Ecole Normale Superieure, Paris, France
| | - Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, and Ecole Normale Superieure, Paris, France
| | - Eleanor A Maguire
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
| | - Timothy D Griffiths
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
| |
Collapse
|
42
|
Schönwiesner M, Dechent P, Voit D, Petkov CI, Krumbholz K. Parcellation of Human and Monkey Core Auditory Cortex with fMRI Pattern Classification and Objective Detection of Tonotopic Gradient Reversals. Cereb Cortex 2015; 25:3278-89. [PMID: 24904067 PMCID: PMC4585487 DOI: 10.1093/cercor/bhu124] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Auditory cortex (AC) contains several primary-like, or "core," fields, which receive thalamic input and project to non-primary "belt" fields. In humans, the organization and layout of core and belt auditory fields are still poorly understood, and most auditory neuroimaging studies rely on macroanatomical criteria, rather than functional localization of distinct fields. A myeloarchitectonic method has been suggested recently for distinguishing between core and belt fields in humans (Dick F, Tierney AT, Lutti A, Josephs O, Sereno MI, Weiskopf N. 2012. In vivo functional and myeloarchitectonic mapping of human primary auditory areas. J Neurosci. 32:16095-16105). We propose a marker for core AC based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R. Our results provide a functional rather than anatomical localizer for core AC. We posit that inter-individual variability in the layout of core AC might explain disagreements between results from previous neuroimaging and cytological studies.
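An objective tonotopic gradient reversal can be located by tracking best frequency along a sampling line and finding where the spatial gradient changes sign (equivalently, the low-frequency trough between mirror-symmetric maps). The sketch below shows this on a simulated best-frequency profile; the profile, smoothing kernel, and distances are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(12)

# Hypothetical best-frequency (BF) profile sampled along a posterior-to-anterior line across
# Heschl's gyrus: high -> low (A1), then low -> high (R), i.e. mirror-symmetric maps
pos = np.linspace(0, 30, 61)                           # mm along the sampling line
log_bf = np.interp(pos, [0, 15, 30], [3.5, 0.5, 3.3])  # toy ground truth, log2(kHz)
log_bf += 0.1 * rng.standard_normal(pos.size)          # measurement noise

# Smooth, then locate the tonotopic reversal as the low-frequency trough
kernel = np.hanning(9); kernel /= kernel.sum()
smooth = np.convolve(log_bf, kernel, mode="same")
grad = np.gradient(smooth, pos)
trough = np.argmin(smooth[5:-5]) + 5                   # ignore filter edge effects
print(f"gradient before/after trough: {grad[trough - 5]:+.2f}, {grad[trough + 5]:+.2f} (log-units/mm)")
print(f"estimated reversal (putative A1/R border) at {pos[trough]:.1f} mm")
```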
Collapse
Affiliation(s)
- Marc Schönwiesner
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Montreal Neurological Institute, McGill University, Montreal, Canada
| | - Peter Dechent
- Department of Cognitive Neurology, MR-Research in Neurology and Psychiatry,University Medicine Göttingen, Göttingen, Germany
| | - Dirk Voit
- Biomedical NMR Research GmbH, Max-Planck-Institute for Biophysical Chemistry, Göttingen, Germany
| | - Christopher I. Petkov
- Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK
| | | |
Collapse
|
43
|
Da Costa S, Bourquin NMP, Knebel JF, Saenz M, van der Zwaag W, Clarke S. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI. PLoS One 2015; 10:e0124072. [PMID: 25938430 PMCID: PMC4418571 DOI: 10.1371/journal.pone.0124072] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2014] [Accepted: 02/25/2015] [Indexed: 11/26/2022] Open
Abstract
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds.
Collapse
Affiliation(s)
- Sandra Da Costa
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
| | - Nathalie M.-P. Bourquin
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
| | - Jean-François Knebel
- National Center of Competence in Research, SYNAPSY—The Synaptic Bases of Mental Diseases, Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
| | - Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
| | - Wietske van der Zwaag
- Centre d’Imagerie BioMédicale, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
| |
Collapse
|
44
|
Moerel M, De Martino F, Santoro R, Yacoub E, Formisano E. Representation of pitch chroma by multi-peak spectral tuning in human auditory cortex. Neuroimage 2015; 106:161-9. [PMID: 25479020 PMCID: PMC4388253 DOI: 10.1016/j.neuroimage.2014.11.044] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2014] [Revised: 10/31/2014] [Accepted: 11/20/2014] [Indexed: 01/04/2023] Open
Abstract
Musical notes played at octave intervals (i.e., having the same pitch chroma) are perceived as similar. This well-known perceptual phenomenon lies at the foundation of melody recognition and music perception, yet its neural underpinnings remain largely unknown to date. Using fMRI with high sensitivity and spatial resolution, we examined the contribution of multi-peak spectral tuning to the neural representation of pitch chroma in human auditory cortex in two experiments. In experiment 1, our estimation of population spectral tuning curves from the responses to natural sounds confirmed, with new data, our recent results on the existence of cortical ensemble responses finely tuned to multiple frequencies at one octave distance (Moerel et al., 2013). In experiment 2, we fitted a mathematical model consisting of a pitch chroma and a pitch height component to explain the measured fMRI responses to piano notes. This analysis revealed that the octave-tuned populations, but not other cortical populations, harbored a neural representation of musical notes according to their pitch chroma. These results indicate that the responses of auditory cortical populations selectively tuned to multiple frequencies at one octave distance predict the perceptual similarity of musical notes with the same chroma well, over and above the physical (frequency) distance between notes.
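The chroma/height model fitting described in experiment 2 can be sketched as follows. This is a simplified, hypothetical stand-in for the published model: simulated responses of an octave-tuned population are regressed onto one-hot pitch-chroma regressors and a pitch-height regressor, and each component's squared correlation with the response is reported. The note range, tuning shape, and noise level are illustrative assumptions.

```python
# Simulated chroma vs. height regression for an octave-tuned population.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(2)
midi = np.arange(48, 84)                  # three octaves of piano notes (MIDI numbers)
chroma = midi % 12                        # pitch class, repeats every octave
height = (midi - midi.mean()) / 12.0      # pitch height in octaves, mean-centred

# simulated response of an octave-tuned population: depends on chroma, not height
response = np.cos(2 * np.pi * chroma / 12) + 0.2 * rng.normal(size=midi.size)

chroma_design = np.eye(12)[chroma]                    # one-hot chroma regressors
X = np.column_stack([chroma_design, height])
beta, *_ = lstsq(X, response, rcond=None)

pred_chroma = chroma_design @ beta[:12]
pred_height = height * beta[12]
for name, pred in [("chroma", pred_chroma), ("height", pred_height)]:
    r2 = np.corrcoef(pred, response)[0, 1] ** 2       # squared correlation with the response
    print(f"{name} component: r^2 = {r2:.2f}")
```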
Collapse
Affiliation(s)
- Michelle Moerel
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
| | - Federico De Martino
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
| | - Roberta Santoro
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
| | - Essa Yacoub
- Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Elia Formisano
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, 6200 MD, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, 6229 EV, the Netherlands
| |
Collapse
|
45
|
Baumann S, Joly O, Rees A, Petkov CI, Sun L, Thiele A, Griffiths TD. The topography of frequency and time representation in primate auditory cortices. eLife 2015; 4. [PMID: 25590651 PMCID: PMC4398946 DOI: 10.7554/elife.03256] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2014] [Accepted: 01/14/2015] [Indexed: 11/13/2022] Open
Abstract
Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organised to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organised in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1, the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area.
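A minimal way to quantify the orthogonality claim (the modulation-rate gradient running perpendicular to the tonotopic gradient) is to compare local gradient directions of the two maps. The sketch below uses purely synthetic maps on a flattened 2D patch; the map construction and noise level are illustrative assumptions only.

```python
# Synthetic check of gradient orthogonality between two cortical maps.
import numpy as np

rng = np.random.default_rng(5)
y, x = np.mgrid[0:50, 0:50]
tonotopy = x + 0.2 * rng.normal(size=x.shape)   # frequency preference varies along x
am_rate = y + 0.2 * rng.normal(size=y.shape)    # modulation-rate preference varies along y

gy_t, gx_t = np.gradient(tonotopy)
gy_m, gx_m = np.gradient(am_rate)

# angle between the two gradient fields at every point of the patch
dot = gx_t * gx_m + gy_t * gy_m
norm = np.hypot(gx_t, gy_t) * np.hypot(gx_m, gy_m)
angle = np.degrees(np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0)))
print(f"median gradient angle = {np.median(angle):.1f} deg (90 deg = orthogonal gradients)")
```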
Collapse
Affiliation(s)
- Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Timothy D Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| |
Collapse
|
46
|
Abstract
This chapter provides an overview of current invasive recording methodology and experimental paradigms used in studies of the human auditory cortex. Invasive recordings can be obtained from neurosurgical patients undergoing clinical electrophysiologic evaluation for medically refractory epilepsy or brain tumors. This provides a unique research opportunity to study the human auditory cortex with high resolution in both time (milliseconds) and space (millimeters) and to generate valuable information about its organization and function. A historical overview traces the development of these experimental approaches from the pioneering work of Wilder Penfield to the present day. Practical issues regarding the research subject population, stimulus presentation, data collection, and analysis are discussed for acute (intraoperative) and chronic experiments. Illustrative examples are provided from experimental paradigms, including studies of spectrotemporal processing, functional connectivity, and functional lesioning in the human auditory cortex.
Collapse
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA.
| | - Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
| |
Collapse
|
47
|
Su L, Zulfiqar I, Jamshed F, Fonteneau E, Marslen-Wilson W. Mapping tonotopic organization in human temporal cortex: representational similarity analysis in EMEG source space. Front Neurosci 2014; 8:368. [PMID: 25429257 PMCID: PMC4228977 DOI: 10.3389/fnins.2014.00368] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2014] [Accepted: 10/27/2014] [Indexed: 12/23/2022] Open
Abstract
A wide variety of evidence from neurophysiology, neuroanatomy, and imaging studies in humans and animals suggests that human auditory cortex is in part tonotopically organized. Here we present a new means of resolving this spatial organization using a combination of non-invasive observables (EEG, MEG, and MRI), model-based estimates of spectrotemporal patterns of neural activation, and multivariate pattern analysis. The method exploits both the fine-grained temporal patterning of auditory cortical responses and the millisecond-scale temporal resolution of EEG and MEG. Participants listened to 400 English words while MEG and scalp EEG were measured simultaneously. We estimated the location of cortical sources using the MRI anatomically constrained minimum norm estimate (MNE) procedure. We then combined a form of multivariate pattern analysis (representational similarity analysis) with a spatiotemporal searchlight approach to successfully decode information about patterns of neuronal frequency preference and selectivity in bilateral superior temporal cortex. Observed frequency preferences in and around Heschl's gyrus matched current proposals for the organization of tonotopic gradients in primary acoustic cortex, while the distribution of narrow frequency selectivity similarly matched results from the fMRI literature. The spatial maps generated by this novel combination of techniques seem comparable to those that have emerged from fMRI or ECoG studies, and represent a considerable advance over earlier MEG results.
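The representational similarity analysis at the heart of this method can be sketched with simulated data. In the toy example below (not the authors' pipeline; the Gaussian tuning model, stimulus set, and noise level are assumptions), a model dissimilarity matrix based on log-frequency distance is compared against a neural dissimilarity matrix computed from simulated source-space patterns at one searchlight location.

```python
# Toy RSA sketch: model RDM vs. neural RDM at a single searchlight.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_stimuli, n_sources = 20, 30
log_freq = np.linspace(np.log(200), np.log(8000), n_stimuli)

# model RDM: stimuli are dissimilar in proportion to their log-frequency distance
model_rdm = pdist(log_freq[:, None], metric="euclidean")

# simulated source-space patterns: Gaussian frequency tuning per source plus noise
pref = rng.uniform(np.log(200), np.log(8000), size=n_sources)
patterns = np.exp(-0.5 * ((log_freq[:, None] - pref[None, :]) / 0.5) ** 2)
patterns += 0.2 * rng.normal(size=patterns.shape)
neural_rdm = pdist(patterns, metric="correlation")

rho, p = spearmanr(model_rdm, neural_rdm)
print(f"searchlight RSA fit: Spearman rho = {rho:.2f}, p = {p:.3g}")
```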
Collapse
Affiliation(s)
- Li Su
- Department of Psychiatry, University of Cambridge, Cambridge, UK; Department of Psychology, University of Cambridge, Cambridge, UK
| | - Isma Zulfiqar
- Department of Psychology, University of Cambridge, Cambridge, UK
| | - Fawad Jamshed
- Department of Psychology, University of Cambridge, Cambridge, UK
| | | | - William Marslen-Wilson
- Department of Psychology, University of Cambridge, Cambridge, UK; MRC Cognition and Brain Sciences Unit, Cambridge, UK
| |
Collapse
|
48
|
Joly O, Baumann S, Poirier C, Patterson RD, Thiele A, Griffiths TD. A perceptual pitch boundary in a non-human primate. Front Psychol 2014; 5:998. [PMID: 25309477 PMCID: PMC4163976 DOI: 10.3389/fpsyg.2014.00998] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 08/21/2014] [Indexed: 11/20/2022] Open
Abstract
Pitch is an auditory percept critical to the perception of music and speech, and for these harmonic sounds, pitch is closely related to the repetition rate of the acoustic wave. This paper reports a test of the assumption that non-human primates and especially rhesus monkeys perceive the pitch of these harmonic sounds much as humans do. A new procedure was developed to train macaques to discriminate the pitch of harmonic sounds and thereby demonstrate that the lower limit for pitch perception in macaques is close to 30 Hz, as it is in humans. Moreover, when the phases of successive harmonics are alternated to cause a pseudo-doubling of the repetition rate, the lower pitch boundary in macaques decreases substantially, as it does in humans. The results suggest that both species use neural firing times to discriminate pitch, at least for sounds with relatively low repetition rates.
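The phase manipulation referred to above (alternating the phases of successive harmonics to pseudo-double the envelope repetition rate) can be illustrated with a few lines of signal generation. The parameters below are arbitrary examples, not those used in the study, and the envelope-peak count is only a crude descriptive check.

```python
# Cosine-phase vs. alternating-phase harmonic complexes: the latter roughly
# doubles the envelope repetition rate relative to the fundamental.
import numpy as np
from scipy.signal import hilbert, find_peaks

fs, dur, f0, n_harm = 44100, 0.5, 50.0, 20
t = np.arange(int(fs * dur)) / fs

def harmonic_complex(alternating_phase=False):
    """Equal-amplitude harmonic complex; optionally phase-shift even harmonics by pi/2."""
    wave = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        phase = np.pi / 2 if (alternating_phase and k % 2 == 0) else 0.0
        wave += np.cos(2 * np.pi * k * f0 * t + phase)
    return wave / n_harm

for label, alt in [("cosine phase", False), ("alternating phase", True)]:
    envelope = np.abs(hilbert(harmonic_complex(alt)))       # Hilbert envelope
    peaks, _ = find_peaks(envelope, height=0.5 * envelope.max())
    print(f"{label}: ~{len(peaks) / dur:.0f} prominent envelope peaks/s (f0 = {f0:.0f} Hz)")
```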
Collapse
Affiliation(s)
- Olivier Joly
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Department of Experimental Psychology, MRC Cognition and Brain Sciences Unit, University of Oxford, Oxford, UK
| | - Simon Baumann
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
| | - Colline Poirier
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
| | - Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
| | - Alexander Thiele
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
| | - Timothy D Griffiths
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; University College London, London, UK
| |
Collapse
|
49
|
Moerel M, De Martino F, Formisano E. An anatomical and functional topography of human auditory cortical areas. Front Neurosci 2014; 8:225. [PMID: 25120426 PMCID: PMC4114190 DOI: 10.3389/fnins.2014.00225] [Citation(s) in RCA: 147] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Accepted: 07/08/2014] [Indexed: 12/22/2022] Open
Abstract
While advances in magnetic resonance imaging (MRI) over the last decades have enabled detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into their anatomical and functional properties as revealed by studies of cyto- and myelo-architecture and by fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that, whereas a group-based approach to analyzing functional (tonotopic) maps is appropriate for highlighting the main tonotopic axis, examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas, which may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with both older and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory function.
Collapse
Affiliation(s)
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands; Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
| | - Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht University, Maastricht, Netherlands
| |
Collapse
|
50
|
Langers DRM, Krumbholz K, Bowtell RW, Hall DA. Neuroimaging paradigms for tonotopic mapping (I): the influence of sound stimulus type. Neuroimage 2014; 100:650-62. [PMID: 25069046 PMCID: PMC5548253 DOI: 10.1016/j.neuroimage.2014.07.044] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2014] [Revised: 07/18/2014] [Accepted: 07/21/2014] [Indexed: 11/16/2022] Open
Abstract
Although a consensus is emerging in the literature regarding the tonotopic organisation of auditory cortex in humans, previous studies employed a vast array of different neuroimaging protocols. In the present functional magnetic resonance imaging (fMRI) study, we made a systematic comparison between stimulus protocols involving jittered tone sequences with either a narrowband, broadband, or sweep character in order to evaluate their suitability for tonotopic mapping. Data-driven analysis techniques were used to identify cortical maps related to sound-evoked activation and tonotopic frequency tuning. Principal component analysis (PCA) was used to extract the dominant response patterns in each of the three protocols separately, and generalised canonical correlation analysis (CCA) to assess the commonalities between protocols. Generally speaking, all three types of stimuli evoked similarly distributed response patterns and resulted in qualitatively similar tonotopic maps. However, quantitatively, we found that broadband stimuli are most efficient at evoking responses in auditory cortex, whereas narrowband and sweep stimuli offer the best sensitivity to differences in frequency tuning. Based on these results, we make several recommendations regarding optimal stimulus protocols and conclude that an experimental design based on narrowband stimuli provides the best sensitivity to frequency-dependent responses for determining tonotopic maps. We propose that the resulting protocol is suitable to act as a localiser of tonotopic cortical fields in individuals, or to make quantitative comparisons between maps in dedicated tonotopic mapping studies.
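The data-driven logic of the analysis (dominant response patterns per protocol, then cross-protocol commonality) can be sketched as follows. This is a simulated stand-in: the study used generalised CCA across protocols, whereas the sketch simply correlates each protocol's first principal component; the voxel counts, protocol labels, and noise levels are illustrative assumptions.

```python
# Simulated sketch: dominant spatial response pattern per protocol via PCA,
# then pairwise similarity of those patterns across protocols.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_voxels, n_timepoints = 500, 200
shared_map = rng.normal(size=n_voxels)           # common sound-evoked spatial pattern

def simulate_protocol(noise=1.0):
    """Voxel x time data: shared spatial map driven by a protocol-specific time course."""
    timecourse = rng.normal(size=n_timepoints)
    return np.outer(shared_map, timecourse) + noise * rng.normal(size=(n_voxels, n_timepoints))

maps = []
for name in ["narrowband", "broadband", "sweep"]:
    pca = PCA(n_components=1)
    pca.fit(simulate_protocol().T)               # samples = timepoints, features = voxels
    maps.append(pca.components_[0])
    print(f"{name}: first PC explains {pca.explained_variance_ratio_[0]:.0%} of variance")

r = np.abs(np.corrcoef(maps)[np.triu_indices(3, k=1)])
print(f"pairwise spatial similarity of dominant maps: r = {np.round(r, 2)}")
```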
Collapse
Affiliation(s)
- Dave R M Langers
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, University of Nottingham, Nottingham, UK; Otology and Hearing group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK.
| | | | - Richard W Bowtell
- Sir Peter Mansfield Magnetic Resonance Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
| | - Deborah A Hall
- National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, University of Nottingham, Nottingham, UK; Otology and Hearing group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
| |
Collapse
|