1
Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv 2024:2024.01.20.576475. [PMID: 38293021] [PMCID: PMC10827228] [DOI: 10.1101/2024.01.20.576475]
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions and one or two auditory and audiovisual speech localizer sessions were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility both between resting state sessions (Dice coefficient: 69-78%) and between resting state and task sessions (Dice coefficient: 62-73%). This demonstrates that auditory areas in STC can be reliably segmented into functional subareas. The interindividual variability was significantly larger than intraindividual variability (Dice coefficient: 57-68%, p<0.001), indicating that the parcellations also captured meaningful interindividual variability. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity in the functional parcellations was not explainable by the similarity of macroanatomical properties of auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
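The Dice coefficient used in this abstract to quantify parcellation reproducibility can be computed per network label and averaged. A minimal pure-Python sketch, assuming toy 10-vertex label maps invented for illustration (the study's actual surface-based maps are far larger):

```python
def dice(a, b, label):
    """Dice overlap for one parcel label between two label maps a and b."""
    a_in = [i for i, v in enumerate(a) if v == label]
    b_in = set(i for i, v in enumerate(b) if v == label)
    inter = sum(1 for i in a_in if i in b_in)
    return 2 * inter / (len(a_in) + len(b_in))

# Two hypothetical parcellations of the same 10 vertices (e.g., two sessions)
sess1 = [1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
sess2 = [1, 1, 1, 2, 2, 2, 2, 2, 2, 2]

# Average Dice across the two network labels
scores = [dice(sess1, sess2, lab) for lab in (1, 2)]
mean_dice = sum(scores) / len(scores)
print(round(mean_dice, 3))
```

A Dice value of 1 would mean the two sessions assigned every vertex to the same network; the reported 69-78% intraindividual values correspond to mean_dice between 0.69 and 0.78.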
Affiliation(s)
- M Hakonen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- L Dahmani
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- K Lankinen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Ren
  - Division of Brain Sciences, Changping Laboratory, Beijing, China
- J Barbaro
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Blazejewska
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- W Cui
  - Division of Brain Sciences, Changping Laboratory, Beijing, China
- P Kotlarz
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- M Li
  - Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
  - Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- T Turpin
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- I Uluç
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- D Wang
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
- H Liu
  - Division of Brain Sciences, Changping Laboratory, Beijing, China
  - Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
- J Ahveninen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Harvard Medical School, Boston, MA, USA
2
McMullin MA, Kumar R, Higgins NC, Gygi B, Elhilali M, Snyder JS. Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception. Open Mind (Camb) 2024; 8:333-365. [PMID: 38571530] [PMCID: PMC10990578] [DOI: 10.1162/opmi_a_00131]
Abstract
Theories of auditory and visual scene analysis suggest that the perception of scenes relies on the identification and segregation of the objects within them, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, as has been evidenced in the visual domain. To our knowledge, a similar line of research has not been pursued in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed) and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed each global property was predicted by at least one acoustic variable (R2 = 0.33-0.87). These findings were extended using deep neural network models, where we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results suggest that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
Affiliation(s)
- Rohit Kumar
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Nathan C. Higgins
  - Department of Communication Sciences & Disorders, University of South Florida, Tampa, FL, USA
- Brian Gygi
  - East Bay Institute for Research and Education, Martinez, CA, USA
- Mounya Elhilali
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Joel S. Snyder
  - Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
3
Brewer AA, Barton B. Cortical field maps across human sensory cortex. Front Comput Neurosci 2023; 17:1232005. [PMID: 38164408] [PMCID: PMC10758003] [DOI: 10.3389/fncom.2023.1232005]
Abstract
Cortical processing pathways for sensory information in the mammalian brain tend to be organized into topographical representations that encode various fundamental sensory dimensions. Numerous laboratories have now shown how these representations are organized into numerous cortical field maps (CFMs) across visual and auditory cortex, with each CFM supporting a specialized computation or set of computations that underlie the associated perceptual behaviors. An individual CFM is defined by two orthogonal topographical gradients that reflect two essential aspects of feature space for that sense. Multiple adjacent CFMs are then organized across visual and auditory cortex into macrostructural patterns termed cloverleaf clusters. CFMs within cloverleaf clusters are thought to share properties such as receptive field distribution, cortical magnification, and processing specialization. Recent measurements point to the likely existence of CFMs in the other senses as well, with topographical representations of at least one sensory dimension demonstrated in somatosensory, gustatory, and possibly olfactory cortical pathways. Here we discuss the evidence for CFM and cloverleaf cluster organization across human sensory cortex as well as approaches used to identify such organizational patterns. Knowledge of how these topographical representations are organized across cortex provides us with insight into how our conscious perceptions are created from our basic sensory inputs. In addition, studying how these representations change during development, trauma, and disease serves as an important tool for developing improvements in clinical therapies and rehabilitation for sensory deficits.
Affiliation(s)
- Alyssa A. Brewer
  - mindSPACE Laboratory, Departments of Cognitive Sciences and Language Science (by Courtesy), Center for Hearing Research, University of California, Irvine, Irvine, CA, United States
- Brian Barton
  - mindSPACE Laboratory, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
4
Banks MI, Krause BM, Berger DG, Campbell DI, Boes AD, Bruss JE, Kovach CK, Kawasaki H, Steinschneider M, Nourski KV. Functional geometry of auditory cortical resting state networks derived from intracranial electrophysiology. PLoS Biol 2023; 21:e3002239. [PMID: 37651504] [PMCID: PMC10499207] [DOI: 10.1371/journal.pbio.3002239]
Abstract
Understanding central auditory processing critically depends on defining underlying auditory cortical networks and their relationship to the rest of the brain. We addressed these questions using resting state functional connectivity derived from human intracranial electroencephalography. Mapping recording sites into a low-dimensional space where proximity represents functional similarity revealed a hierarchical organization. At a fine scale, a group of auditory cortical regions excluded several higher-order auditory areas and segregated maximally from the prefrontal cortex. On a mesoscale, the proximity of limbic structures to the auditory cortex suggested a limbic stream that parallels the classically described ventral and dorsal auditory processing streams. Identities of global hubs in anterior temporal and cingulate cortex depended on frequency band, consistent with diverse roles in semantic and cognitive processing. On a macroscale, observed hemispheric asymmetries were not specific for speech and language networks. This approach can be applied to multivariate brain data with respect to development, behavior, and disorders.
Affiliation(s)
- Matthew I. Banks
  - Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
  - Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, United States of America
- Bryan M. Krause
  - Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- D. Graham Berger
  - Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Declan I. Campbell
  - Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Aaron D. Boes
  - Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Joel E. Bruss
  - Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher K. Kovach
  - Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Hiroto Kawasaki
  - Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Mitchell Steinschneider
  - Department of Neurology, Albert Einstein College of Medicine, New York, New York, United States of America
  - Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Kirill V. Nourski
  - Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
  - Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa, United States of America
5
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. [PMID: 36786655] [DOI: 10.1093/cercor/bhad020]
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies assessed mostly spatial characteristics; however, temporal aspects have so far received little consideration. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions-of-interest within AC, namely in medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the location of corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right as compared with the left PT and ~15 ms earlier in the right as compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction for serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner
  - Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt
  - Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
  - Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner
  - Centre for Systematic Musicology, University of Graz, Graz, Austria
  - Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth
  - Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich
  - Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider
  - Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
  - Centre for Systematic Musicology, University of Graz, Graz, Austria
  - Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow
  - Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
6
Poncet M, Ales JM. Estimating neural activity from visual areas using functionally defined EEG templates. Hum Brain Mapp 2023; 44:1846-1861. [PMID: 36655286] [PMCID: PMC9980892] [DOI: 10.1002/hbm.26188]
Abstract
Electroencephalography (EEG) is a common and inexpensive method to record neural activity in humans. However, it lacks spatial resolution, making it difficult to determine which areas of the brain are responsible for the observed EEG response. Here we present a new easy-to-use method that relies on EEG topographical templates. Using MRI and fMRI scans of 50 participants, we simulated how the activity in each visual area appears on the scalp and averaged this signal to produce functionally defined EEG templates. Once created, these templates can be used to estimate how much each visual area contributes to the observed EEG activity. We tested this method on extensive simulations and on real data. The proposed procedure is as good as bespoke individual source localization methods, robust to a wide range of factors, and has several strengths. First, because it does not rely on individual brain scans, it is inexpensive and can be used on any EEG data set, past or present. Second, the results are readily interpretable in terms of functional brain regions and can be compared across neuroimaging techniques. Finally, this method is easy to understand, simple to use and expandable to other brain sources.
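The estimation step described here — expressing an observed scalp topography as a weighted sum of per-area templates — reduces to ordinary least squares. A minimal pure-Python sketch with two templates and three channels; the template values, channel count, and noise-free mixture are invented for illustration (the actual templates are averages over 50 participants' simulated topographies across full electrode montages):

```python
def lstsq_two_templates(t1, t2, y):
    """Solve y ~ w1*t1 + w2*t2 by ordinary least squares (normal equations)."""
    a11 = sum(x * x for x in t1)
    a12 = sum(x * z for x, z in zip(t1, t2))
    a22 = sum(x * x for x in t2)
    b1 = sum(x * z for x, z in zip(t1, y))
    b2 = sum(x * z for x, z in zip(t2, y))
    det = a11 * a22 - a12 * a12  # nonzero when templates are linearly independent
    w1 = (a22 * b1 - a12 * b2) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2

# Hypothetical per-channel scalp gains for two visual areas
v1_template = [1.0, 0.5, -0.2]
v2_template = [0.1, 0.8, 0.6]
# Observed topography built as 2*V1 + 1*V2 (noise-free for illustration)
observed = [2 * a + b for a, b in zip(v1_template, v2_template)]
w = lstsq_two_templates(v1_template, v2_template, observed)
print(w)
```

In this noise-free, full-rank toy case the fit recovers the mixing weights exactly; with real EEG the same regression yields estimated contributions of each visual area to the measured activity.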
Affiliation(s)
- Marlene Poncet
  - School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Justin M. Ales
  - School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
7
Forward entrainment: Psychophysics, neural correlates, and function. Psychon Bull Rev 2022:10.3758/s13423-022-02220-y. [DOI: 10.3758/s13423-022-02220-y]
Abstract
We define forward entrainment as that part of behavioral or neural entrainment that outlasts the entraining stimulus. In this review, we examine conditions under which one may optimally observe forward entrainment. In Part 1, we review and evaluate studies that have observed forward entrainment using a variety of psychophysical methods (detection, discrimination, and reaction times), different target stimuli (tones, noise, and gaps), different entraining sequences (sinusoidal, rectangular, or sawtooth waveforms), a variety of physiological measures (MEG, EEG, ECoG, CSD), in different modalities (auditory and visual), across modalities (audiovisual and auditory-motor), and in different species. In Part 2, we describe those experimental conditions that place constraints on the magnitude of forward entrainment, including an evaluation of the effects of signal uncertainty and attention, temporal envelope complexity, signal-to-noise ratio (SNR), rhythmic rate, prior experience, and intersubject variability. In Part 3 we theorize on potential mechanisms and propose that forward entrainment may instantiate a dynamic auditory afterimage that lasts a fraction of a second to minimize prediction error in signal processing.
8
In-vivo data-driven parcellation of Heschl's gyrus using structural connectivity. Sci Rep 2022; 12:11292. [PMID: 35788143] [PMCID: PMC9253310] [DOI: 10.1038/s41598-022-15083-z]
Abstract
The human auditory cortex around Heschl’s gyrus (HG) exhibits diverging patterns across individuals owing to the heterogeneity of its substructures. In this study, we investigated the subregions of the human auditory cortex using data-driven machine-learning techniques at the individual level and assessed their structural and functional profiles. We studied an openly accessible large dataset of the Human Connectome Project and identified the subregions of the HG in humans using data-driven clustering techniques with individually calculated imaging features of cortical folding and structural connectivity information obtained via diffusion magnetic resonance imaging tractography. We characterized the structural and functional profiles of each HG subregion according to the cortical morphology, microstructure, and functional connectivity at rest. We found three subregions. The first subregion (HG1) occupied the central portion of HG, the second subregion (HG2) occupied the medial-posterior-superior part of HG, and the third subregion (HG3) occupied the lateral-anterior-inferior part of HG. The HG3 exhibited strong structural and functional connectivity to the association and paralimbic areas, and the HG1 exhibited a higher myelin density and larger cortical thickness than other subregions. A functional gradient analysis revealed a gradual axis expanding from the HG2 to the HG3. Our findings clarify the individually varying structural and functional organization of human HG subregions and provide insights into the substructures of the human auditory cortex.
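The data-driven clustering step at the heart of this parcellation can be illustrated with Lloyd's k-means algorithm. A toy pure-Python sketch on scalar features; the per-vertex scores, initial centroids, and k = 3 are invented for illustration (the study clustered high-dimensional cortical-folding and tractography features per individual):

```python
def kmeans_1d(xs, cents, iters=20):
    """Lloyd's k-means on scalar features (e.g., a connectivity score per vertex)."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        groups = [[] for _ in cents]
        for x in xs:
            j = min(range(len(cents)), key=lambda i: abs(x - cents[i]))
            groups[j].append(x)
        # Update step: each centroid moves to the mean of its group
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    labels = [min(range(len(cents)), key=lambda i: abs(x - cents[i])) for x in xs]
    return cents, labels

# Toy per-vertex scores forming three clear clumps (three putative subregions)
scores = [0.1, 0.15, 0.12, 0.5, 0.55, 0.52, 0.9, 0.95, 0.92]
cents, labels = kmeans_1d(scores, cents=[0.0, 0.5, 1.0])
print(labels)
```

With three well-separated clumps the algorithm converges immediately; on real connectivity features, cluster quality metrics (e.g., silhouette values, as in the parcellation study above) guide the choice of k.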
9
Reh J, Schmitz G, Hwang TH, Effenberg AO. Loudness affects motion: asymmetric volume of auditory feedback results in asymmetric gait in healthy young adults. BMC Musculoskelet Disord 2022; 23:586. [PMID: 35715757] [PMCID: PMC9206330] [DOI: 10.1186/s12891-022-05503-6]
Abstract
Background: The potential of auditory feedback for motor learning in the rehabilitation of various diseases has become apparent in recent years. However, since the volume of auditory feedback has played a minor role so far and its influence has hardly been considered, we investigated the effect of the volume of auditory feedback on gait pattern and gait direction, and its interaction with pitch.
Methods: Thirty-two healthy young participants were randomly divided into two groups: group 1 (n = 16) received high-pitch (150-250 Hz) auditory feedback; group 2 (n = 16) received lower-pitch (95-112 Hz) auditory feedback. The feedback consisted of a real-time sonification of right and left foot ground contact. After an initial condition (no auditory feedback and full vision), both groups completed a 30-minute habituation period followed by a 30-minute asymmetry period. In every condition, participants walked blindfolded with auditory feedback towards a target at 15 m distance and were stopped 5 m before the target. Three volume conditions were applied in random order during the habituation period: loud, normal, and quiet. In the subsequent asymmetry period, three volume conditions (baseline, right quiet, and left quiet) were applied in random order.
Results: In the habituation period, step width from the loud to the quiet condition showed a significant volume*pitch interaction, with a decrease at high pitch (group 1) and an increase at lower pitch (group 2) (group 1: loud 1.02 ± 0.310, quiet 0.98 ± 0.301; group 2: loud 0.95 ± 0.229, quiet 1.11 ± 0.298). In the asymmetry period, ground contact time was significantly increased on the side with reduced volume (right quiet: left foot 0.988 ± 0.033, right foot 1.003 ± 0.040; left quiet: left foot 1.004 ± 0.036, right foot 1.002 ± 0.033).
Conclusions: Our results suggest that modifying the volume of auditory feedback can be an effective way to improve gait symmetry. This could facilitate gait therapy and rehabilitation for hemiparetic and arthroplasty patients, in particular where gait improvement based on verbal corrections and conscious motor control is limited.
Affiliation(s)
- Julia Reh
  - Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Gerd Schmitz
  - Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Tong-Hun Hwang
  - Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Alfred O Effenberg
  - Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
10
van Ackooij M, Paul JM, van der Zwaag W, van der Stoep N, Harvey BM. Auditory timing-tuned neural responses in the human auditory cortices. Neuroimage 2022; 258:119366. [PMID: 35690255] [DOI: 10.1016/j.neuroimage.2022.119366]
Abstract
Perception of sub-second auditory event timing supports multisensory integration, and speech and music perception and production. Neural populations tuned for the timing (duration and rate) of visual events were recently described in several human extrastriate visual areas. Here we ask whether the brain also contains neural populations tuned for auditory event timing, and whether these are shared with visual timing. Using 7T fMRI, we measured responses to white noise bursts of changing duration and rate. We analyzed these responses using neural response models describing different parametric relationships between event timing and neural response amplitude. This revealed auditory timing-tuned responses in the primary auditory cortex, and auditory association areas of the belt, parabelt and premotor cortex. While these areas also showed tonotopic tuning for auditory pitch, pitch and timing preferences were not consistently correlated. Auditory timing-tuned response functions differed between these areas, though without clear hierarchical integration of responses. The similarity of auditory and visual timing tuned responses, together with the lack of overlap between the areas showing these responses for each modality, suggests modality-specific responses to event timing are computed similarly but from different sensory inputs, and then transformed differently to suit the needs of each modality.
Affiliation(s)
- Martijn van Ackooij
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Jacob M Paul
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
  - Melbourne School of Psychological Sciences, University of Melbourne, Redmond Barry Building, Parkville 3010, Victoria, Australia
- W van der Zwaag
- Nathan van der Stoep
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Ben M Harvey
  - Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
11
Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, Schalk G, Kanwisher N. A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. [PMID: 35196507] [PMCID: PMC9092957] [DOI: 10.1016/j.cub.2022.01.069]
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
Collapse
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Institute, Columbia University, New York, NY, USA; HHMI Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France; Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jenelle Feather
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
- Dana Boebinger
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, NY, USA; National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
- Anthony Ritaccio
- Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurology, Mayo Clinic, Jacksonville, FL, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
12
Norman-Haignere SV, Long LK, Devinsky O, Doyle W, Irobunda I, Merricks EM, Feldstein NA, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 2022; 6:455-469. [PMID: 35145280 PMCID: PMC8957490 DOI: 10.1038/s41562-021-01261-y]
Abstract
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window within which stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; HHMI Postdoctoral Fellow of the Life Sciences Research Foundation
- Laura K. Long
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University
- Orrin Devinsky
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center
- Werner Doyle
- Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Neurosurgery, NYU Langone Medical Center
- Ifeoma Irobunda
- Department of Neurology, Columbia University Irving Medical Center
- Neil A. Feldstein
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Guy M. McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Adeen Flinker
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Biomedical Engineering, NYU Tandon School of Engineering
- Nima Mesgarani
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University; Department of Electrical Engineering, Columbia University
13
Ta D, Tu Y, Lu ZL, Wang Y. Quantitative characterization of the human retinotopic map based on quasiconformal mapping. Med Image Anal 2022; 75:102230. [PMID: 34666194 PMCID: PMC8678293 DOI: 10.1016/j.media.2021.102230]
Abstract
The retinotopic map depicts the cortical neurons' response to visual stimuli on the retina and has contributed significantly to our understanding of the human visual system. Although recent advances in high field functional magnetic resonance imaging (fMRI) have made it possible to generate the in vivo retinotopic map in great detail, quantifying the map remains challenging. Existing quantification methods do not preserve surface topology and often introduce large geometric distortions to the map. In this study, we developed a new framework based on computational conformal geometry and quasiconformal Teichmüller theory to quantify the retinotopic map. Specifically, we introduced a general pipeline, consisting of cortical surface conformal parameterization, surface-spline-based cortical activation signal smoothing, and vertex-wise Beltrami coefficient-based map description. After correcting most of the violations of the topological conditions, the result was a "Beltrami coefficient map" (BCM) that rigorously and completely characterizes the retinotopic map by quantifying the local quasiconformal mapping distortion at each visual field location. The BCM provided topological and fully reconstructable retinotopic maps. We successfully applied the new framework to analyze the V1 retinotopic maps from the Human Connectome Project (n=181), the largest state-of-the-art retinotopy dataset currently available. With unprecedented precision, we found that the V1 retinotopic map was quasiconformal and the local mapping distortions were similar across observers. The new framework can be applied to other visual areas and to retinotopic maps of individuals with and without eye diseases, and can improve our understanding of visual cortical organization in normal and clinical populations.
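For orientation, the Beltrami coefficient on which the BCM is built is the standard quantity from quasiconformal theory (stated here from textbook definitions, not from the paper's own notation):

```latex
% Beltrami coefficient of an orientation-preserving map f of z = x + iy
\mu_f(z) \;=\; \frac{\partial f / \partial \bar{z}}{\partial f / \partial z},
\qquad |\mu_f(z)| < 1 .
% \mu_f(z) = 0 means f is conformal (angle-preserving) at z;
% the local dilatation (distortion factor) is
K(z) \;=\; \frac{1 + |\mu_f(z)|}{1 - |\mu_f(z)|} \;\ge\; 1 .
```

A vertex-wise map of μ therefore quantifies how far the retinotopic mapping deviates from conformality at each visual-field location, which is what the abstract's "local quasiconformal mapping distortion" refers to.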
Affiliation(s)
- Duyan Ta
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Yanshuai Tu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
14
Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. [PMID: 34875029 PMCID: PMC9376871 DOI: 10.1093/cercor/bhab434]
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
15
Moerel M, Yacoub E, Gulban OF, Lage-Castellanos A, De Martino F. Using high spatial resolution fMRI to understand representation in the auditory network. Prog Neurobiol 2021; 207:101887. [PMID: 32745500 PMCID: PMC7854960 DOI: 10.1016/j.pneurobio.2020.101887]
Abstract
Following rapid methodological advances, ultra-high field (UHF) functional and anatomical magnetic resonance imaging (MRI) has been repeatedly and successfully used for the investigation of the human auditory system in recent years. Here, we review this work and argue that UHF MRI is uniquely suited to shed light on how sounds are represented throughout the network of auditory brain regions. That is, the provided gain in spatial resolution at UHF can be used to study the functional role of the small subcortical auditory processing stages and details of cortical processing. Further, by combining high spatial resolution with the versatility of MRI contrasts, UHF MRI has the potential to localize the primary auditory cortex in individual hemispheres. This is a prerequisite to study how sound representation in higher-level auditory cortex evolves from that in early (primary) auditory cortex. Finally, the access to independent signals across auditory cortical depths, as afforded by UHF, may reveal the computations that underlie the emergence of an abstract, categorical sound representation based on low-level acoustic feature processing. Efforts on these research topics are underway. Here we discuss promises as well as challenges that come with studying these research questions using UHF MRI, and provide a future outlook.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA
- Omer Faruk Gulban
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Brain Innovation B.V., Maastricht, the Netherlands
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Department of NeuroInformatics, Cuban Center for Neuroscience, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA
16
Fuglsang SA, Madsen KH, Puonti O, Hjortkjær J, Siebner HR. Mapping cortico-subcortical sensitivity to 4 Hz amplitude modulation depth in human auditory system with functional MRI. Neuroimage 2021; 246:118745. [PMID: 34808364 DOI: 10.1016/j.neuroimage.2021.118745]
Abstract
Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated the auditory brainstem and cortex structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in Heschl's gyrus but also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing low-rate modulation content of sounds in the human auditory system.
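To make the stimulus manipulation concrete, 4 Hz amplitude-modulated noise with a parametric modulation depth can be sketched as below. This is a minimal illustration assuming a sinusoidal envelope on a Gaussian noise carrier; the function name and parameters are hypothetical, not the study's stimulus code.

```python
import numpy as np

def am_noise(duration_s=1.0, fs=16000, fm_hz=4.0, depth=0.5, seed=0):
    """Gaussian noise carrier with sinusoidal amplitude modulation.

    depth in [0, 1]: 0 = unmodulated noise, 1 = fully modulated envelope.
    (Illustrative sketch only, not the exact stimuli used in the study.)
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs          # sample times in seconds
    carrier = rng.standard_normal(t.size)             # broadband noise carrier
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm_hz * t)  # 4 Hz envelope
    return envelope * carrier

x = am_noise(depth=0.8)  # one second of 4 Hz AM noise at 80% depth
```

Sweeping `depth` over a set of values (as the abstract describes) then yields the parametric stimulus set against which BOLD responses can be regressed.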
Affiliation(s)
- Søren A Fuglsang
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark
17
Hamilton LS, Oganian Y, Hall J, Chang EF. Parallel and distributed encoding of speech across human auditory cortex. Cell 2021; 184:4626-4639.e13. [PMID: 34411517 DOI: 10.1016/j.cell.2021.07.019]
Abstract
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with direct stimulation: stimulating the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception, whereas opposite effects were observed during stimulation of nonprimary cortex in the superior temporal gyrus. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential independent role for nonprimary auditory cortex in speech processing.
Affiliation(s)
- Liberty S Hamilton
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Yulia Oganian
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Jeffery Hall
- Department of Neurology and Neurosurgery, McGill University Montreal Neurological Institute, Montreal, QC, H3A 2B4, Canada
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
18
Khalighinejad B, Patel P, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Functional characterization of human Heschl's gyrus in response to natural speech. Neuroimage 2021; 235:118003. [PMID: 33789135 PMCID: PMC8608271 DOI: 10.1016/j.neuroimage.2021.118003]
Abstract
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG. Along this gradient, we observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.
Affiliation(s)
- Bahar Khalighinejad
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Prachi Patel
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Jose L. Herrero
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Ashesh D. Mehta
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States (corresponding author)
19
Tu Y, Ta D, Lu ZL, Wang Y. Optimizing Visual Cortex Parameterization with Error-Tolerant Teichmüller Map in Retinotopic Mapping. Med Image Comput Comput Assist Interv 2020; 12267:218-227. [PMID: 34291236 PMCID: PMC8291100 DOI: 10.1007/978-3-030-59728-3_22]
Abstract
The mapping from the visual input on the retina to the cortical surface, i.e., retinotopic mapping, is an important topic in vision science and neuroscience. Human retinotopic mapping can be revealed by analyzing cortical functional magnetic resonance imaging (fMRI) signals while the subject views specific visual stimuli. Conventional methods process, smooth, and analyze the retinotopic map based on a parametrization of the (partial) cortical surface. However, the retinotopic maps generated by this approach frequently contradict neuropsychology results. To address this problem, we propose an integrated approach that parameterizes the cortical surface such that the parametric coordinates relate linearly to the visual coordinates. The proposed method helps smooth noisy retinotopic maps and yields neurophysiological insights into the human visual system. One key element of the approach is the Error-Tolerant Teichmüller Map, which uniformizes the angle distortion and maximizes alignment to self-contradicting landmarks. We validated our overall approach with synthetic and real retinotopic mapping datasets. The experimental results show the proposed approach is superior in accuracy and compatibility. Although we focus on retinotopic mapping, the proposed framework is general and can be applied to process other human sensory maps.
Affiliation(s)
- Yanshuai Tu
- Arizona State University, Tempe, AZ 85201, USA
- Duyan Ta
- Arizona State University, Tempe, AZ 85201, USA
- Zhong-Lin Lu
- New York University, New York, NY
- NYU Shanghai, Shanghai, China
- Yalin Wang
- Arizona State University, Tempe, AZ 85201, USA
20
The rhythm of attention: Perceptual modulation via rhythmic entrainment is lowpass and attention mediated. Atten Percept Psychophys 2020; 82:3558-3570. [PMID: 32686065 DOI: 10.3758/s13414-020-02095-y]
Abstract
Modulation patterns are known to carry critical predictive cues to signal detection in complex acoustic environments. The current study investigated the persistence of masker modulation effects on postmodulation detection of probe signals. Hickok, Farahbod, and Saberi (Psychological Science, 26, 1006-1013, 2015) demonstrated that thresholds for a tone pulse in stationary noise follow a predictable periodic pattern when preceded by a 3-Hz amplitude modulated masker. They found entrainment of detection patterns to the modulation envelope lasting for approximately two cycles after termination of modulation. The current study extends these results to a wide range of modulation rates by mapping the temporal modulation transfer function for persistent modulatory effects. We found significant entrainment to modulation rates of 2 and 3 Hz, a weaker effect at 5 Hz, and no entrainment at higher rates (8 to 32 Hz). The effect seems critically dependent on attentional mechanisms, requiring temporal and level uncertainty of the probe signal. Our findings suggest that the persistence of modulatory effects on signal detection is lowpass in nature and attention-based.
21
Sohoglu E, Kumar S, Chait M, Griffiths TD. Multivoxel codes for representing and integrating acoustic features in human cortex. Neuroimage 2020; 217:116661. [PMID: 32081785 PMCID: PMC7339141 DOI: 10.1016/j.neuroimage.2020.116661]
Abstract
Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.
Affiliation(s)
- Ediz Sohoglu
- School of Psychology, University of Sussex, Brighton, BN1 9QH, United Kingdom
- Sukhbinder Kumar
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
- Maria Chait
- Ear Institute, University College London, London, United Kingdom
- Timothy D Griffiths
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
22
Chien VSC, Maess B, Knösche TR. A generic deviance detection principle for cortical On/Off responses, omission response, and mismatch negativity. Biol Cybern 2019; 113:475-494. [PMID: 31428855 PMCID: PMC6848254 DOI: 10.1007/s00422-019-00804-x]
Abstract
Neural responses to sudden changes can be observed in many parts of the sensory pathways at different organizational levels. For example, deviants that violate regularity at various levels of abstraction can be observed as simple On/Off responses of individual neurons or as cumulative responses of neural populations. The cortical deviance-related responses supporting different functionalities (e.g., gap detection, chunking, etc.) seem unlikely to arise from different function-specific neural circuits, given the relatively uniform and self-similar wiring patterns across cortical areas and spatial scales. Additionally, reciprocal wiring patterns (with heterogeneous combinations of excitatory and inhibitory connections) in the cortex naturally speak in favor of a generic deviance detection principle. Based on this concept, we propose a network model consisting of reciprocally coupled neural masses as a blueprint of a universal change detector. Simulation examples reproduce properties of cortical deviance-related responses including the On/Off responses, the omitted-stimulus response (OSR), and the mismatch negativity (MMN). We propose that the emergence of change detectors relies on the involvement of disinhibition. An analysis of network connection settings further suggests a supportive effect of synaptic adaptation and a destructive effect of N-methyl-D-aspartate receptor (NMDA-r) antagonists on change detection. We conclude that the nature of cortical reciprocal wiring gives rise to a whole range of local change detectors supporting the notion of a generic deviance detection principle. Several testable predictions are provided based on the network model. Notably, we predict that the NMDA-r antagonists would generally dampen the cortical Off response, the cortical OSR, and the MMN.
Affiliation(s)
- Vincent S. C. Chien
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, Leipzig, Germany
- Burkhard Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, Leipzig, Germany
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, Leipzig, Germany
23
Stigliani A, Jeska B, Grill-Spector K. Differential sustained and transient temporal processing across visual streams. PLoS Comput Biol 2019; 15:e1007011. [PMID: 31145723 PMCID: PMC6583966 DOI: 10.1371/journal.pcbi.1007011]
Abstract
How do high-level visual regions process the temporal aspects of our visual experience? While the temporal sensitivity of early visual cortex has been studied with fMRI in humans, temporal processing in high-level visual cortex is largely unknown. By modeling neural responses with millisecond precision in separate sustained and transient channels, and introducing a flexible encoding framework that captures differences in neural temporal integration time windows and response nonlinearities, we predict fMRI responses across visual cortex for stimuli ranging from 33 ms to 20 s. Using this innovative approach, we discovered that lateral category-selective regions respond to visual transients associated with stimulus onsets and offsets but not sustained visual information. Thus, lateral category-selective regions compute moment-to-moment visual transitions, but not stable features of the visual input. In contrast, ventral category-selective regions process both sustained and transient components of the visual input. Our model revealed that sustained channel responses to prolonged stimuli exhibit adaptation, whereas transient channel responses to stimulus offsets are surprisingly larger than for stimulus onsets. This large offset transient response may reflect a memory trace of the stimulus when it is no longer visible, whereas the onset transient response may reflect rapid processing of new items. Together, these findings reveal previously unconsidered, fundamental temporal mechanisms that distinguish visual streams in the human brain. Importantly, our results underscore the promise of modeling brain responses with millisecond precision to understand the underlying neural computations. How does the brain encode the timing of our visual experience? 
Using functional magnetic resonance imaging (fMRI) and a generative temporal model with millisecond resolution, we discovered that visual regions in the lateral and ventral processing streams fundamentally differ in their temporal processing of the visual input. Regions in lateral temporal cortex process visual transients associated with the beginning and ending of the stimulus, but not its stable aspects. That is, lateral regions appear to compute moment-to-moment changes in the visual input. In contrast, regions in ventral temporal cortex process both stable and transient components of the visual input, even as the response to the former exhibits adaptation. Surprisingly, the model predicts that in ventral regions responses to stimulus endings are larger than beginnings. We suggest that ending responses may reflect a memory trace of the stimulus, when it is no longer visible, and the beginning responses may reflect processing of new inputs. Together, these findings (i) reveal a fundamental temporal mechanism that distinguishes visual streams and (ii) highlight both the importance and utility of modeling brain responses with millisecond precision to understand the temporal dynamics of neural computations in the human brain.
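The sustained/transient channel decomposition described in this abstract can be illustrated with a toy sketch. This is not the authors' encoding model: the gamma-shaped impulse response, its 50 ms time constant, and the boxcar stimulus below are illustrative assumptions only.

```python
import numpy as np

def two_channel_responses(stim, dt=0.001, tau=0.05):
    """Toy sustained/transient decomposition of a binary stimulus
    time course (1 = stimulus on), sampled every dt seconds."""
    t = np.arange(0, 10 * tau, dt)
    irf = (t / tau) * np.exp(-t / tau)    # gamma-like neural impulse response
    irf /= irf.sum()
    # sustained channel: tracks stimulus presence
    sustained = np.convolve(stim, irf)[: len(stim)]
    # transient channel: responds to onsets AND offsets (rectified derivative)
    transient = np.convolve(np.abs(np.diff(stim, prepend=0.0)), irf)[: len(stim)]
    return sustained, transient

# 500 ms stimulus inside a 1 s window, at millisecond resolution
stim = np.zeros(1000)
stim[200:700] = 1.0
sus, trn = two_channel_responses(stim)
# trn peaks shortly after stimulus onset (200 ms) and offset (700 ms),
# while sus dominates during the sustained middle portion
```

In this caricature, a "lateral" region would weight `trn` heavily and a "ventral" region would combine both channels; the paper additionally fits response nonlinearities and adaptation, which are omitted here.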
Affiliation(s)
- Anthony Stigliani
- Psychology Department, Stanford University, Stanford, California, United States of America
- Brianna Jeska
- Psychology Department, Stanford University, Stanford, California, United States of America
- Kalanit Grill-Spector
- Psychology Department, Stanford University, Stanford, California, United States of America
- Stanford Neurosciences Institute, Stanford University, Stanford, California, United States of America
24.
Early Blindness Shapes Cortical Representations of Auditory Frequency within Auditory Cortex. J Neurosci 2019; 39:5143-5152. [PMID: 31010853 DOI: 10.1523/jneurosci.2896-18.2019]
Abstract
Early loss of vision is classically linked to large-scale cross-modal plasticity within occipital cortex. Much less is known about the effects of early blindness on auditory cortex. Here, we examine the effects of early blindness on the cortical representation of auditory frequency within human primary and secondary auditory areas using fMRI. We observe that 4 individuals with early blindness (2 females) and a group of 5 individuals with anophthalmia (1 female), a condition in which both eyes fail to develop, have lower response amplitudes and narrower voxelwise tuning bandwidths compared with a group of typically sighted individuals. These results provide some of the first evidence in human participants for compensatory plasticity within nondeprived sensory areas as a result of sensory loss. SIGNIFICANCE STATEMENT Early blindness has been linked to enhanced perception of the auditory world, including auditory localization and pitch perception. Here we used fMRI to compare neural responses to auditory stimuli within auditory cortex across sighted, early blind, and anophthalmic individuals, in whom both eyes fail to develop. We find more refined frequency tuning in blind subjects, providing some of the first evidence in human subjects for compensation within nondeprived primary sensory areas as a result of blindness early in life.
25.
Moerel M, De Martino F, Uğurbil K, Yacoub E, Formisano E. Processing complexity increases in superficial layers of human primary auditory cortex. Sci Rep 2019; 9:5502. [PMID: 30940888 PMCID: PMC6445291 DOI: 10.1038/s41598-019-41965-w]
Abstract
The layers of the neocortex each have a unique anatomical connectivity and functional role. Their exploration in the human brain, however, has been severely restricted by the limited spatial resolution of non-invasive measurement techniques. Here, we exploit the sensitivity and specificity of ultra-high field fMRI at 7 Tesla to investigate responses to natural sounds at deep, middle, and superficial cortical depths of the human auditory cortex. Specifically, we compare the performance of computational models that represent different hypotheses on sound processing inside and outside the primary auditory cortex (PAC). We observe that while BOLD responses in deep and middle PAC layers are equally well represented by a simple frequency model and a more complex spectrotemporal modulation model, responses in superficial PAC are better represented by the more complex model. This indicates an increase in processing complexity in superficial PAC, which remains present throughout cortical depths in the non-primary auditory cortex. These results suggest that a relevant transformation in sound processing takes place between the thalamo-recipient middle PAC layers and superficial PAC. This transformation may be a first computational step towards sound abstraction and perception, serving to form an increasingly more complex representation of the physical input.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
- Kâmil Uğurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 6th Street SE, Minneapolis, MN, 55455, USA
- Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Oxfordlaan 55, 6229 EV, Maastricht, The Netherlands
26.
Flinker A, Doyle WK, Mehta AD, Devinsky O, Poeppel D. Spectrotemporal modulation provides a unifying framework for auditory cortical asymmetries. Nat Hum Behav 2019; 3:393-405. [PMID: 30971792 PMCID: PMC6650286 DOI: 10.1038/s41562-019-0548-z]
Abstract
The principles underlying functional asymmetries in cortex remain debated. For example, it is accepted that speech is processed bilaterally in auditory cortex, but a left hemisphere dominance emerges when the input is interpreted linguistically. The mechanisms, however, are contested: what sound features or processing principles underlie laterality? Recent findings across species (humans, canines, bats) provide converging evidence that spectrotemporal sound features drive asymmetrical responses. Typically, accounts invoke models wherein the hemispheres differ in time-frequency resolution or integration window size. We develop a framework that builds on and unifies prevailing models, using spectrotemporal modulation space. Using signal processing techniques motivated by neural responses, we test this approach employing behavioral and neurophysiological measures. We show how psychophysical judgments align with spectrotemporal modulations and then characterize the neural sensitivities to temporal and spectral modulations. We demonstrate differential contributions from both hemispheres, with a left lateralization for temporal modulations and a weaker right lateralization for spectral modulations. We argue that representations in the modulation domain provide a more mechanistic basis to account for lateralization in auditory cortex.
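The spectrotemporal modulation space invoked here can be computed, in its simplest form, as the 2D Fourier transform of a log-frequency spectrogram. Below is a minimal sketch on a toy drifting-ripple "spectrogram", not the paper's signal-processing pipeline; the frame rate, octave spacing, and ripple parameters are arbitrary choices.

```python
import numpy as np

def modulation_spectrum(spec, dt, df_oct):
    """Magnitude of the 2D FFT of a spectrogram: the spectrotemporal
    modulation spectrum. Axis 0 = frequency bins (df_oct octaves apart),
    axis 1 = time frames (dt seconds apart)."""
    m = np.abs(np.fft.fftshift(np.fft.fft2(spec - spec.mean())))
    rates = np.fft.fftshift(np.fft.fftfreq(spec.shape[1], d=dt))       # temporal modulation (Hz)
    scales = np.fft.fftshift(np.fft.fftfreq(spec.shape[0], d=df_oct))  # spectral modulation (cyc/oct)
    return m, rates, scales

# toy spectrogram: a ripple drifting at 4 Hz with 1 cycle/octave density
dt, df = 0.01, 0.1                      # 10 ms frames, 0.1-octave bins
t = np.arange(0, 1, dt)
f = np.arange(0, 4, df)                 # 4 octaves of log-frequency
spec = np.cos(2 * np.pi * (1.0 * f[:, None] + 4.0 * t[None, :]))
m, rates, scales = modulation_spectrum(spec, dt, df)
si, ti = np.unravel_index(np.argmax(m), m.shape)
# modulation energy concentrates at +/-4 Hz (temporal) and +/-1 cyc/oct (spectral)
```

Hemispheric asymmetries in this framework amount to differential sensitivity along the `rates` versus `scales` axes of this 2D space.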
Affiliation(s)
- Adeen Flinker
- Department of Psychology, New York University, New York, NY, USA
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Werner K Doyle
- Department of Neurosurgery, New York University School of Medicine, New York, NY, USA
- Ashesh D Mehta
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
27.
Larger Auditory Cortical Area and Broader Frequency Tuning Underlie Absolute Pitch. J Neurosci 2019; 39:2930-2937. [PMID: 30745420 DOI: 10.1523/jneurosci.1532-18.2019]
Abstract
Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms underlying this ability have not been clear. We presented a series of logarithmic frequency sweeps to age- and sex-matched groups of musicians with or without AP and controls without musical training. We used fMRI and population receptive field (pRF) modeling to measure the responses in the auditory cortex of 61 human subjects. The tuning response of each fMRI voxel was characterized as Gaussian, with independent center frequency and bandwidth parameters. We identified three distinct tonotopic maps, corresponding to primary (A1), rostral (R), and rostral-temporal (RT) regions of auditory cortex. We initially hypothesized that AP abilities might manifest in sharper tuning in the auditory cortex. However, we observed that AP subjects had larger cortical area, with the increased area primarily devoted to broader frequency tuning. Anatomically, A1, R, and RT were significantly larger in AP musicians than in non-AP musicians or control subjects, which did not differ significantly from each other. The increased cortical area in AP in areas A1 and R was primarily low frequency and broadly tuned, whereas the distribution of responses in area RT did not differ significantly. We conclude that AP abilities are associated with increased early auditory cortical area devoted to broad-frequency tuning and likely exploit increased ensemble encoding. SIGNIFICANCE STATEMENT Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms have not been clear. Our study shows that AP musicians have significantly larger volume in early auditory cortex than non-AP musicians and non-musician controls and that this increased volume is primarily devoted to broad-frequency tuning. We conclude that AP musicians are likely able to exploit increased ensemble representations to encode and identify frequency.
28.
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127]
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex-in which spectrogram-like peripheral input is processed by linear spectrotemporal filters-can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
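The model-matching logic can be caricatured in a few lines: synthesize noise whose model response equals that of a target sound, where the "model" here is a deliberately simple band-power summary standing in for the paper's spectrotemporal filter bank. A toy sketch under those assumptions:

```python
import numpy as np

def model_matched_noise(target, n_bands=32, seed=None):
    """Synthesize noise whose band-power 'model response' matches the
    target sound's. Band power is a toy stand-in for the responses of
    a full spectrotemporal filter-bank model."""
    rng = np.random.default_rng(seed)
    n = len(target)
    edges = np.linspace(0, n // 2 + 1, n_bands + 1).astype(int)
    T = np.fft.rfft(target)
    N = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros_like(N)
    for lo, hi in zip(edges[:-1], edges[1:]):
        t_pow = np.sum(np.abs(T[lo:hi]) ** 2)    # target band power
        n_pow = np.sum(np.abs(N[lo:hi]) ** 2)
        # rescale the noise in this band so its power matches the target's
        out[lo:hi] = N[lo:hi] * np.sqrt(t_pow / max(n_pow, 1e-12))
    return np.fft.irfft(out, n)

rng = np.random.default_rng(0)
target = rng.standard_normal(2048)       # stand-in for a natural sound
matched = model_matched_noise(target, seed=1)
# matched and target now produce identical band-power model responses,
# although their waveforms (and any finer structure) differ
```

A region whose computations are captured by the band-power model should respond identically to `target` and `matched`; divergent responses imply sensitivity to structure the model discards.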
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, ENS, PSL University, CNRS, Paris, France
- Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
29.
Venezia JH, Thurman SM, Richards VM, Hickok G. Hierarchy of speech-driven spectrotemporal receptive fields in human auditory cortex. Neuroimage 2018; 186:647-666. [PMID: 30500424 DOI: 10.1016/j.neuroimage.2018.11.049]
Abstract
Existing data indicate that cortical speech processing is hierarchically organized. Numerous studies have shown that early auditory areas encode fine acoustic details while later areas encode abstracted speech patterns. However, it remains unclear precisely what speech information is encoded across these hierarchical levels. Estimation of speech-driven spectrotemporal receptive fields (STRFs) provides a means to explore cortical speech processing in terms of acoustic or linguistic information associated with characteristic spectrotemporal patterns. Here, we estimate STRFs from cortical responses to continuous speech in fMRI. Using a novel approach based on filtering randomly-selected spectrotemporal modulations (STMs) from aurally-presented sentences, STRFs were estimated for a group of listeners and categorized using a data-driven clustering algorithm. 'Behavioral STRFs' highlighting STMs crucial for speech recognition were derived from intelligibility judgments. Clustering revealed that STRFs in the supratemporal plane represented a broad range of STMs, while STRFs in the lateral temporal lobe represented circumscribed STM patterns important to intelligibility. Detailed analysis recovered a bilateral organization with posterior-lateral regions preferentially processing STMs associated with phonological information and anterior-lateral regions preferentially processing STMs associated with word- and phrase-level information. Regions in lateral Heschl's gyrus preferentially processed STMs associated with vocalic information (pitch).
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, USA
- Dept. of Otolaryngology, School of Medicine, Loma Linda University, Loma Linda, CA, USA
- Virginia M Richards
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
- Gregory Hickok
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, USA
30.
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243]
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferably encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), Maastricht, The Netherlands
31.
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
32.
Moerel M, De Martino F, Uğurbil K, Formisano E, Yacoub E. Evaluating the Columnar Stability of Acoustic Processing in the Human Auditory Cortex. J Neurosci 2018; 38:7822-7832. [PMID: 30185539 PMCID: PMC6125808 DOI: 10.1523/jneurosci.3576-17.2018]
Abstract
Using ultra-high field fMRI, we explored the cortical depth-dependent stability of acoustic feature preference in human auditory cortex. We collected responses from human auditory cortex (subjects from either sex) to a large number of natural sounds at submillimeter spatial resolution, and observed that these responses were well explained by a model that assumes neuronal population tuning to frequency-specific spectrotemporal modulations. We observed a relatively stable (columnar) tuning to frequency and temporal modulations. However, spectral modulation tuning was variable throughout the cortical depth. This difference in columnar stability between feature maps could not be explained by a difference in map smoothness, as the preference along the cortical sheet varied in a similar manner for the different feature maps. Furthermore, tuning to all three features was more columnar in primary than nonprimary auditory cortex. The observed overall lack of overlapping columnar regions across acoustic feature maps suggests, especially for primary auditory cortex, a coding strategy in which, across cortical depths, tuning to some features is kept stable whereas tuning to other features systematically varies. SIGNIFICANCE STATEMENT In the human auditory cortex, sound aspects are processed in large-scale maps. Invasive animal studies show that an additional processing organization may be implemented orthogonal to the cortical sheet (i.e., in the columnar direction), but it is unknown whether observed organizational principles apply to the human auditory cortex. Combining ultra-high field fMRI with natural sounds, we explore the columnar organization of various sound aspects. Our results suggest that the human auditory cortex contains a modular coding strategy, where, for each module, several sound aspects act as an anchor along which computations are performed while the processing of another sound aspect undergoes a transformation. This strategy may serve to optimally represent the content of our complex acoustic natural environment.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota 55455
- Kâmil Uğurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota 55455
- Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota 55455
33.
Development differentially sculpts receptive fields across early and high-level human visual cortex. Nat Commun 2018; 9:788. [PMID: 29476135 PMCID: PMC5824941 DOI: 10.1038/s41467-018-03166-3]
Abstract
Receptive fields (RFs) processing information in restricted parts of the visual field are a key property of visual system neurons. However, how RFs develop in humans is unknown. Using fMRI and population receptive field (pRF) modeling in children and adults, we determine where and how pRFs develop across the ventral visual stream. Here we report that pRF properties in visual field maps, from the first visual area, V1, through the first ventro-occipital area, VO1, are adult-like by age 5. However, pRF properties in face-selective and character-selective regions develop into adulthood, increasing the foveal coverage bias for faces in the right hemisphere and words in the left hemisphere. Eye-tracking indicates that pRF changes are related to changing fixation patterns on words and faces across development. These findings suggest a link between face and word viewing behavior and the differential development of pRFs across visual cortex, potentially due to competition on foveal coverage. Population receptive fields (pRFs) in the visual system are key information-processors, but how they develop is unknown. Here, the authors use fMRI and pRF modeling in children and adults to show that in the ventral stream only pRFs in face- and word-selective regions continue to develop, mirroring changes in viewing behavior.
34.
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522 PMCID: PMC5694557 DOI: 10.3389/fpsyg.2017.01983]
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
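The encode-then-decode scheme described above reduces to Gaussian tuning curves in log-frequency space plus a least-squares search over candidate tones. A toy sketch with made-up parameters (the tonotopic map, bandwidths, noise level, and candidate grid below are illustrative assumptions, not the fitted values from the study):

```python
import numpy as np

def voxel_response(log_f, centers, bws):
    """Gaussian frequency tuning in log space: one value per voxel."""
    return np.exp(-0.5 * ((log_f - centers) / bws) ** 2)

def reconstruct_tone(responses, centers, bws, candidates):
    """Return the candidate frequency whose predicted multi-voxel
    response pattern best matches the observed one (least squares)."""
    logc = np.log2(candidates)
    # predictions: (n_candidates, n_voxels)
    pred = np.exp(-0.5 * ((logc[:, None] - centers[None, :]) / bws[None, :]) ** 2)
    err = np.sum((pred - responses[None, :]) ** 2, axis=1)
    return candidates[np.argmin(err)]

# toy tonotopic map: 50 voxels with centers spanning 100 Hz - 8 kHz
rng = np.random.default_rng(0)
centers = np.linspace(np.log2(100), np.log2(8000), 50)
bws = np.full(50, 0.8)                  # tuning bandwidth in octaves
true_f = 440.0
obs = voxel_response(np.log2(true_f), centers, bws) + 0.05 * rng.standard_normal(50)
candidates = np.logspace(np.log10(100), np.log10(8000), 200)
est = reconstruct_tone(obs, centers, bws, candidates)
# est should land within half an octave of 440 Hz
```

Running the decoder frame by frame over a BOLD time series, after deconvolving the hemodynamic response, is the essence of reconstructing a tone sequence from the tonotopic map.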
Affiliation(s)
- Kelly H Chang
- Department of Psychology, University of Washington, Seattle, WA, United States
- Jessica M Thomas
- Department of Psychology, University of Washington, Seattle, WA, United States
- Geoffrey M Boynton
- Department of Psychology, University of Washington, Seattle, WA, United States
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
35.
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238 PMCID: PMC5729191 DOI: 10.1523/jneurosci.1436-17.2017]
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
36
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017; 355:81-96. [DOI: 10.1016/j.heares.2017.09.012] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Revised: 07/28/2017] [Accepted: 09/23/2017] [Indexed: 01/09/2023]
37
Amplitude modulation rate dependent topographic organization of the auditory steady-state response in human auditory cortex. Hear Res 2017; 354:102-108. [PMID: 28917446 DOI: 10.1016/j.heares.2017.09.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/04/2017] [Revised: 08/06/2017] [Accepted: 09/08/2017] [Indexed: 11/22/2022]
Abstract
Periodic modulation of an acoustic feature, such as amplitude over a certain frequency range, leads to phase locking of neural responses to the envelope of the modulation. In electrophysiological recordings, this neural activity pattern, also called the auditory steady-state response (aSSR), is visible after frequency transformation of the evoked response as a clear spectral peak at the modulation frequency. Although several studies employing the aSSR have shown, for example, that responses are strongest at ∼40 Hz and exhibit an overall right-hemispheric dominance, it has not yet been investigated whether different modulation frequencies elicit aSSRs from a homogeneous source within auditory cortex or whether the localization of the aSSR is topographically organized in a systematic manner. The latter would be suggested by previous neuroimaging work in monkeys and humans showing a periodotopic organization within and across distinct auditory fields. However, the sluggishness of the signals in that work prohibits inferences about the fine temporal features of the neural response. In the present study, we employed amplitude-modulated (AM) sounds over a range between 4 and 85 Hz to elicit aSSRs while recording brain activity via magnetoencephalography (MEG). Using beamforming and a finely spatially resolved grid restricted to auditory cortical processing regions, our study revealed a topographic representation of the aSSR that depends on AM rate, in particular along the medial-lateral (bilateral) and posterior-anterior (right auditory cortex) directions. In summary, our findings confirm previous studies showing that different AM rates elicit maximal responses in distinct neural populations. They extend these findings, however, by showing that the respective neural ensembles in auditory cortex phase lock their activity over a wide modulation frequency range.
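The measurement logic described above, where phase locking to the AM envelope surfaces as a spectral peak at the modulation frequency after a frequency transform, can be sketched with a toy simulation. This is an illustrative numpy example under assumed parameters, not code or data from the cited study:

```python
import numpy as np

fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 s of "recording"
f_mod = 40.0                    # amplitude-modulation rate (Hz)

# Simulated neural response that phase-locks to the AM envelope, plus noise.
envelope = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
rng = np.random.default_rng(0)
response = envelope + rng.normal(scale=1.0, size=t.size)

# Frequency transformation: the steady-state response appears as a clear
# spectral peak at the modulation frequency.
spectrum = np.abs(np.fft.rfft(response - response.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(spectrum)]
print(peak_freq)  # peak at the 40 Hz modulation rate
```

The same peak-detection step, applied per source location after beamforming, is what allows the aSSR to be localized separately for each modulation rate.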
38
Santoro R, Moerel M, De Martino F, Valente G, Ugurbil K, Yacoub E, Formisano E. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns. Proc Natl Acad Sci U S A 2017; 114:4799-4804. [PMID: 28420788 PMCID: PMC5422795 DOI: 10.1073/pnas.1617622114] [Citation(s) in RCA: 71] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice).
Affiliation(s)
- Roberta Santoro: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands; Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands; Brain and Language Laboratory, Department of Clinical Neuroscience, University Medical School, University of Geneva, CH-1211 Geneva, Switzerland
- Michelle Moerel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands; Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455; Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
- Federico De Martino: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands; Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Giancarlo Valente: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands; Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Kamil Ugurbil: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Essa Yacoub: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Elia Formisano: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands; Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands; Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
39
Barton B, Brewer AA. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus. Front Integr Neurosci 2017; 11:4. [PMID: 28293182 PMCID: PMC5329644 DOI: 10.3389/fnint.2017.00004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2016] [Accepted: 02/10/2017] [Indexed: 11/13/2022] Open
Abstract
The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs) that each follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2-8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1-8° of visual angle across the same region. 
In addition, cortical magnification measurements show that a larger extent of the pSTS VFM surface area is devoted to the peripheral visual field than in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing.
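The population receptive field (pRF) modeling used in work like this predicts a voxel's response from the overlap between a stimulus aperture and a Gaussian receptive field in visual-field coordinates. The sketch below is a minimal illustration of that forward model only; the `prf_response` helper, grid, and parameters are assumptions for demonstration, not the study's fitting code:

```python
import numpy as np

def prf_response(x0, y0, sigma, aperture, xs, ys):
    """Predicted response of a Gaussian pRF centered at (x0, y0) with size
    sigma: the overlap of a binary stimulus aperture with the Gaussian."""
    gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return (aperture * gauss).sum()

# Visual-field grid spanning +/- 10 degrees
coords = np.linspace(-10, 10, 101)
xs, ys = np.meshgrid(coords, coords)

# A vertical bar at x = 3 deg drives a pRF centered there more strongly
# than a pRF of the same size centered in the opposite hemifield.
bar = (np.abs(xs - 3) < 1).astype(float)
driven = prf_response(3, 0, 2, bar, xs, ys)
undriven = prf_response(-3, 0, 2, bar, xs, ys)
print(driven > undriven)  # True
```

Fitting (x0, y0, sigma) per voxel across many aperture positions is what yields the VFM eccentricity, polar-angle, and pRF-size estimates reported above.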
Affiliation(s)
- Brian Barton: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Alyssa A Brewer: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA; Department of Linguistics, University of California, Irvine, Irvine, CA, USA; Center for Hearing Research, University of California, Irvine, Irvine, CA, USA
40
Ding N, Patel AD, Chen L, Butler H, Luo C, Poeppel D. Temporal modulations in speech and music. Neurosci Biobehav Rev 2017; 81:181-187. [PMID: 28212857 DOI: 10.1016/j.neubiorev.2017.02.011] [Citation(s) in RCA: 226] [Impact Index Per Article: 32.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2016] [Revised: 02/09/2017] [Accepted: 02/10/2017] [Indexed: 10/20/2022]
Abstract
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, as well as symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
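A temporal modulation spectrum of the kind analyzed above is obtained by extracting a sound's slow intensity envelope and taking its frequency spectrum. The following numpy sketch is a rough illustration under assumed parameters; the `modulation_spectrum` helper and its crude rectify-and-average envelope are not the authors' analysis pipeline:

```python
import numpy as np

def modulation_spectrum(signal, fs, env_fs=100):
    """Crude temporal modulation spectrum: rectify, downsample to an
    intensity envelope at env_fs by block averaging, then take its FFT."""
    block = int(fs / env_fs)
    n = (len(signal) // block) * block
    envelope = np.abs(signal[:n]).reshape(-1, block).mean(axis=1)
    envelope -= envelope.mean()               # drop the DC component
    spec = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(envelope.size, 1.0 / env_fs)
    return freqs, spec

# Toy check: a 1 kHz tone amplitude-modulated at 5 Hz (a speech-like rate)
fs = 16000
t = np.arange(0, 4, 1 / fs)
sound = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 1000 * t)
freqs, spec = modulation_spectrum(sound, fs)
print(freqs[np.argmax(spec)])  # peak at the 5 Hz modulation rate
```

Averaged over a large corpus, the same computation yields the broad ∼5 Hz (speech) and ∼2 Hz (music) peaks described in the abstract.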
Affiliation(s)
- Nai Ding: College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China; Department of Psychology, New York University, New York, NY, United States; Interdisciplinary Center for Social Sciences, Zhejiang University, China; Neuro and Behavior EconLab, Zhejiang University of Finance and Economics, China
- Aniruddh D Patel: Department of Psychology, Tufts University, Medford, MA, United States; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Lin Chen: Department of Psychology, New York University, New York, NY, United States; College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China
- Henry Butler: Department of Psychology, Tufts University, Medford, MA, United States
- Cheng Luo: College of Biomedical Engineering and Instrument Sciences, Zhejiang University, China
- David Poeppel: Department of Psychology, New York University, New York, NY, United States; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
41
Ortiz-Rios M, Azevedo FAC, Kuśmierek P, Balla DZ, Munk MH, Keliris GA, Logothetis NK, Rauschecker JP. Widespread and Opponent fMRI Signals Represent Sound Location in Macaque Auditory Cortex. Neuron 2017; 93:971-983.e4. [PMID: 28190642 DOI: 10.1016/j.neuron.2017.01.013] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Revised: 12/05/2016] [Accepted: 01/15/2017] [Indexed: 11/15/2022]
Abstract
In primates, posterior auditory cortical areas are thought to be part of a dorsal auditory pathway that processes spatial information. But how posterior (and other) auditory areas represent acoustic space remains a matter of debate. Here we provide new evidence based on functional magnetic resonance imaging (fMRI) of the macaque indicating that space is predominantly represented by a distributed hemifield code rather than by a local spatial topography. Hemifield tuning in cortical and subcortical regions emerges from an opponent hemispheric pattern of activation and deactivation that depends on the availability of interaural delay cues. Importantly, these opponent signals allow responses in posterior regions to segregate space similarly to a hemifield code representation. Taken together, our results reconcile seemingly contradictory views by showing that the representation of space follows closely a hemifield code and suggest that enhanced posterior-dorsal spatial specificity in primates might emerge from this form of coding.
Affiliation(s)
- Michael Ortiz-Rios: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany; Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA; Institute of Neuroscience, Henry Wellcome Building, Medical School, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK
- Frederico A C Azevedo: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany
- Paweł Kuśmierek: Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA
- Dávid Z Balla: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany
- Matthias H Munk: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Department of Systems Neurophysiology, Fachbereich Biologie, Technische Universität Darmstadt, Schnittspahnstraße 10, 64287 Darmstadt, Germany
- Georgios A Keliris: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Bio-Imaging Lab, Department of Biomedical Sciences, University of Antwerp, Wilrijk, 2610, Belgium
- Nikos K Logothetis: Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester, M13 9PL, UK
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, Washington, DC 20057, USA; Institute for Advanced Study of Technische Universität München, Lichtenbergstraße 2 a, 85748 Garching, Germany
42
Gardumi A, Ivanov D, Havlicek M, Formisano E, Uludağ K. Tonotopic maps in human auditory cortex using arterial spin labeling. Hum Brain Mapp 2016; 38:1140-1154. [PMID: 27790786 PMCID: PMC5324648 DOI: 10.1002/hbm.23444] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Revised: 09/27/2016] [Accepted: 10/11/2016] [Indexed: 11/08/2022] Open
Abstract
A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo-continuous arterial spin labeling (pCASL) to map tonotopy and voice-selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF-based tonotopy and found a good agreement with BOLD signal-based tonotopy, despite the lower contrast-to-noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high-low-high frequency gradients, co-located with the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility-weighted imaging was employed to investigate the tissue specificity of CBF and the BOLD signal, and the possible venous bias of BOLD-based tonotopy. For voxels active only in BOLD, we found a higher percentage of vein contamination than for voxels active only in CBF. Taken together, we demonstrated that both baseline and stimulus-induced CBF is an alternative fMRI approach to the standard BOLD signal for studying auditory processing and delineating the functional organization of the auditory cortex.
Affiliation(s)
- Anna Gardumi: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Dimo Ivanov: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Martin Havlicek: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Elia Formisano: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Kâmil Uludağ: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
43
Velocity Selective Networks in Human Cortex Reveal Two Functionally Distinct Auditory Motion Systems. PLoS One 2016; 11:e0157131. [PMID: 27294673 PMCID: PMC4905637 DOI: 10.1371/journal.pone.0157131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2015] [Accepted: 05/25/2016] [Indexed: 12/02/2022] Open
Abstract
The auditory system encounters motion cues through an acoustic object’s movement or rotation of the listener’s head in a stationary sound field, generating a wide range of naturally occurring velocities from a few to several hundred degrees per second. The angular velocity of moving acoustic objects relative to a listener is typically slow and does not exceed tens of degrees per second, whereas head rotations in a stationary acoustic field may generate fast-changing spatial cues in the order of several hundred degrees per second. We hypothesized that these two types of systems (i.e., encoding slow movements of an object or fast head rotations) may engage functionally distinct substrates in processing spatially dynamic auditory cues, with the latter potentially involved in maintaining perceptual constancy in a stationary field during head rotations and therefore possibly involving corollary-discharge mechanisms in premotor cortex. Using fMRI, we examined cortical response patterns to sound sources moving at a wide range of velocities in 3D virtual auditory space. We found a significant categorical difference between fast and slow moving sounds, with stronger activations in response to higher velocities in the posterior superior temporal regions, the planum temporale, and notably the premotor ventral-rostral (PMVr) area implicated in planning neck and head motor functions.
44
Abstract
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports.
Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.
45
Norman-Haignere S, Kanwisher NG, McDermott JH. Distinct Cortical Pathways for Music and Speech Revealed by Hypothesis-Free Voxel Decomposition. Neuron 2016; 88:1281-1296. [PMID: 26687225 DOI: 10.1016/j.neuron.2015.11.035] [Citation(s) in RCA: 181] [Impact Index Per Article: 22.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Revised: 10/03/2015] [Accepted: 11/23/2015] [Indexed: 11/19/2022]
Abstract
The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
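The core decomposition idea, modeling each voxel's responses to many sounds as a weighted combination of a small set of canonical component response profiles, can be illustrated on synthetic data. This sketch uses a plain SVD as a stand-in for the authors' custom decomposition method, so it demonstrates only the generic low-rank structure of such data, not their algorithm; all sizes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_components = 165, 500, 6

# Synthetic ground truth: every voxel's response to each sound is a weighted
# combination of a few shared component response profiles.
components = rng.normal(size=(n_components, n_sounds))   # canonical profiles
weights = rng.random(size=(n_voxels, n_components))      # per-voxel weights
data = weights @ components                              # voxels x sounds

# A rank-6 factorization fully recovers the shared low-dimensional structure.
U, S, Vt = np.linalg.svd(data, full_matrices=False)
approx = U[:, :n_components] * S[:n_components] @ Vt[:n_components]
recovered = np.allclose(approx, data)
print(recovered)  # True: six components explain the voxel responses
```

In the real analysis the factorization must be inferred from noisy measurements, and the components become interpretable only after the kind of unconstrained, hypothesis-free decomposition the abstract describes.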
Affiliation(s)
- Nancy G Kanwisher: Department of Brain and Cognitive Sciences, MIT; McGovern Institute for Brain Science, MIT
46
Abstract
One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.
Affiliation(s)
- Alyssa A Brewer: Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
- Brian Barton: Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
47
Häkkinen S, Ovaska N, Rinne T. Processing of pitch and location in human auditory cortex during visual and auditory tasks. Front Psychol 2015; 6:1678. [PMID: 26594185 PMCID: PMC4635202 DOI: 10.3389/fpsyg.2015.01678] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Accepted: 10/19/2015] [Indexed: 01/22/2023] Open
Abstract
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. 
We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand.
Affiliation(s)
- Suvi Häkkinen: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Noora Ovaska: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
48
Gao PP, Zhang JW, Fan SJ, Sanes DH, Wu EX. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study. Neuroimage 2015; 123:22-32. [PMID: 26306991 DOI: 10.1016/j.neuroimage.2015.08.040] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Revised: 08/15/2015] [Accepted: 08/18/2015] [Indexed: 11/19/2022] Open
Abstract
The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex sends direct and indirect pathways (the latter via brainstem cholinergic cells) to the nucleus of the auditory midbrain, the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either the auditory or the visual cortex. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones; most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and had a much smaller effect on response selectivity. These results suggest that auditory cortical projections normally exert a large-scale, net suppressive influence on specific IC subnuclei, whereas visual cortical projections provide a facilitatory influence; auditory cortical projections also enhance midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in this descending modulation by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced response selectivity to vocalizations. The results imply that auditory cortical gain modulation is mediated primarily through the direct projections, and they point to future investigations of the differential roles of the direct and indirect projections in corticofugal modulation. In summary, our imaging findings demonstrate large-scale descending influences, from both the auditory and visual cortices, on sound processing in different IC subdivisions, and can guide future studies of coordinated activity across multiple regions of the auditory network and its dysfunctions.
Affiliation(s)
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jevin W Zhang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Shu-Juan Fan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, United States
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Anatomy, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
49
Hickok G, Farahbod H, Saberi K. The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation. Psychol Sci 2015; 26:1006-13. [PMID: 25968248 DOI: 10.1177/0956797615576533] [Citation(s) in RCA: 80] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2014] [Accepted: 02/17/2015] [Indexed: 11/15/2022] Open
Abstract
Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry.
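The entrainment paradigm in the abstract above can be sketched in code. This is a minimal illustration only: the 3 Hz modulation rate and 1 kHz probe frequency are taken from the abstract, but the noise carrier, durations, sample rate, and function names are assumptions for the sketch, not the authors' actual stimulus code.

```python
import numpy as np

FS = 44100  # sample rate in Hz; an assumed value, not from the paper

def am_entrainer(mod_freq=3.0, dur=2.0, fs=FS, rng=None):
    """3 Hz amplitude-modulated entrainment signal.

    The noise carrier is an assumption; the abstract specifies only
    the low-frequency (3 Hz) amplitude modulation."""
    rng = np.random.default_rng(0) if rng is None else rng
    t = np.arange(int(dur * fs)) / fs
    envelope = 0.5 * (1.0 + np.cos(2 * np.pi * mod_freq * t))  # 0..1, cycles at mod_freq
    carrier = rng.uniform(-1.0, 1.0, size=t.size)
    return envelope * carrier

def probe_tone(freq=1000.0, dur=0.01, fs=FS):
    """Brief 1 kHz probe tone whose detectability is measured after entrainment."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

def trial(delay_s, fs=FS):
    """One trial: entrainer, a silent gap of `delay_s` seconds, then the probe.

    Detection performance as a function of `delay_s` is what the study
    found to oscillate at the 3 Hz entrainment rate."""
    gap = np.zeros(int(delay_s * fs))
    return np.concatenate([am_entrainer(fs=fs), gap, probe_tone(fs=fs)])
```

Varying `delay_s` across trials samples different phases of the hypothesized post-stimulus perceptual oscillation, which is how a frequency match to the entrainer can be tested.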
Affiliation(s)
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine
- Haleh Farahbod
- Department of Cognitive Sciences, University of California, Irvine
- Kourosh Saberi
- Department of Cognitive Sciences, University of California, Irvine
50
Gao PP, Zhang JW, Chan RW, Leong ATL, Wu EX. BOLD fMRI study of ultrahigh frequency encoding in the inferior colliculus. Neuroimage 2015; 114:427-37. [PMID: 25869860 DOI: 10.1016/j.neuroimage.2015.04.007] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2014] [Revised: 03/02/2015] [Accepted: 04/02/2015] [Indexed: 01/23/2023] Open
Abstract
Many vertebrates communicate with ultrahigh-frequency (UHF) vocalizations to limit auditory detection by predators. The mechanisms underlying the neural encoding of such UHF sounds may provide important insights into the neural processing of other complex sounds (e.g., human speech). In the auditory system, sound frequency is normally encoded topographically as tonotopy, which, however, contains very limited representation of UHFs in many species. Instead, electrophysiological studies suggest that two neural mechanisms, both exploiting interactions between frequencies, may contribute to UHF processing. Neurons can exhibit excitatory or inhibitory responses to a tone when another UHF tone is presented simultaneously (combination sensitivity). They can also respond to such stimulation if they are tuned to the frequency of the cochlear-generated distortion products of the two tones, e.g., their difference frequency (cochlear distortion). Both mechanisms are present in an early station of the auditory pathway, the midbrain inferior colliculus (IC). Currently, it is unclear how prevalent the two mechanisms are and how they are functionally integrated in encoding UHFs. This study investigated these issues with large-view BOLD fMRI in the rat auditory system, particularly the IC. UHF vocalizations (above 40 kHz), but not pure tones at similar frequencies (45, 55, 65, and 75 kHz), evoked robust BOLD responses in multiple auditory nuclei, including the IC, reinforcing the sensitivity of the auditory system to UHFs despite their limited representation in tonotopy. Furthermore, BOLD responses were detected in the IC when a pair of UHF pure tones was presented simultaneously (45 & 55 kHz, 55 & 65 kHz, 45 & 65 kHz, or 45 & 75 kHz). For all four pairs, a cluster of voxels on the ventromedial side always showed the strongest responses, displaying combination sensitivity. Meanwhile, voxels on the dorsolateral side that showed the strongest secondary responses to each pair of UHF pure tones also showed the strongest responses to a pure tone at their difference frequency, suggesting that they are sensitive to cochlear distortion. These BOLD fMRI results indicate that combination sensitivity and cochlear distortion are employed by large but spatially distinct neuron populations in the IC to represent UHFs. Our imaging findings provide insights for understanding sound-feature encoding in the early stages of the auditory pathway.
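As a quick arithmetic companion to the cochlear-distortion result, the difference frequencies for the four tone pairs can be tabulated. Only the pair list comes from the abstract; the variable names are illustrative:

```python
# Difference frequencies (f2 - f1, in Hz) for the four UHF tone pairs.
# A voxel tuned to cochlear distortion should respond to a pure tone
# at the pair's difference frequency.
pairs_khz = [(45, 55), (55, 65), (45, 65), (45, 75)]
diff_freqs_hz = [abs(f2 - f1) * 1000 for f1, f2 in pairs_khz]
# Note that the 45 & 55 kHz and 55 & 65 kHz pairs share the same 10 kHz
# difference frequency, so they target the same distortion-tuned voxels.
```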
Affiliation(s)
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jevin W Zhang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Russell W Chan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Alex T L Leong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Anatomy, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China