51
Responses in area hMT+ reflect tuning for both auditory frequency and motion after blindness early in life. Proc Natl Acad Sci U S A 2019; 116:10081-10086. [PMID: 31036666] [DOI: 10.1073/pnas.1815376116]
Abstract
Previous studies report that human middle temporal complex (hMT+) is sensitive to auditory motion in early-blind individuals. Here, we show that hMT+ also develops selectivity for auditory frequency after early blindness, and that this selectivity is maintained after sight recovery in adulthood. Frequency selectivity was assessed using both moving band-pass and stationary pure-tone stimuli. As expected, within primary auditory cortex, both moving and stationary stimuli successfully elicited frequency-selective responses, organized in a tonotopic map, for all subjects. In early-blind and sight-recovery subjects, we saw evidence for frequency selectivity within hMT+ for the auditory stimulus that contained motion. We did not find frequency-tuned responses within hMT+ when using the stationary stimulus in either early-blind or sight-recovery subjects. We saw no evidence for auditory frequency selectivity in hMT+ in sighted subjects using either stimulus. Thus, after early blindness, hMT+ can exhibit selectivity for auditory frequency. Remarkably, this auditory frequency tuning persists in two adult sight-recovery subjects, showing that, in these subjects, auditory frequency-tuned responses can coexist with visually driven responses in hMT+.
52
Meier J, Nolte G, Schneider TR, Engel AK, Leicht G, Mulert C. Intrinsic 40Hz-phase asymmetries predict tACS effects during conscious auditory perception. PLoS One 2019; 14:e0213996. [PMID: 30943251] [PMCID: PMC6447177] [DOI: 10.1371/journal.pone.0213996]
Abstract
Synchronized oscillatory gamma-band activity (30-100Hz) has been suggested to constitute a key mechanism to dynamically orchestrate sensory information integration across multiple spatio-temporal scales. Here we tested whether interhemispheric functional connectivity and ensuing auditory perception can selectively be modulated by high-density transcranial alternating current stimulation (HD-tACS). For this purpose, we applied multi-site HD-tACS at 40Hz bilaterally with a phase lag of 180° and recorded a 64-channel EEG to study the oscillatory phase dynamics at the source-space level during a dichotic listening (DL) task in twenty-six healthy participants. In this study, we revealed an oscillatory phase signature at 40Hz which reflects different temporal profiles of the phase asymmetries during left- and right-ear percepts. Here we report that 180°-tACS did not affect the right ear advantage during DL at the group level. However, a follow-up analysis revealed that the intrinsic phase asymmetries during sham-tACS determined the directionality of the behavioral modulations: while a shift to a left-ear percept was associated with augmented interhemispheric asymmetry (closer to 180°), a shift to right-ear processing was elicited in subjects with lower asymmetry (closer to 0°). Crucially, the modulation of the interhemispheric network dynamics depended on the deviation of the tACS-induced phase lag from the intrinsic phase asymmetry. Our characterization of these oscillatory network dynamics highlights the importance of phase-specific gamma-band coupling during ambiguous auditory perception and underscores the need for future studies to address the inter-individual variability of phase asymmetries with tailored stimulation protocols.
Affiliation(s)
- Jan Meier
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Guido Nolte
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Till R. Schneider
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas K. Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gregor Leicht
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Christoph Mulert
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Centre for Psychiatry and Psychotherapy, Justus-Liebig-University Giessen, Giessen, Germany
53
Larger Auditory Cortical Area and Broader Frequency Tuning Underlie Absolute Pitch. J Neurosci 2019; 39:2930-2937. [PMID: 30745420] [DOI: 10.1523/jneurosci.1532-18.2019]
Abstract
Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms underlying this ability have not been clear. We presented a series of logarithmic frequency sweeps to age- and sex-matched groups of musicians with or without AP and controls without musical training. We used fMRI and population receptive field (pRF) modeling to measure the responses in the auditory cortex in 61 human subjects. The tuning response of each fMRI voxel was characterized as Gaussian, with independent center frequency and bandwidth parameters. We identified three distinct tonotopic maps, corresponding to primary (A1), rostral (R), and rostral-temporal (RT) regions of auditory cortex. We initially hypothesized that AP abilities might manifest in sharper tuning in the auditory cortex. However, we observed that AP subjects had larger cortical area, with the increased area primarily devoted to broader frequency tuning. We observed anatomically that A1, R, and RT were significantly larger in AP musicians than in non-AP musicians or control subjects, who did not differ significantly from each other. The increased cortical area in AP subjects in areas A1 and R was primarily low-frequency and broadly tuned, whereas the distribution of responses in area RT did not differ significantly. We conclude that AP abilities are associated with increased early auditory cortical area devoted to broad-frequency tuning and likely exploit increased ensemble encoding. SIGNIFICANCE STATEMENT: Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms have not been clear. Our study shows that AP musicians have significantly larger volume in early auditory cortex than non-AP musicians and non-musician controls and that this increased volume is primarily devoted to broad-frequency tuning. We conclude that AP musicians are likely able to exploit increased ensemble representations to encode and identify frequency.
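The voxel-wise pRF approach summarized above lends itself to a compact illustration. The sketch below is not the authors' code; the data, function names, and starting values are made-up assumptions. It fits a Gaussian tuning curve in log2-frequency space to one voxel's responses, recovering the center-frequency and bandwidth parameters of the kind compared between AP and non-AP musicians.

```python
import numpy as np
from scipy.optimize import curve_fit

def prf_gaussian(log2_freq, center, bandwidth, gain):
    """Gaussian tuning curve in log2-frequency space (illustrative pRF model)."""
    return gain * np.exp(-0.5 * ((log2_freq - center) / bandwidth) ** 2)

def fit_voxel_prf(stim_log2_freqs, voxel_response):
    """Least-squares fit of center frequency, bandwidth and gain for one voxel."""
    p0 = [np.median(stim_log2_freqs), 1.0, voxel_response.max()]
    params, _ = curve_fit(prf_gaussian, stim_log2_freqs, voxel_response,
                          p0=p0, maxfev=5000)
    return dict(center=params[0], bandwidth=params[1], gain=params[2])

# Toy voxel tuned near 1 kHz (log2(1000) ~ 9.97) with a one-octave bandwidth
rng = np.random.default_rng(0)
freqs = np.log2(np.logspace(np.log10(100), np.log10(8000), 40))
resp = prf_gaussian(freqs, np.log2(1000), 1.0, 2.0) + 0.05 * rng.standard_normal(40)
print(fit_voxel_prf(freqs, resp))
```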
54
Reduced Structural Connectivity Between Left Auditory Thalamus and the Motion-Sensitive Planum Temporale in Developmental Dyslexia. J Neurosci 2019; 39:1720-1732. [PMID: 30643025] [DOI: 10.1523/jneurosci.1435-18.2018]
Abstract
Developmental dyslexia is characterized by the inability to acquire typical reading and writing skills. Dyslexia has been frequently linked to cerebral cortex alterations; however, recent evidence also points toward sensory thalamus dysfunctions: dyslexics showed reduced responses in the left auditory thalamus (medial geniculate body, MGB) during speech processing in contrast to neurotypical readers. In addition, in the visual modality, dyslexics have reduced structural connectivity between the left visual thalamus (lateral geniculate nucleus, LGN) and V5/MT, a cerebral cortex region involved in visual movement processing. Higher LGN-V5/MT connectivity in dyslexics was associated with faster rapid naming of letters and numbers (RANln), a measure that is highly correlated with reading proficiency. Here, we tested two hypotheses that were directly derived from these previous findings. First, we tested the hypothesis that dyslexics have reduced structural connectivity between the left MGB and the auditory-motion-sensitive part of the left planum temporale (mPT). Second, we hypothesized that the amount of left mPT-MGB connectivity correlates with dyslexics' RANln scores. Using diffusion tensor imaging-based probabilistic tracking, we show that male adults with developmental dyslexia have reduced structural connectivity between the left MGB and the left mPT, confirming the first hypothesis. Stronger left mPT-MGB connectivity was not associated with faster RANln scores in dyslexics, but was in neurotypical readers. Our findings provide the first evidence that reduced cortico-thalamic connectivity in the auditory modality is a feature of developmental dyslexia, and that this connectivity may also affect reading-related cognitive abilities in neurotypical readers. SIGNIFICANCE STATEMENT: Developmental dyslexia is one of the most widespread learning disabilities. Although previous neuroimaging research mainly focused on pathomechanisms of dyslexia at the cerebral cortex level, several lines of evidence suggest an atypical functioning of subcortical sensory structures. By means of diffusion tensor imaging, we here show that dyslexic male adults have reduced white matter connectivity in a cortico-thalamic auditory pathway between the left auditory motion-sensitive planum temporale and the left medial geniculate body. Connectivity strength of this pathway was associated with measures of reading fluency in neurotypical readers. This is novel evidence on the neurocognitive correlates of reading proficiency, highlighting the importance of cortico-subcortical interactions between regions involved in the processing of spectrotemporally complex sound.
55
Zoellner S, Benner J, Zeidler B, Seither-Preisler A, Christiner M, Seitz A, Goebel R, Heinecke A, Wengenroth M, Blatow M, Schneider P. Reduced cortical thickness in Heschl's gyrus as an in vivo marker for human primary auditory cortex. Hum Brain Mapp 2018; 40:1139-1154. [PMID: 30367737] [DOI: 10.1002/hbm.24434]
Abstract
The primary auditory cortex (PAC) is located in the region of Heschl's gyrus (HG), as confirmed by histological, cytoarchitectonical, and neurofunctional studies. Applying cortical thickness (CTH) analysis based on high-resolution magnetic resonance imaging (MRI) and magnetoencephalography (MEG) in 60 primary school children and 60 adults, we investigated the CTH distribution of left and right auditory cortex (AC) and primary auditory source activity at the group and individual level. Both groups showed contoured regions of reduced auditory cortex (redAC) along the mediolateral extension of HG, illustrating large inter-individual variability with respect to shape, localization, and lateralization. In the right hemisphere, redAC localized more within the medial portion of HG, extending typically across HG duplications. In the left hemisphere, redAC was distributed significantly more laterally, reaching toward the anterolateral portion of HG. In both hemispheres, redAC was found to be significantly thinner (mean CTH of 2.34 mm) as compared to surrounding areas (2.99 mm). This effect was more pronounced in the right hemisphere than in the left. Moreover, localization of the primary component of auditory evoked activity (P1), as measured by MEG in response to complex harmonic sounds, strictly co-localized with redAC. This structure-function link was found consistently at the group and individual level, suggesting PAC to be represented by areas of reduced cortex in HG. Thus, we propose reduced CTH as an in vivo marker for identifying shape and localization of PAC in the individual brain.
Affiliation(s)
- Simeon Zoellner
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany
- Jan Benner
- Department of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany
- Bettina Zeidler
- Department of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany; Institute of Systematic Musicology, University of Hamburg, Hamburg, Germany
- Markus Christiner
- Department of Linguistics, Unit for Language Learning and Teaching Research, University of Vienna, Vienna, Austria
- Angelika Seitz
- Department of Phoniatrics and Pedaudiology, University of Heidelberg Medical School, Heidelberg, Germany
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology, Universiteit Maastricht, Maastricht, The Netherlands
- Armin Heinecke
- Department of Cognitive Neuroscience, Faculty of Psychology, Universiteit Maastricht, Maastricht, The Netherlands
- Martina Wengenroth
- Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Maria Blatow
- Department of Neuroradiology and Clinical Neuroscience Center, University Hospital Zürich, University of Zürich, Zürich, Switzerland
- Peter Schneider
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany; Department of Neuroradiology, University of Heidelberg Medical School, Heidelberg, Germany
56
Kim HC, Bandettini PA, Lee JH. Deep neural network predicts emotional responses of the human brain from functional magnetic resonance imaging. Neuroimage 2018; 186:607-627. [PMID: 30366076] [DOI: 10.1016/j.neuroimage.2018.10.054]
Abstract
An artificial neural network with multiple hidden layers (known as a deep neural network, or DNN) was employed as a predictive model (DNNp) for the first time to predict emotional responses using whole-brain functional magnetic resonance imaging (fMRI) data from individual subjects. During fMRI data acquisition, 10 healthy participants listened to 80 International Affective Digital Sound stimuli and rated their own emotions generated by each sound stimulus in terms of the arousal, dominance, and valence dimensions. The whole-brain spatial patterns from a general linear model (i.e., beta-valued maps) for each sound stimulus and the emotional response ratings were used as the input and output for the DNNP, respectively. Based on a nested five-fold cross-validation scheme, the paired input and output data were divided into training (three-fold), validation (one-fold), and test (one-fold) data. The DNNP was trained and optimized using the training and validation data and was tested using the test data. The Pearson's correlation coefficients between the rated and predicted emotional responses from our DNNP model with weight sparsity optimization (mean ± standard error 0.52 ± 0.02 for arousal, 0.51 ± 0.03 for dominance, and 0.51 ± 0.03 for valence, with an input denoising level of 0.3 and a mini-batch size of 1) were significantly greater than those of DNN models with conventional regularization schemes including elastic net regularization (0.15 ± 0.05, 0.15 ± 0.06, and 0.21 ± 0.04 for arousal, dominance, and valence, respectively), those of shallow models including logistic regression (0.11 ± 0.04, 0.10 ± 0.05, and 0.17 ± 0.04 for arousal, dominance, and valence, respectively; average of logistic regression and sparse logistic regression), and those of support vector machine-based predictive models (SVMps; 0.12 ± 0.06, 0.06 ± 0.06, and 0.10 ± 0.06 for arousal, dominance, and valence, respectively; average of linear and non-linear SVMps). This difference was confirmed to be significant with a Bonferroni-corrected p-value of less than 0.001 from a one-way analysis of variance (ANOVA) and subsequent paired t-test. The weights of the trained DNNPs were interpreted and input patterns that maximized or minimized the output of the DNNPs (i.e., the emotional responses) were estimated. Based on a binary classification of each emotion category (e.g., high arousal vs. low arousal), the error rates for the DNNP (31.2% ± 1.3% for arousal, 29.0% ± 1.7% for dominance, and 28.6% ± 3.0% for valence) were significantly lower than those for the linear SVMP (44.7% ± 2.0%, 50.7% ± 1.7%, and 47.4% ± 1.9% for arousal, dominance, and valence, respectively) and the non-linear SVMP (48.8% ± 2.3%, 52.2% ± 1.9%, and 46.4% ± 1.3% for arousal, dominance, and valence, respectively), as confirmed by the Bonferroni-corrected p < 0.001 from the one-way ANOVA. Our study demonstrates that the DNNp model is able to reveal neuronal circuitry associated with human emotional processing - including structures in the limbic and paralimbic areas, which include the amygdala, prefrontal areas, anterior cingulate cortex, insula, and caudate. Our DNNp model was also able to use activation patterns in these structures to predict and classify emotional responses to stimuli.
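As a rough, hypothetical sketch of the prediction scheme described above, and not the authors' DNN architecture or weight-sparsity regularization, the snippet below trains a small multilayer perceptron to regress arousal, dominance, and valence ratings from whole-brain beta patterns under 5-fold cross-validation and scores the predictions with Pearson correlation. All data shapes and hyperparameters are illustrative placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

# Hypothetical data: one beta map (n_voxels) per sound, with three ratings each
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 5000))      # 80 stimuli x 5000 voxels (placeholder)
y = rng.standard_normal((80, 3))         # arousal, dominance, valence (placeholder)

r_per_dim = np.zeros((5, 3))
for k, (tr, te) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X)):
    model = MLPRegressor(hidden_layer_sizes=(100, 50), max_iter=2000, random_state=0)
    model.fit(X[tr], y[tr])
    pred = model.predict(X[te])
    r_per_dim[k] = [pearsonr(pred[:, d], y[te][:, d])[0] for d in range(3)]

print("mean r (arousal, dominance, valence):", r_per_dim.mean(axis=0))
```

With random placeholder data the correlations hover around zero; the point of the sketch is the train/validate split and per-dimension scoring, not the reported effect sizes.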
Affiliation(s)
- Hyun-Chul Kim
- Department of Brain and Cognitive Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Peter A Bandettini
- Section on Functional Imaging Methods, Lab of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA
- Jong-Hwan Lee
- Department of Brain and Cognitive Engineering, Korea University, Anam-ro 145, Seongbuk-gu, Seoul, 02841, Republic of Korea.
57
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
58
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038]
59
Fisher JM, Dick FK, Levy DF, Wilson SM. Neural representation of vowel formants in tonotopic auditory cortex. Neuroimage 2018; 178:574-582. [PMID: 29860083] [DOI: 10.1016/j.neuroimage.2018.05.072]
Abstract
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
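A minimal sketch of the decoding step described above, assuming mean percent-signal-change values in four formant-defined ROIs as features: linear discriminant analysis with leave-one-out cross-validation classifies vowel identity. The data here are random placeholders, so accuracy hovers near chance; with real ROI responses the same pipeline is the kind that yields the reported ~73%.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical features: mean % signal change in four formant-defined ROIs
# (F1 and F2 regions of the two vowels), one row per vowel presentation.
rng = np.random.default_rng(1)
n_trials = 60
X = rng.standard_normal((n_trials, 4))
y = rng.integers(0, 2, n_trials)          # 0 = first vowel, 1 = second vowel

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out decoding accuracy: {acc:.2f}")
```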
Affiliation(s)
- Julia M Fisher
- Department of Linguistics, University of Arizona, Tucson, AZ, USA; Statistics Consulting Laboratory, BIO5 Institute, University of Arizona, Tucson, AZ, USA
- Frederic K Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK; Birkbeck-UCL Center for Neuroimaging, London, UK; Department of Experimental Psychology, University College London, UK
- Deborah F Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
60
Hamilton LS, Edwards E, Chang EF. A Spatial Map of Onset and Sustained Responses to Speech in the Human Superior Temporal Gyrus. Curr Biol 2018; 28:1860-1871.e4. [DOI: 10.1016/j.cub.2018.04.033]
61
Berger CC, Ehrsson HH. Mental Imagery Induces Cross-Modal Sensory Plasticity and Changes Future Auditory Perception. Psychol Sci 2018; 29:926-935. [DOI: 10.1177/0956797617748959]
Abstract
Can what we imagine in our minds change how we perceive the world in the future? A continuous process of multisensory integration and recalibration is responsible for maintaining a correspondence between the senses (e.g., vision, touch, audition) and, ultimately, a stable and coherent perception of our environment. This process depends on the plasticity of our sensory systems. The so-called ventriloquism aftereffect—a shift in the perceived localization of sounds presented alone after repeated exposure to spatially mismatched auditory and visual stimuli—is a clear example of this type of plasticity in the audiovisual domain. In a series of six studies with 24 participants each, we investigated an imagery-induced ventriloquism aftereffect in which imagining a visual stimulus elicits the same frequency-specific auditory aftereffect as actually seeing one. These results demonstrate that mental imagery can recalibrate the senses and induce the same cross-modal sensory plasticity as real sensory stimuli.
Affiliation(s)
- Christopher C. Berger
- Department of Neuroscience, Karolinska Institutet
- Division of Biology and Biological Engineering, California Institute of Technology
62
Oya H, Gander PE, Petkov CI, Adolphs R, Nourski KV, Kawasaki H, Howard MA, Griffiths TD. Neural phase locking predicts BOLD response in human auditory cortex. Neuroimage 2018; 169:286-301. [PMID: 29274745] [PMCID: PMC6139034] [DOI: 10.1016/j.neuroimage.2017.12.051]
Abstract
Natural environments elicit both phase-locked and non-phase-locked neural responses to the stimulus in the brain. To date, the interpretation of the BOLD signal has been based on its association with the non-phase-locked power of high-frequency local field potentials (LFPs), or with the related spiking activity in single neurons or groups of neurons. Previous studies have not examined the prediction of the BOLD signal by phase-locked responses. We examined the relationship between the BOLD response and LFPs in the same nine human subjects from multiple corresponding points in the auditory cortex, using amplitude-modulated pure-tone stimuli long enough to allow an analysis of phase locking during the sustained period without contamination from the onset response. The results demonstrate that both phase locking at the modulation frequency and its harmonics, and the oscillatory power in gamma/high-gamma bands, are required to predict the BOLD response. Biophysical models of BOLD signal generation in auditory cortex therefore require revision and the incorporation of both phase locking to rhythmic sensory stimuli and power changes in the ensemble neural activity.
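A hedged sketch of the two LFP-derived predictors named above: inter-trial phase locking at the stimulus modulation frequency and gamma/high-gamma band power, combined in a linear regression against site-wise BOLD amplitudes. The sampling rate, band limits, and all data are assumptions, and the regression fit is in-sample, for illustration only.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression

fs = 1000.0                      # LFP sampling rate (Hz), assumed
mod_freq = 40.0                  # stimulus amplitude-modulation rate (Hz), assumed

def phase_locking(trials, fs, freq):
    """Inter-trial phase coherence at one frequency (trials: n_trials x n_samples)."""
    t = np.arange(trials.shape[1]) / fs
    phases = np.angle(trials @ np.exp(-2j * np.pi * freq * t))
    return np.abs(np.mean(np.exp(1j * phases)))

def gamma_power(trials, fs, band=(60.0, 120.0)):
    """Mean Welch power in a gamma/high-gamma band, averaged over trials."""
    f, pxx = welch(trials, fs=fs, nperseg=512, axis=-1)
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[:, sel].mean()

# Hypothetical data: LFP trials and a BOLD amplitude at each recording site
rng = np.random.default_rng(2)
n_sites = 20
plv = np.array([phase_locking(rng.standard_normal((30, 2000)), fs, mod_freq)
                for _ in range(n_sites)])
gam = np.array([gamma_power(rng.standard_normal((30, 2000)), fs)
                for _ in range(n_sites)])
bold = rng.standard_normal(n_sites)

X = np.column_stack([plv, gam])
print("in-sample R^2 of combined predictors:", LinearRegression().fit(X, bold).score(X, bold))
```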
Affiliation(s)
- Hiroyuki Oya
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA.
- Phillip E Gander
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Ralph Adolphs
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
- Kirill V Nourski
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Hiroto Kawasaki
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Matthew A Howard
- Department of Neurosurgery, Human Brain Research Laboratory, University of Iowa, Iowa City, IA 52252, USA
- Timothy D Griffiths
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, UK
63
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215] [DOI: 10.1093/cercor/bhw160]
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
64
Hoefle S, Engel A, Basilio R, Alluri V, Toiviainen P, Cagy M, Moll J. Identifying musical pieces from fMRI data using encoding and decoding models. Sci Rep 2018; 8:2266. [PMID: 29396524] [PMCID: PMC5797093] [DOI: 10.1038/s41598-018-20732-3]
Abstract
Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
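A minimal sketch, under assumed data shapes, of the two-stage idea described above: a ridge encoding model maps musical features to voxel responses, an unseen piece is identified by correlating its measured response pattern with the pattern predicted for each candidate piece, and Shannon entropy provides a simple information-content measure for the audio. None of this is the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

def shannon_entropy(x, n_bins=32):
    """Shannon entropy (bits) of a 1-D signal's amplitude histogram."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def identify_piece(enc, measured_bold, candidate_features):
    """Pick the candidate whose predicted response pattern best matches the
    measured fMRI response (Pearson correlation on the flattened pattern)."""
    scores = []
    for feat in candidate_features:                 # each: time x features
        pred = enc.predict(feat)                    # time x voxels
        scores.append(np.corrcoef(pred.ravel(), measured_bold.ravel())[0, 1])
    return int(np.argmax(scores)), scores

# Hypothetical setup: train an encoding model on one run, identify pieces elsewhere
rng = np.random.default_rng(3)
train_feat, train_bold = rng.standard_normal((200, 10)), rng.standard_normal((200, 300))
enc = Ridge(alpha=10.0).fit(train_feat, train_bold)
candidates = [rng.standard_normal((40, 10)) for _ in range(6)]
measured = rng.standard_normal((40, 300))
print("identified piece index:", identify_piece(enc, measured, candidates)[0])
print("entropy of a toy waveform:", shannon_entropy(rng.standard_normal(44100)))
```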
Affiliation(s)
- Sebastian Hoefle
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil; Biomedical Engineering Program, COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Annerose Engel
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil; Day Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Rodrigo Basilio
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil
- Vinoo Alluri
- Finnish Centre for Interdisciplinary Music Research, Department of Music, Art and Culture Studies, University of Jyväskylä, Jyväskylä, Finland; International Institute of Information Technology, Gachibowli, Hyderabad, India
- Petri Toiviainen
- Finnish Centre for Interdisciplinary Music Research, Department of Music, Art and Culture Studies, University of Jyväskylä, Jyväskylä, Finland
- Maurício Cagy
- Biomedical Engineering Program, COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Jorge Moll
- Cognitive and Behavioral Neuroscience Unit and Neuroinformatics Workgroup, D'Or Institute for Research and Education (IDOR), Rio de Janeiro, Brazil.
65
Häkkinen S, Rinne T. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex. Brain Struct Funct 2018; 223:2113-2127. [DOI: 10.1007/s00429-018-1612-6]
66
Hu X, Guo L, Han J, Liu T. Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience. Brain Imaging Behav 2018; 11:253-263. [PMID: 26860834] [DOI: 10.1007/s11682-016-9515-8]
Abstract
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high-resolution functional magnetic resonance imaging (fMRI) dataset acquired while participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and power intensity deviants of PSD profiles. In addition, our study substantiates the feasibility and advantages of the naturalistic paradigm for studying the neural encoding of complex auditory information.
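A hedged sketch of the pipeline outlined above: Welch PSD descriptors for audio excerpts, k-means clustering to define representative PSD profiles, and a linear SVM trained to decode the profile label from the corresponding fMRI patterns. The sampling rate, cluster count, and all data are placeholder assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 22050                                                # audio sampling rate, assumed
rng = np.random.default_rng(4)
audio_excerpts = rng.standard_normal((120, fs * 2))       # 120 two-second excerpts

# 1) PSD descriptor per excerpt
psd = np.array([welch(x, fs=fs, nperseg=1024)[1] for x in audio_excerpts])
psd = np.log(psd + 1e-12)                                 # log-power descriptors

# 2) representative PSD profiles via clustering
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(psd)

# 3) decode the PSD profile label from the corresponding fMRI patterns
fmri_patterns = rng.standard_normal((120, 2000))          # hypothetical voxel patterns
acc = cross_val_score(SVC(kernel="linear"), fmri_patterns, labels, cv=5).mean()
print(f"cross-validated profile decoding accuracy: {acc:.2f}")
```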
Affiliation(s)
- Xintao Hu
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Lei Guo
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Junwei Han
- School of Automation, Northwestern Polytechnical University, Xi'an, China.
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
67
Marques Abramov D, Saad T, Gomes-Junior SC, de Souza E Silva D, Araújo I, Lopes Moreira ME, Lazarev VV. Auditory brainstem function in microcephaly related to Zika virus infection. Neurology 2018; 90:e606-e614. [PMID: 29352094] [DOI: 10.1212/wnl.0000000000004974]
Abstract
OBJECTIVE To study the effect of prenatal Zika virus (ZV) infection on brainstem function as reflected in brainstem auditory evoked potentials (BAEPs). METHODS In a cross-sectional study of 19 children (12 girls) with microcephaly related to ZV infection, aged between 12 and 62 weeks, brainstem function was examined through BAEPs. The latencies of wave peaks I, III, and V of the left and right ears (n = 37) were standardized against normative data and compared with each other by two-tailed t tests. The confounding variables (cephalic perimeter at birth and chronological age) were correlated with the normalized latencies using Pearson correlation. RESULTS All patients showed, in general, clear waveforms, with latencies within 3 SDs of the normative values. However, statistically increased latencies of waves I and III (I > III, p = 0.031) were observed relative to wave V (p < 0.001), the latter being closer to its respective normative value. The latency of wave I was observed to increase with age (r = 0.45, p = 0.005). Wave latencies, in turn, did not depend on cephalic perimeter. CONCLUSIONS These results are consistent with functional normality of the brainstem structure and its lack of correlation with microcephaly, suggesting that the disruption produced by ZV infection does not act in the cell proliferation phase, but mostly in the processes of neuronal migration and differentiation in the telencephalon.
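A small illustration of the standardization-and-comparison analysis described above, with made-up normative values and data: latencies are converted to z-scores against normative means and SDs, waves I and V are compared with a paired two-tailed t test, and wave-I z-scores are correlated with age.

```python
import numpy as np
from scipy import stats

# Hypothetical normative latencies (ms): mean and SD for waves I, III, V
norm_mean = {"I": 1.7, "III": 4.0, "V": 6.0}
norm_sd = {"I": 0.15, "III": 0.25, "V": 0.30}

rng = np.random.default_rng(5)
n_ears = 37
latency = {w: rng.normal(norm_mean[w] + 0.1, norm_sd[w], n_ears) for w in ("I", "III", "V")}
age_weeks = rng.uniform(12, 62, n_ears)

# Standardize each latency against the normative data (z-scores)
z = {w: (latency[w] - norm_mean[w]) / norm_sd[w] for w in latency}

# Paired two-tailed t test between normalized wave I and wave V latencies,
# and correlation of wave-I z-scores with chronological age
t, p = stats.ttest_rel(z["I"], z["V"])
r, p_r = stats.pearsonr(age_weeks, z["I"])
print(f"I vs V: t = {t:.2f}, p = {p:.3f};  wave-I z vs age: r = {r:.2f}, p = {p_r:.3f}")
```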
Affiliation(s)
- Dimitri Marques Abramov, Tania Saad, Saint-Clair Gomes-Junior, Daniel de Souza E Silva, Izabel Araújo, Maria Elizabeth Lopes Moreira, Vladimir V Lazarev
- From the Laboratory of Neurobiology and Clinical Neurophysiology (D.M.A., T.S., D.d.S.e.S., I.A., V.V.L.) and Unit of Clinical Research (S.-C.G.-J., M.E.L.M.), National Institute of Women, Children and Adolescents, Health Fernandes Figueira, Oswaldo Cruz Foundation (FIOCRUZ), Ministry of Health, Rio de Janeiro, Brazil
68
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522] [PMCID: PMC5694557] [DOI: 10.3389/fpsyg.2017.01983]
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
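The reconstruction logic lends itself to a short sketch (a toy version under assumed tuning parameters, not the authors' code): given per-voxel Gaussian tuning in log2-frequency space, each time point's tone is estimated as the candidate frequency whose predicted multi-voxel response is closest, in the least-squares sense, to the observed BOLD pattern.

```python
import numpy as np

CANDIDATE_LOG2_FREQS = np.log2(np.logspace(np.log10(100), np.log10(8000), 200))

def predict_pattern(log2_freq, centers, bandwidths, gains):
    """Predicted response of every voxel to a single tone (Gaussian tuning)."""
    return gains * np.exp(-0.5 * ((log2_freq - centers) / bandwidths) ** 2)

def reconstruct_tones(bold_patterns, centers, bandwidths, gains,
                      candidates=CANDIDATE_LOG2_FREQS):
    """For each time point, pick the candidate frequency whose predicted
    voxel pattern has the smallest squared error to the observed pattern."""
    estimates = []
    for pattern in bold_patterns:                       # rows: time points
        errs = [np.sum((predict_pattern(f, centers, bandwidths, gains) - pattern) ** 2)
                for f in candidates]
        estimates.append(candidates[int(np.argmin(errs))])
    return 2.0 ** np.array(estimates)                   # back to Hz

# Toy example: 50 voxels with random tuning, a five-tone melody, light noise
rng = np.random.default_rng(6)
centers, bw, gains = rng.uniform(7, 13, 50), np.full(50, 1.0), np.ones(50)
true_tones_log2 = np.log2([262, 330, 392, 523, 440])
bold = np.array([predict_pattern(f, centers, bw, gains) for f in true_tones_log2])
bold += 0.05 * rng.standard_normal(bold.shape)
print(np.round(reconstruct_tones(bold, centers, bw, gains)))   # roughly [262 330 392 523 440]
```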
Affiliation(s)
- Kelly H Chang, Jessica M Thomas, Geoffrey M Boynton, Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
69
Li Q, Liu G, Wei D, Guo J, Yuan G, Wu S. The spatiotemporal pattern of pure tone processing: A single-trial EEG-fMRI study. Neuroimage 2017; 187:184-191. [PMID: 29191479] [DOI: 10.1016/j.neuroimage.2017.11.059]
Abstract
Although considerable research has been published on pure tone processing, its spatiotemporal pattern is not well understood. Specifically, the link between neural activity in the auditory pathway measured by functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) markers of pure tone processing in the P1, N1, P2, and N4 components is not well established. In this study, we used single-trial EEG-fMRI as a multi-modal fusion approach to integrate concurrently acquired EEG and fMRI data, in order to understand the spatial and temporal aspects of the pure tone processing pathway. Data were recorded from 33 subjects who were presented with stochastically alternating pure tone sequences with two different frequencies: 200 and 6400 Hz. A brain network correlated with trial-to-trial variability of the task-discriminating EEG amplitude was identified. We found that the neural responses underlying pure tone perception are distributed spatially along the auditory pathway and divided temporally into three stages: (1) the early stage (P1), wherein activation occurs in the midbrain, which constitutes a part of the low-level auditory pathway; (2) the middle stage (N1, P2), wherein correlates were found in areas associated with the posterodorsal auditory pathway, including the primary auditory cortex and the motor cortex; (3) the late stage (N4), wherein correlation was found in the motor cortex. This indicates that trial-by-trial variation in neural activity in the P1, N1, P2, and N4 components reflects the sequential engagement of low- and high-level parts of the auditory pathway for pure tone processing. Our results demonstrate that during simple pure tone listening tasks, regions associated with the auditory pathway transiently correlate with trial-to-trial variability of the EEG amplitude, and they do so on a millisecond timescale with a distinct temporal ordering.
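A rough sketch of the EEG-informed fMRI idea underlying such fusion analyses, under stated assumptions (a simplified double-gamma HRF and synthetic data): single-trial EEG amplitudes serve as a demeaned parametric modulator, are convolved with the HRF, and the resulting regressor is correlated with each voxel's time course. This is a generic illustration, not the authors' exact fusion method.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr, duration=32.0):
    """Rough double-gamma HRF sampled at the fMRI TR (simplified, assumed shape)."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

tr = 2.0
n_vols, n_trials = 300, 60
rng = np.random.default_rng(7)

onsets = np.sort(rng.choice(np.arange(10, n_vols - 20), n_trials, replace=False))
eeg_amp = rng.standard_normal(n_trials)        # single-trial task-discriminating amplitude

# Stick function at trial onsets, modulated by the demeaned EEG amplitude, then HRF-convolved
sticks = np.zeros(n_vols)
sticks[onsets] = eeg_amp - eeg_amp.mean()
regressor = np.convolve(sticks, canonical_hrf(tr))[:n_vols]

# Correlate the EEG-informed regressor with each voxel's time course
voxels = rng.standard_normal((n_vols, 500))
r = (voxels - voxels.mean(0)).T @ (regressor - regressor.mean())
r /= (voxels.std(0) * regressor.std() * n_vols)
print("max |r| across voxels:", np.abs(r).max())
```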
Affiliation(s)
- Qiang Li
- College of Electronic and Information Engineering, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China
- Guangyuan Liu
- College of Electronic and Information Engineering, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China.
- Dongtao Wei
- Department of Psychology, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China
- Jing Guo
- College of Electronic and Information Engineering, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China
- Guangjie Yuan
- College of Electronic and Information Engineering, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China
- Shifu Wu
- College of Electronic and Information Engineering, Southwest University, No. 2, TianSheng Street, Beibei, Chongqing 400715, China
70
De Angelis V, De Martino F, Moerel M, Santoro R, Hausfeld L, Formisano E. Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds. Neuroimage 2017; 180:291-300. [PMID: 29146377] [DOI: 10.1016/j.neuroimage.2017.11.020]
Abstract
Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in human auditory cortex using model-based encoding and decoding analyses of high field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure respectively), which we estimated with a computational algorithm of pitch extraction (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses comparatively better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height/salience alone. Second, we assessed with model-based decoding that multi-voxel response patterns of the identified regions are more informative of perceived pitch than the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch produces a small but significant improvement to the decoding of complex sounds from fMRI response patterns. In sum, this work extends model-based fMRI encoding and decoding methods - previously employed to examine the representation and processing of acoustic sound features in the human auditory system - to the representation and processing of a relevant perceptual attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicated that the pitch of complex real life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those indicated in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of height and the salience of the pitch percept.
Affiliation(s)
- Vittoria De Angelis
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands; Center for Magnetic Resonance Research, University of Minnesota Medical School, 2021 Sixth Street SE, Minneapolis, MN 55455, United States
- Michelle Moerel
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Roberta Santoro
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Lars Hausfeld
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht University, Maastricht, The Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, The Netherlands.
71
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238] [PMCID: PMC5729191] [DOI: 10.1523/jneurosci.1436-17.2017]
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
72
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017; 355:81-96. [DOI: 10.1016/j.heares.2017.09.012]
73
Griskova-Bulanova I, Dapsys K, Melynyte S, Voicikas A, Maciulis V, Andruskevicius S, Korostenskaja M. 40Hz auditory steady-state response in schizophrenia: Sensitivity to stimulation type (clicks versus flutter amplitude-modulated tones). Neurosci Lett 2017; 662:152-157. [PMID: 29051085] [DOI: 10.1016/j.neulet.2017.10.025]
Abstract
Auditory steady-state response (ASSR) at 40Hz has been proposed as a potential biomarker for schizophrenia. The ASSR studies in patients have used click stimulation or amplitude-modulated tones. However, the sensitivity of 40Hz ASSRs to different stimulation types in the same group of patients has not been previously evaluated. Two stimulation types for ASSRs were tested in this study: (1) 40Hz clicks and (2) flutter-amplitude modulated tones. The mean phase-locking index, evoked amplitude and event-related spectral perturbation values were compared between schizophrenia patients (n=26) and healthy controls (n=20). Both stimulation types resulted in the observation of impaired phase-locking and power measures of late (200-500ms) 40Hz ASSR in patients compared to healthy controls. The early-latency (0-100ms) 40Hz ASSR part was diminished in the schizophrenia group in response to clicks only. The late-latency 40Hz ASSR parameters obtained through different stimulation types correlated in healthy subjects but not in patients. We conclude that flutter amplitude-modulated tone stimulation, due to its potential to reveal late-latency entrainment deficits, is suitable for use in clinical populations. Careful consideration of experimental stimulation settings can contribute to the interpretation of ASSR deficits and utilization as a potential biomarker.
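A compact sketch of the phase-locking index and evoked-amplitude measures named above, computed from a single-frequency Fourier coefficient per trial in the 200-500 ms late-latency window; the sampling rate and the data are assumptions.

```python
import numpy as np

fs = 1000.0                                   # EEG sampling rate (Hz), assumed
f0 = 40.0                                     # ASSR frequency (Hz)

def fourier_coeff(trials, fs, freq, t_start, t_end):
    """Complex single-frequency Fourier coefficient per trial within a time window."""
    i0, i1 = int(t_start * fs), int(t_end * fs)
    seg = trials[:, i0:i1]
    t = np.arange(seg.shape[1]) / fs
    return seg @ np.exp(-2j * np.pi * freq * t) / seg.shape[1]

def assr_measures(trials, fs, freq, window=(0.2, 0.5)):
    c = fourier_coeff(trials, fs, freq, *window)
    pli = np.abs(np.mean(c / np.abs(c)))      # phase-locking index (inter-trial coherence)
    evoked_amp = np.abs(np.mean(c))           # amplitude of the phase-locked (evoked) part
    return pli, evoked_amp

# Toy data: 100 trials of 700 ms with a weak phase-locked 40 Hz component plus noise
rng = np.random.default_rng(8)
t = np.arange(0, 0.7, 1 / fs)
trials = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal((100, t.size))
print(assr_measures(trials, fs, f0))
```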
Collapse
Affiliation(s)
| | - Kastytis Dapsys
- Department of Electrophysiological Treatment and Investigation Methods, Vilnius Republican Psychiatric Hospital, Vilnius, Lithuania
| | - Sigita Melynyte
- Institute of Biosciences, Vilnius University, Vilnius, Lithuania
| | | | - Valentinas Maciulis
- Department of Electrophysiological Treatment and Investigation Methods, Vilnius Republican Psychiatric Hospital, Vilnius, Lithuania
| | - Sergejus Andruskevicius
- Department of Electrophysiological Treatment and Investigation Methods, Vilnius Republican Psychiatric Hospital, Vilnius, Lithuania
| | - Milena Korostenskaja
- Milena's Functional Brain Mapping and Brain Computer Interface Lab, Florida Hospital for Children, Orlando, FL, USA; MEG Lab, Florida Hospital for Children, Orlando, FL, USA; Department of Psychology, College of Arts and Sciences, University of North Florida, Jacksonville, FL, USA
| |
Collapse
|
74
|
Wingfield C, Su L, Liu X, Zhang C, Woodland P, Thwaites A, Fonteneau E, Marslen-Wilson WD. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem. PLoS Comput Biol 2017; 13:e1005617. [PMID: 28945744 PMCID: PMC5612454 DOI: 10.1371/journal.pcbi.1005617] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2016] [Accepted: 06/12/2017] [Indexed: 01/06/2023] Open
Abstract
There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental 'machine states', generated as the ASR analysis progresses over time, to the incremental 'brain states', measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain.
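The direct comparison of dynamic human and machine internal states described above is, in essence, a representational similarity analysis: a dissimilarity structure computed over the ASR-derived representations is correlated with a dissimilarity structure computed over the EMEG response patterns. The sketch below conveys the general idea only; it is not the authors' method, and the condition counts, feature dimensions, and random data are stand-in assumptions.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        # Condensed representational dissimilarity matrix (correlation distance);
        # patterns: (n_conditions, n_features)
        return pdist(patterns, metric="correlation")

    # Hypothetical data: 50 speech segments represented by machine and brain states
    rng = np.random.default_rng(2)
    machine_states = rng.standard_normal((50, 40))     # e.g. ASR phonetic-model features
    brain_states = (machine_states @ rng.standard_normal((40, 120))
                    + rng.standard_normal((50, 120)))  # e.g. EMEG source patterns

    rho, p = spearmanr(rdm(machine_states), rdm(brain_states))
    print(rho, p)   # second-order similarity between machine and brain geometries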
Collapse
Affiliation(s)
- Cai Wingfield
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Department of Psychology, University of Lancaster, Lancaster, United Kingdom
- * E-mail: (CW); (LS)
| | - Li Su
- China–UK Centre for Cognition and Ageing Research, Faculty of Psychology, Southwest University, Chongqing, China
- Department of Psychiatry, University of Cambridge, Cambridge, United Kingdom
- * E-mail: (CW); (LS)
| | - Xunying Liu
- Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Chao Zhang
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Phil Woodland
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Andrew Thwaites
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| | - Elisabeth Fonteneau
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| | - William D. Marslen-Wilson
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
| |
Collapse
|
75
|
Noh TS, Rah YC, Kyong JS, Kim JS, Park MK, Lee JH, Oh SH, Chung CK, Suh MW. Comparison of treatment outcomes between 10-20 EEG electrode location system-guided and neuronavigation-guided repetitive transcranial magnetic stimulation in chronic tinnitus patients and target localization in the Asian brain. Acta Otolaryngol 2017; 137:945-951. [PMID: 28471721 DOI: 10.1080/00016489.2017.1316870] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVE Repetitive transcranial magnetic stimulation (rTMS) is a non-invasive method that applies brief magnetic pulses to the cortex and is regarded as a possible therapeutic method for tinnitus control. However, it remains unclear whether the rTMS treatment effect is the same in tinnitus patients receiving 10-20 EEG-based target localization as in those receiving imaging-based neuronavigation target localization. METHODS We compared the treatment outcome of 10-20 EEG-guided rTMS (Group 1) with that of neuronavigation-guided rTMS (Group 2). Using each subject's MRI data and a neuronavigation system, the coordinates of the auditory cortex (AC) relative to the 10-20 EEG system were identified in Asians and compared with those of Caucasians. RESULTS There was significant improvement in the Tinnitus Handicap Inventory (THI) and visual analog scale (VAS) scores in both Group 1 and Group 2; however, there was no significant difference between the two groups. The location of the AC in Asians was significantly different from that in Caucasians. CONCLUSION The 10-20 EEG coordinates of the AC in Asians were significantly different from those in Caucasians. To accurately target the AC in Asians, it is recommended that the rTMS coil be positioned 1.8 cm superior to T3 and 0.6 cm posterior to the T3-Cz line. However, because the spatial resolution of TMS is rather low, this difference probably was not reflected in the treatment outcome.
Collapse
Affiliation(s)
- Tae-Soo Noh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Yoon-Chan Rah
- Department of Otorhinolaryngology, Korea University Ansan Hospital, Ansan, Republic of Korea
| | - Jeong Sug Kyong
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - June Sic Kim
- Medical Research Center, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Republic of Korea
| | - Moo Kyun Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Jun Ho Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Seung Ha Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Chun Kee Chung
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Republic of Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| | - Myung-Whan Suh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
| |
Collapse
|
76
|
San Juan J, Hu XS, Issa M, Bisconti S, Kovelman I, Kileny P, Basura G. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS). PLoS One 2017; 12:e0179150. [PMID: 28604786 PMCID: PMC5467838 DOI: 10.1371/journal.pone.0179150] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2016] [Accepted: 05/24/2017] [Indexed: 12/22/2022] Open
Abstract
Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have, to date, not been well translated to the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting-state functional connectivity between human auditory and non-auditory brain regions in normal-hearing participants with bilateral subjective tinnitus and in controls, before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus, and that altered resting-state functional connectivity measured by fNIRS technology may contribute to conscious phantom sound perception and potentially serve as an objective measure of central neural pathology.
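Functional connectivity between all channel pairs, as measured above, typically reduces to a channel-by-channel correlation matrix of the oxy-hemoglobin time courses within the baseline window, computed separately before and after stimulation. The generic sketch below is not the authors' processing pipeline; the channel count, sampling rate, and simulated signals are assumptions.

    import numpy as np

    def channel_pair_connectivity(hbo):
        # Pairwise Pearson correlation between channels; hbo is (n_channels, n_samples)
        return np.corrcoef(hbo)

    # Hypothetical 16-channel HbO recordings of 60-s silent baselines sampled at 10 Hz
    rng = np.random.default_rng(3)
    hbo_pre = rng.standard_normal((16, 600))
    hbo_post = rng.standard_normal((16, 600))
    fc_change = channel_pair_connectivity(hbo_post) - channel_pair_connectivity(hbo_pre)
    print(fc_change.shape)   # (16, 16): connectivity change for every channel pair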
Collapse
Affiliation(s)
- Juan San Juan
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Inst., The University of Michigan, 1100 W Medical Center Drive, Ann Arbor, MI, United States of America
| | - Xiao-Su Hu
- Center for Human Growth and Development, The University of Michigan, Ann Arbor, MI, United States of America
| | - Mohamad Issa
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Inst., The University of Michigan, 1100 W Medical Center Drive, Ann Arbor, MI, United States of America
| | - Silvia Bisconti
- Center for Human Growth and Development, The University of Michigan, Ann Arbor, MI, United States of America
| | - Ioulia Kovelman
- Center for Human Growth and Development, The University of Michigan, Ann Arbor, MI, United States of America
| | - Paul Kileny
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Inst., The University of Michigan, 1100 W Medical Center Drive, Ann Arbor, MI, United States of America
- Center for Human Growth and Development, The University of Michigan, Ann Arbor, MI, United States of America
| | - Gregory Basura
- Department of Otolaryngology/Head and Neck Surgery, Kresge Hearing Research Inst., The University of Michigan, 1100 W Medical Center Drive, Ann Arbor, MI, United States of America
- Center for Human Growth and Development, The University of Michigan, Ann Arbor, MI, United States of America
- * E-mail:
| |
Collapse
|
77
|
Nourski KV, Banks MI, Steinschneider M, Rhone AE, Kawasaki H, Mueller RN, Todd MM, Howard MA. Electrocorticographic delineation of human auditory cortical fields based on effects of propofol anesthesia. Neuroimage 2017; 152:78-93. [PMID: 28254512 PMCID: PMC5432407 DOI: 10.1016/j.neuroimage.2017.02.061] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Revised: 02/13/2017] [Accepted: 02/21/2017] [Indexed: 12/20/2022] Open
Abstract
The functional organization of human auditory cortex remains incompletely characterized. While the posteromedial two thirds of Heschl's gyrus (HG) is generally considered to be part of core auditory cortex, additional subdivisions of HG remain speculative. To further delineate the hierarchical organization of human auditory cortex, we investigated regional heterogeneity in the modulation of auditory cortical responses under varying depths of anesthesia induced by propofol. Non-invasive studies have shown that propofol differentially affects auditory cortical activity, with a greater impact on non-core areas. Subjects were neurosurgical patients undergoing removal of intracranial electrodes placed to identify epileptic foci. Stimuli were 50Hz click trains, presented continuously during an awake baseline period, and subsequently, while propofol infusion was incrementally titrated to induce general anesthesia. Electrocorticographic recordings were made with depth electrodes implanted in HG and subdural grid electrodes implanted over superior temporal gyrus (STG). Depth of anesthesia was monitored using spectral entropy. Averaged evoked potentials (AEPs), frequency-following responses (FFRs) and high gamma (70-150Hz) event-related band power were used to characterize auditory cortical activity. Based on the changes in AEPs and FFRs during the induction of anesthesia, posteromedial HG could be divided into two subdivisions. In the most posteromedial aspect of the gyrus, the earliest AEP deflections were preserved and FFRs increased during induction. In contrast, the remainder of the posteromedial HG exhibited attenuation of both the AEP and the FFR. The anterolateral HG exhibited weaker activation characterized by broad, low-voltage AEPs and the absence of FFRs. Lateral STG exhibited limited activation by click trains, and FFRs there diminished during induction. Sustained high gamma activity was attenuated in the most posteromedial portion of HG, and was absent in all other regions. These differential patterns of auditory cortical activity during the induction of anesthesia may serve as useful physiological markers for field delineation. In this study, the posteromedial HG could be parcellated into at least two subdivisions. Preservation of the earliest AEP deflections and FFRs in the posteromedial HG likely reflects the persistence of feedforward synaptic activity generated by inputs from subcortical auditory pathways, including the medial geniculate nucleus.
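High gamma (70-150Hz) event-related band power of the kind used above is often obtained by band-pass filtering each channel, squaring the Hilbert envelope, and expressing the result relative to a pre-stimulus baseline. The sketch below shows that generic approach rather than the authors' analysis code; the sampling rate, filter order, baseline window, and synthetic signal are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def high_gamma_power(ecog, fs, band=(70.0, 150.0)):
        # Instantaneous high-gamma power envelope of a single ECoG channel
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        return np.abs(hilbert(filtfilt(b, a, ecog))) ** 2

    # Hypothetical single-channel trial: 2 s at 1000 Hz, gamma-band burst after 1 s
    fs = 1000
    t = np.arange(2 * fs) / fs
    rng = np.random.default_rng(4)
    ecog = rng.standard_normal(t.size)
    ecog[fs:] += 0.5 * np.sin(2 * np.pi * 110 * t[fs:])
    power = high_gamma_power(ecog, fs)
    print(10 * np.log10(power[fs:].mean() / power[:fs].mean()))   # dB vs. baseline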
Collapse
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA.
| | - Matthew I Banks
- Department of Anesthesiology, University of Wisconsin - Madison, Madison, WI, USA
| | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Rashmi N Mueller
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA
| | - Michael M Todd
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA; Department of Anesthesiology, University of Minnesota, Minneapolis, MN, USA
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| |
Collapse
|
78
|
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. [PMID: 28472038 PMCID: PMC5417421 DOI: 10.1371/journal.pbio.2001379] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 04/07/2017] [Indexed: 12/25/2022] Open
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
Collapse
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- * E-mail: (CP); (TDG)
| | - Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
- * E-mail: (CP); (TDG)
| |
Collapse
|
79
|
Moerel M, De Martino F, Kemper VG, Schmitter S, Vu AT, Uğurbil K, Formisano E, Yacoub E. Sensitivity and specificity considerations for fMRI encoding, decoding, and mapping of auditory cortex at ultra-high field. Neuroimage 2017; 164:18-31. [PMID: 28373123 DOI: 10.1016/j.neuroimage.2017.03.063] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2016] [Revised: 12/18/2016] [Accepted: 03/29/2017] [Indexed: 01/05/2023] Open
Abstract
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T2* weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T2 weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T2* weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T2* weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2 weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked.
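Encoding-model prediction accuracy of the sort compared above is usually estimated by fitting a regularized linear mapping from stimulus features to voxel responses on training data and correlating predicted with measured responses on held-out data. The ridge-regression sketch below is generic and does not reproduce the specific encoding models or acquisitions tested in the study; the feature set, fold structure, regularization strength, and simulated data are assumptions.

    import numpy as np

    def encoding_prediction_accuracy(features, voxels, n_folds=5, alpha=1.0):
        # features: (n_stimuli, n_features); voxels: (n_stimuli, n_voxels)
        # Returns the per-voxel correlation between predicted and measured responses.
        n = features.shape[0]
        preds = np.zeros_like(voxels)
        for test in np.array_split(np.arange(n), n_folds):
            train = np.setdiff1d(np.arange(n), test)
            X, Y = features[train], voxels[train]
            w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
            preds[test] = features[test] @ w
        pz = (preds - preds.mean(0)) / preds.std(0)
        vz = (voxels - voxels.mean(0)) / voxels.std(0)
        return (pz * vz).mean(0)

    # Hypothetical example: 100 sounds, 20 model features, 500 voxels
    rng = np.random.default_rng(5)
    X = rng.standard_normal((100, 20))
    Y = X @ rng.standard_normal((20, 500)) + rng.standard_normal((100, 500))
    print(encoding_prediction_accuracy(X, Y).mean())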
Collapse
Affiliation(s)
- Michelle Moerel
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Maastricht Centre for Systems Biology, Maastricht University, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands.
| | - Federico De Martino
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands.
| | - Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands.
| | - Sebastian Schmitter
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Department of Biomedical Magnetic Resonance, Physikalisch-Technische Bundesanstalt, Berlin, Germany.
| | - An T Vu
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Advanced MRI Technologies, Sebastopol, CA, USA.
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
| | - Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, The Netherlands.
| | - Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
| |
Collapse
|
80
|
High-Resolution fMRI of Auditory Cortical Map Changes in Unilateral Hearing Loss and Tinnitus. Brain Topogr 2017; 30:685-697. [DOI: 10.1007/s10548-017-0547-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Accepted: 01/18/2017] [Indexed: 12/19/2022]
|
81
|
Tonotopic representation of loudness in the human cortex. Hear Res 2016; 344:244-254. [PMID: 27915027 PMCID: PMC5256480 DOI: 10.1016/j.heares.2016.11.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/28/2016] [Revised: 11/24/2016] [Accepted: 11/29/2016] [Indexed: 12/25/2022]
Abstract
A prominent feature of the auditory system is that neurons show tuning to audio frequency; each neuron has a characteristic frequency (CF) to which it is most sensitive. Furthermore, there is an orderly mapping of CF to position, which is called tonotopic organization and which is observed at many levels of the auditory system. In a previous study (Thwaites et al., 2016) we examined cortical entrainment to two auditory transforms predicted by a model of loudness, instantaneous loudness and short-term loudness, using speech as the input signal. The model is based on the assumption that neural activity is combined across CFs (i.e. across frequency channels) before the transform to short-term loudness. However, it is also possible that short-term loudness is determined on a channel-specific basis. Here we tested these possibilities by assessing neural entrainment to the overall and channel-specific instantaneous loudness and to the overall and channel-specific short-term loudness. The results showed entrainment to channel-specific instantaneous loudness at latencies of 45 and 100 ms (bilaterally, in and around Heschl's gyrus), entrainment to overall instantaneous loudness at 165 ms in the dorso-lateral sulcus (DLS), and entrainment to overall short-term loudness primarily at 275 ms, bilaterally in DLS and the superior temporal sulcus. There was only weak evidence for entrainment to channel-specific short-term loudness.
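Entrainment latencies such as the 45, 100, 165, and 275 ms values reported above can be thought of as the lag at which the neural signal best tracks the corresponding loudness transform. The sketch below estimates such a lag from the peak of a lagged correlation; it is a simplified stand-in for the study's actual entrainment analysis, and the sampling rate, maximum lag, and simulated signals are assumptions.

    import numpy as np

    def entrainment_latency(feature, neural, fs, max_lag_s=0.4):
        # Lag (ms) at which the neural signal best tracks the stimulus feature
        lags = np.arange(int(max_lag_s * fs) + 1)
        s = (feature - feature.mean()) / feature.std()
        n = (neural - neural.mean()) / neural.std()
        xcorr = [np.mean(s[: len(s) - lag] * n[lag:]) for lag in lags]
        return 1000 * lags[int(np.argmax(xcorr))] / fs

    # Hypothetical example: neural signal lags the loudness trace by about 165 ms
    fs = 250
    rng = np.random.default_rng(8)
    loudness = rng.standard_normal(10 * fs)
    lag = int(0.165 * fs)
    neural = (np.concatenate([rng.standard_normal(lag), loudness[:-lag]])
              + 0.5 * rng.standard_normal(10 * fs))
    print(entrainment_latency(loudness, neural, fs))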
Collapse
|
82
|
Guinchard AC, Ghazaleh N, Saenz M, Fornari E, Prior J, Maeder P, Adib S, Maire R. Study of tonotopic brain changes with functional MRI and FDG-PET in a patient with unilateral objective cochlear tinnitus. Hear Res 2016; 341:232-239. [DOI: 10.1016/j.heares.2016.09.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Revised: 05/11/2016] [Accepted: 09/07/2016] [Indexed: 01/30/2023]
|
83
|
Gardumi A, Ivanov D, Havlicek M, Formisano E, Uludağ K. Tonotopic maps in human auditory cortex using arterial spin labeling. Hum Brain Mapp 2016; 38:1140-1154. [PMID: 27790786 PMCID: PMC5324648 DOI: 10.1002/hbm.23444] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Revised: 09/27/2016] [Accepted: 10/11/2016] [Indexed: 11/08/2022] Open
Abstract
A tonotopic organization of the human auditory cortex (AC) has been reliably found by neuroimaging studies. However, a full characterization and parcellation of the AC is still lacking. In this study, we employed pseudo-continuous arterial spin labeling (pCASL) to map tonotopy and voice-selective regions using, for the first time, cerebral blood flow (CBF). We demonstrated the feasibility of CBF-based tonotopy and found good agreement with BOLD signal-based tonotopy, despite the lower contrast-to-noise ratio of CBF. Quantitative perfusion mapping of baseline CBF showed a region of high perfusion centered on Heschl's gyrus and corresponding to the main high-low-high frequency gradients, co-located with the presumed primary auditory core and suggesting baseline CBF as a novel marker for AC parcellation. Furthermore, susceptibility-weighted imaging was employed to investigate the tissue specificity of CBF and the BOLD signal and the possible venous bias of BOLD-based tonotopy. For voxels active only in BOLD, we found a higher percentage of vein contamination than for voxels active only in CBF. Taken together, we demonstrated that both baseline and stimulus-induced CBF provide an alternative fMRI approach to the standard BOLD signal for studying auditory processing and delineating the functional organization of the auditory cortex.
Collapse
Affiliation(s)
- Anna Gardumi
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Dimo Ivanov
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Martin Havlicek
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Kâmil Uludağ
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
84
|
Wallace MN, Cronin MJ, Bowtell RW, Scott IS, Palmer AR, Gowland PA. Histological Basis of Laminar MRI Patterns in High Resolution Images of Fixed Human Auditory Cortex. Front Neurosci 2016; 10:455. [PMID: 27774049 PMCID: PMC5054214 DOI: 10.3389/fnins.2016.00455] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2016] [Accepted: 09/21/2016] [Indexed: 12/26/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) studies of the auditory region of the temporal lobe would benefit from the availability of image contrast that allowed direct identification of the primary auditory cortex, as this region cannot be accurately located using gyral landmarks alone. Previous work has suggested that the primary area can be identified in magnetic resonance (MR) images because of its relatively high myelin content. However, MR images are also affected by the iron content of the tissue and in this study we sought to confirm that different MR image contrasts did correlate with the myelin content in the gray matter and were not primarily affected by iron content as is the case in the primary visual and somatosensory areas. By imaging blocks of fixed post-mortem cortex in a 7 T scanner and then sectioning them for histological staining we sought to assess the relative contribution of myelin and iron to the gray matter contrast in the auditory region. Evaluating the image contrast in T2*-weighted images and quantitative R2* maps showed a reasonably high correlation between the myelin density of the gray matter and the intensity of the MR images. The correlation with T1-weighted phase sensitive inversion recovery (PSIR) images was better than with the previous two image types, and there were clearly differentiated borders between adjacent cortical areas in these images. A significant amount of iron was present in the auditory region, but did not seem to contribute to the laminar pattern of the cortical gray matter in MR images. Similar levels of iron were present in the gray and white matter and although iron was present in fibers within the gray matter, these fibers were fairly uniformly distributed across the cortex. Thus, we conclude that T1- and T2*-weighted imaging sequences do demonstrate the relatively high myelin levels that are characteristic of the deep layers in primary auditory cortex and allow it and some of the surrounding areas to be reliably distinguished.
Collapse
Affiliation(s)
- Mark N Wallace
- Medical Research Council Institute of Hearing Research, University of Nottingham, Nottingham, UK
| | - Matthew J Cronin
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
| | - Richard W Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
| | - Ian S Scott
- Neuropathology Laboratory, Nottingham University Hospitals NHS Trust, Queen's Medical Centre, Nottingham, UK
| | - Alan R Palmer
- Medical Research Council Institute of Hearing Research, University of Nottingham, Nottingham, UK
| | - Penny A Gowland
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK
| |
Collapse
|
85
|
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. [PMID: 27713712 PMCID: PMC5031792 DOI: 10.3389/fpsyg.2016.01413] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 09/05/2016] [Indexed: 01/07/2023] Open
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units computed to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons that are involved in the N1m act to construct a sensory memory of the stimulus due to spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory fields maps in animals and humans suggested two levels of sound coding, a tonotopy dimension for spectral properties and a tonochrony dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may give important information to assess whether the speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or whether they are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. Actually, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not help to disentangle the issue. The nature of these limitations is discussed. Moreover, neurophysiological studies on animals and neuroimaging studies on humans have been taken into consideration. We compare also the N1m findings with the investigation of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that N1 seems more sensitive to capture lateralization and hierarchical processes than N1m, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in the light of the neural oscillations framework and we propose some concerns that should be addressed by future investigations if we want to closely line up language research with issues at the core of the functional brain mechanisms.
Collapse
Affiliation(s)
- Anna Dora Manca
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
| | - Mirko Grimaldi
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
| |
Collapse
|
86
|
Lee GW, Zambetta F, Li X, Paolini AG. Utilising reinforcement learning to develop strategies for driving auditory neural implants. J Neural Eng 2016; 13:046027. [PMID: 27432803 DOI: 10.1088/1741-2560/13/4/046027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator, we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. APPROACH By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals, we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and recording of neural responses in the IC provide a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. MAIN RESULTS Reinforcement learning, utilising a modified n-armed bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Effective replication using neural stimulation is learned in less than 20 min of continuous testing. SIGNIFICANCE These results show the utility of reinforcement learning in the field of neural stimulation. They can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
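The modified n-armed bandit mentioned above belongs to the simplest family of reinforcement-learning algorithms: an agent repeatedly chooses among a fixed set of actions (here, candidate CN stimulation patterns) and updates a running estimate of each action's reward (here, how well the evoked IC response matches the acoustic target). The epsilon-greedy sketch below illustrates the basic mechanics only; it is not the authors' algorithm or simulator, and the arm count, reward function, and noise level are invented for illustration.

    import numpy as np

    def epsilon_greedy_bandit(reward_fn, n_arms, n_trials=1000, epsilon=0.1, seed=0):
        # reward_fn(arm) returns a scalar reward for choosing stimulation pattern `arm`
        rng = np.random.default_rng(seed)
        estimates = np.zeros(n_arms)   # running mean reward per arm
        counts = np.zeros(n_arms)
        for _ in range(n_trials):
            if rng.random() < epsilon:
                arm = int(rng.integers(n_arms))     # explore
            else:
                arm = int(np.argmax(estimates))     # exploit
            r = reward_fn(arm)
            counts[arm] += 1
            estimates[arm] += (r - estimates[arm]) / counts[arm]
        return int(np.argmax(estimates)), estimates

    # Hypothetical simulator: arm 7 best mimics the target neural response
    rng = np.random.default_rng(42)
    quality = rng.random(16)
    quality[7] = 1.5
    best, _ = epsilon_greedy_bandit(lambda a: quality[a] + rng.normal(0, 0.2), 16)
    print(best)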
Collapse
Affiliation(s)
- Geoffrey W Lee
- School of Computer Science and Information Technology, RMIT University, Melbourne 3000, Australia
| | | | | | | |
Collapse
|
87
|
Striem-Amit E, Almeida J, Belledonne M, Chen Q, Fang Y, Han Z, Caramazza A, Bi Y. Topographical functional connectivity patterns exist in the congenitally, prelingually deaf. Sci Rep 2016; 6:29375. [PMID: 27427158 PMCID: PMC4947901 DOI: 10.1038/srep29375] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Accepted: 06/10/2016] [Indexed: 12/26/2022] Open
Abstract
Congenital deafness causes large changes in auditory cortex structure and function, such that without early-childhood cochlear implantation, profoundly deaf children do not develop intact, high-level auditory functions. But how is auditory cortex organization affected by congenital, prelingual, and long-standing deafness? Does the large-scale topographical organization of the auditory cortex develop in people deaf from birth? And is it retained despite cross-modal plasticity? Using fMRI, we identified topographic, tonotopy-based functional connectivity (FC) structure in humans in the core auditory cortex, in its extending tonotopic gradients in the belt, and even beyond that. These regions show similar FC structure in the congenitally deaf throughout the auditory cortex, including in the language areas. The topographic FC pattern can be identified reliably in the vast majority of the deaf, at the single-subject level, despite the absence of hearing-aid use and poor oral language skills. These findings suggest that large-scale tonotopic-based FC does not require sensory experience to develop, and is retained despite life-long auditory deprivation and cross-modal plasticity. Furthermore, as the topographic FC is retained to varying degrees among the deaf subjects, it may serve to predict the potential for auditory rehabilitation using cochlear implants in individual subjects.
Collapse
Affiliation(s)
- Ella Striem-Amit
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
| | - Jorge Almeida
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3001-802, Portugal; Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra 3001-802, Portugal
| | - Mario Belledonne
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA
| | - Quanjing Chen
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Yuxing Fang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Zaizhu Han
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
| | - Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| |
Collapse
|
88
|
Abstract
Functional and anatomical studies have clearly demonstrated that auditory cortex is populated by multiple subfields. However, functional characterization of those fields has been largely the domain of animal electrophysiology, limiting the extent to which human and animal research can inform each other. In this study, we used high-resolution functional magnetic resonance imaging to characterize human auditory cortical subfields using a variety of low-level acoustic features in the spectral and temporal domains. Specifically, we show that topographic gradients of frequency preference, or tonotopy, extend along two axes in human auditory cortex, thus reconciling historical accounts of a tonotopic axis oriented medial to lateral along Heschl's gyrus and more recent findings emphasizing tonotopic organization along the anterior-posterior axis. Contradictory findings regarding topographic organization according to temporal modulation rate in acoustic stimuli, or "periodotopy," are also addressed. Although isolated subregions show a preference for high rates of amplitude-modulated white noise (AMWN) in our data, large-scale "periodotopic" organization was not found. Organization by AM rate was correlated with dominant pitch percepts in AMWN in many regions. In short, our data expose early auditory cortex chiefly as a frequency analyzer, and spectral frequency, as imposed by the sensory receptor surface in the cochlea, seems to be the dominant feature governing large-scale topographic organization across human auditory cortex. SIGNIFICANCE STATEMENT In this study, we examine the nature of topographic organization in human auditory cortex with fMRI. Topographic organization by spectral frequency (tonotopy) extended in two directions: medial to lateral, consistent with early neuroimaging studies, and anterior to posterior, consistent with more recent reports. Large-scale organization by rates of temporal modulation (periodotopy) was correlated with confounding spectral content of amplitude-modulated white-noise stimuli. Together, our results suggest that the organization of human auditory cortex is driven primarily by its response to spectral acoustic features, and large-scale periodotopy spanning across multiple regions is not supported. This fundamental information regarding the functional organization of early auditory cortex will inform our growing understanding of speech perception and the processing of other complex sounds.
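Best-frequency (tonotopic) maps like those discussed above are, in their simplest form, obtained by assigning to each voxel the stimulus frequency that evokes its largest response. The sketch below shows this basic winner-take-all assignment; published analyses typically involve response-profile fitting and surface-based smoothing, so this is only an illustrative reduction with assumed frequency bands and random responses.

    import numpy as np

    def best_frequency_map(responses, frequencies):
        # responses: (n_frequencies, n_voxels); returns the preferred frequency per voxel
        return np.asarray(frequencies)[np.argmax(responses, axis=0)]

    # Hypothetical example: responses to 8 tone bands in 1000 auditory-cortex voxels
    rng = np.random.default_rng(6)
    freqs = [200, 400, 800, 1600, 3200, 6400, 8000, 12800]
    resp = rng.random((8, 1000))
    print(best_frequency_map(resp, freqs)[:10])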
Collapse
|
89
|
Frühholz S, van der Zwaag W, Saenz M, Belin P, Schobert AK, Vuilleumier P, Grandjean D. Neural decoding of discriminative auditory object features depends on their socio-affective valence. Soc Cogn Affect Neurosci 2016; 11:1638-49. [PMID: 27217117 DOI: 10.1093/scan/nsw066] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 05/11/2016] [Indexed: 11/12/2022] Open
Abstract
Human voices consist of specific patterns of acoustic features that are considerably enhanced during affective vocalizations. These acoustic features are presumably used by listeners to accurately discriminate between acoustically or emotionally similar vocalizations. Here we used high-field 7T functional magnetic resonance imaging in human listeners together with a so-called experimental 'feature elimination approach' to investigate neural decoding of three important voice features of two affective valence categories (i.e. aggressive and joyful vocalizations). We found a valence-dependent sensitivity to vocal pitch (f0) dynamics and to spectral high-frequency cues already at the level of the auditory thalamus. Furthermore, pitch dynamics and harmonics-to-noise ratio (HNR) showed overlapping, but again valence-dependent sensitivity in tonotopic cortical fields during the neural decoding of aggressive and joyful vocalizations, respectively. For joyful vocalizations we also revealed sensitivity in the inferior frontal cortex (IFC) to the HNR and pitch dynamics. The data thus indicate that several auditory regions were sensitive to multiple, rather than single, discriminative voice features. Furthermore, some regions partly showed a valence-dependent hypersensitivity to certain features, such as pitch dynamic sensitivity in core auditory regions and in the IFC for aggressive vocalizations, and sensitivity to high-frequency cues in auditory belt and parabelt regions for joyful vocalizations.
Collapse
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
| | - Wietske van der Zwaag
- Center for Biomedical Imaging, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| | - Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Department of Clinical Neurosciences, CHUV, 1011 Lausanne, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| | - Pascal Belin
- Department of Psychology, University of Glasgow, Glasgow G12 8QQ, UK
| | - Anne-Kathrin Schobert
- Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Patrik Vuilleumier
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Laboratory for Neurology and Imaging of Cognition, Department of Neurology and Department of Neuroscience, Medical School, University of Geneva, 1211 Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland; Neuroscience of Emotion and Affective Dynamics Laboratory, Department of Psychology, University of Geneva, Geneva 1205, Switzerland
| |
Collapse
|
90
|
Abstract
One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.
Collapse
Affiliation(s)
- Alyssa A Brewer
- Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
| | - Brian Barton
- Department of Cognitive Sciences and Center for Hearing Research, University of California, Irvine, California 92697
| |
Collapse
|
91
|
Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS). Neural Plast 2016; 2016:7453149. [PMID: 27042360 PMCID: PMC4793139 DOI: 10.1155/2016/7453149] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2015] [Revised: 01/26/2016] [Accepted: 02/07/2016] [Indexed: 12/29/2022] Open
Abstract
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.
Collapse
|
92
|
Henriques J, Pazart L, Grigoryeva L, Muzard E, Beaussant Y, Haffen E, Moulin T, Aubry R, Ortega JP, Gabriel D. Bedside Evaluation of the Functional Organization of the Auditory Cortex in Patients with Disorders of Consciousness. PLoS One 2016; 11:e0146788. [PMID: 26789734 PMCID: PMC4720275 DOI: 10.1371/journal.pone.0146788] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2015] [Accepted: 12/22/2015] [Indexed: 11/18/2022] Open
Abstract
To measure the level of residual cognitive function in patients with disorders of consciousness, the use of electrophysiological and neuroimaging protocols of increasing complexity is recommended. This work presents an EEG-based method capable of assessing the integrity of the auditory cortex at the individual level at the patient's bedside, and it can be seen as the first cortical stage of this hierarchical approach. The method is based on two features: first, the automatic detection of the presence of an N100 wave; and second, evidence of frequency processing in the auditory cortex obtained with a machine-learning-based classification of the EEG signals associated with different frequencies and auditory stimulation modalities. In the control group of twelve healthy volunteers, cortical frequency processing was clearly demonstrated. EEG recordings from two patients with disorders of consciousness showed evidence of partially preserved cortical processing in the first patient and none in the second patient. From these results, it appears that the classification method presented here reliably detects signal differences in the encoding of frequencies and is a useful tool in the evaluation of the integrity of the auditory cortex. Although the classification method presented in this work was designed for patients with disorders of consciousness, it can also be applied to other pathological populations.
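The machine-learning-based classification of EEG signals described above can be prototyped as a cross-validated classifier applied to flattened single-trial epochs, with above-chance decoding accuracy taken as evidence that the stimulation frequencies are differentially encoded. The sketch below uses scikit-learn for a generic linear classifier; the epoch dimensions, labels, fold count, and random data are assumptions, and this is not the authors' specific method.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical single-subject data: 200 epochs x 64 channels x 150 time points,
    # with labels coding the stimulation frequency category (e.g. low vs. high tone)
    rng = np.random.default_rng(7)
    epochs = rng.standard_normal((200, 64, 150))
    labels = rng.integers(0, 2, 200)

    X = epochs.reshape(len(epochs), -1)             # flatten channels x time
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, labels, cv=5)  # chance level here is 0.5
    print(scores.mean())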
Collapse
Affiliation(s)
- Julie Henriques
- Laboratoire de Mathématiques de Besançon, Besançon, France
- Cegos Deployment, Besançon, France
| | - Lionel Pazart
- INSERM CIC 1431 Centre d’Investigation Clinique en Innovation Technologique, CHU de Besançon, Besançon, France
- EA 481 Laboratoire de Neurosciences de Besançon, Besançon, France
| | | | - Emelyne Muzard
- Service de neurologie, CHU de Besançon, Besançon, France
| | - Yvan Beaussant
- Département douleur soins palliatifs, CHU de Besançon, Besançon, France
| | - Emmanuel Haffen
- INSERM CIC 1431 Centre d’Investigation Clinique en Innovation Technologique, CHU de Besançon, Besançon, France
- EA 481 Laboratoire de Neurosciences de Besançon, Besançon, France
- Service de Psychiatrie de l’adulte, CHU de Besançon, Besançon, France
| | - Thierry Moulin
- INSERM CIC 1431 Centre d’Investigation Clinique en Innovation Technologique, CHU de Besançon, Besançon, France
- EA 481 Laboratoire de Neurosciences de Besançon, Besançon, France
- Service de neurologie, CHU de Besançon, Besançon, France
| | - Régis Aubry
- INSERM CIC 1431 Centre d’Investigation Clinique en Innovation Technologique, CHU de Besançon, Besançon, France
- EA 481 Laboratoire de Neurosciences de Besançon, Besançon, France
- Département douleur soins palliatifs, CHU de Besançon, Besançon, France
| | - Juan-Pablo Ortega
- Laboratoire de Mathématiques de Besançon, Besançon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
| | - Damien Gabriel
- INSERM CIC 1431 Centre d’Investigation Clinique en Innovation Technologique, CHU de Besançon, Besançon, France
- EA 481 Laboratoire de Neurosciences de Besançon, Besançon, France
- * E-mail:
| |
Collapse
|
93
|
Rosburg T, Sörös P. The response decrease of auditory evoked potentials by repeated stimulation – Is there evidence for an interplay between habituation and sensitization? Clin Neurophysiol 2016; 127:397-408. [DOI: 10.1016/j.clinph.2015.04.071] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2014] [Revised: 04/21/2015] [Accepted: 04/25/2015] [Indexed: 11/30/2022]
|
94
|
Häkkinen S, Ovaska N, Rinne T. Processing of pitch and location in human auditory cortex during visual and auditory tasks. Front Psychol 2015; 6:1678. [PMID: 26594185 PMCID: PMC4635202 DOI: 10.3389/fpsyg.2015.01678] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Accepted: 10/19/2015] [Indexed: 01/22/2023] Open
Abstract
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL), irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the IPL showed task-dependent activation modulations but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone; rather, activations in human AC depend in a complex manner on the requirements of the task at hand.
Collapse
Affiliation(s)
- Suvi Häkkinen: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Noora Ovaska: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne: Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
Collapse
|
95
|
High-field fMRI reveals tonotopically-organized and core auditory cortex in the cat. Hear Res 2015; 325:1-11. [DOI: 10.1016/j.heares.2015.03.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/17/2014] [Revised: 01/26/2015] [Accepted: 03/05/2015] [Indexed: 01/12/2023]
|
96
|
Chouiter L, Tzovara A, Dieguez S, Annoni JM, Magezi D, De Lucia M, Spierer L. Experience-based Auditory Predictions Modulate Brain Activity to Silence as do Real Sounds. J Cogn Neurosci 2015; 27:1968-80. [PMID: 26042500 DOI: 10.1162/jocn_a_00835] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Interactions between the acoustic features of stimuli and experience-based internal models of the environment enable listeners to compensate for the disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as real sounds do precludes accepting predictive accounts of the filling-in phenomenon. We investigated the neurophysiological effects of internal models by testing whether single-trial electrophysiological responses to omitted sounds in a rule-based sequence of tones with varying pitch could be decoded from the responses to real sounds, and by analyzing the ERPs to the omissions with data-driven electrical neuroimaging methods. Decoding of the brain responses to different expected, but omitted, tones was above chance in both passive and active listening conditions when based on the responses to real sounds from active listening conditions. Topographic ERP analyses and electrical source estimations revealed that, in the absence of any stimulation, experience-based internal models elicit an electrophysiological activity different from noise and that the temporal dynamics of this activity depend on attention. We further found that the expected change in pitch direction of omitted tones modulated the activity of left posterior temporal areas 140-200 msec after the onset of omissions. Collectively, our results indicate that, even in the absence of any stimulation, internal models modulate brain activity as do real sounds, indicating that auditory filling-in can be accounted for by predictive activity.
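A hedged sketch of the decoding logic described above, not the authors' exact analysis: a classifier is trained on single-trial responses to physically presented tones of two pitches and then applied to epochs time-locked to expected-but-omitted tones, so that above-chance accuracy on the omission trials would indicate predictive activity. All dimensions, estimators, and data below are placeholder assumptions.

```python
# Hypothetical sketch: train on responses to real tones, test on omission epochs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_real, n_omit, n_feat = 300, 60, 64 * 100               # placeholder sizes (channels x time flattened)
X_real = rng.standard_normal((n_real, n_feat))            # epochs for physically presented tones
y_real = rng.integers(0, 2, n_real)                       # 0 = low pitch, 1 = high pitch
X_omit = rng.standard_normal((n_omit, n_feat))            # epochs aligned to expected-but-omitted tones
y_omit = rng.integers(0, 2, n_omit)                       # pitch predicted by the sequence rule

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_real, y_real)                                   # learn pitch-specific response patterns
omission_accuracy = clf.score(X_omit, y_omit)             # above chance => predictive activity
print(f"Decoding of omitted-tone pitch: {omission_accuracy:.2f} (chance = 0.50)")
```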
Collapse
Affiliation(s)
- Athina Tzovara: University of Lausanne; University Hospital of Lausanne; University of Zürich
Collapse
|
97
|
Functional Connectivity in MRI Is Driven by Spontaneous BOLD Events. PLoS One 2015; 10:e0124577. [PMID: 25922945 PMCID: PMC4429612 DOI: 10.1371/journal.pone.0124577] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Accepted: 10/30/2014] [Indexed: 01/12/2023] Open
Abstract
Functional brain signals are frequently decomposed into a relatively small set of large-scale, distributed cortical networks that are associated with different cognitive functions. It is generally assumed that the connectivity of these networks is static in time and constant over the whole network, although there is increasing evidence that this view is too simplistic. This work proposes novel techniques to investigate the contribution of spontaneous BOLD events to the temporal dynamics of functional connectivity as assessed by ultra-high field functional magnetic resonance imaging (fMRI). The results show that: 1) spontaneous events in recognised brain networks contribute significantly to network connectivity estimates; 2) these spontaneous events do not necessarily involve whole networks or nodes, but clusters of voxels which act in concert, forming transiently synchronising sub-networks; and 3) a task can significantly alter the number of localised spontaneous events that are detected within a single network. These findings support the notion that spontaneous events are the main driver of the large-scale networks that are commonly detected by seed-based correlation and ICA. Furthermore, we found that large-scale networks are manifestations of smaller, transiently synchronising sub-networks acting dynamically in concert, corresponding to spontaneous events, which do not necessarily involve all voxels within the network nodes oscillating in unison.
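One simple way to make the event-based view concrete is a point-process-style toy example (an assumption-laden sketch, not the paper's method): spontaneous events are detected as supra-threshold excursions of z-scored BOLD signals, and the correlation computed over event time points is compared with the correlation computed over the full series. The time series below are synthetic.

```python
# Toy example: sparse shared "events" embedded in two noisy node time series.
import numpy as np

rng = np.random.default_rng(2)
n_t = 500
shared_events = (rng.random(n_t) < 0.03).astype(float)         # sparse common events
ts_a = 2.0 * shared_events + rng.standard_normal(n_t) * 0.8     # node A: events + noise
ts_b = 2.0 * shared_events + rng.standard_normal(n_t) * 0.8     # node B: events + noise

def zscore(x):
    return (x - x.mean()) / x.std()

za, zb = zscore(ts_a), zscore(ts_b)
event_mask = (za > 1.0) | (zb > 1.0)                            # time points tied to supra-threshold events

full_corr = np.corrcoef(za, zb)[0, 1]                           # conventional full-series connectivity
event_corr = np.corrcoef(za[event_mask], zb[event_mask])[0, 1]  # connectivity restricted to event frames
print(f"Full-series correlation: {full_corr:.2f}")
print(f"Event-restricted correlation: {event_corr:.2f}")
```

In this toy setting, most of the full-series correlation is carried by the sparse event frames, which is the intuition behind the paper's claim that spontaneous events drive connectivity estimates.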
Collapse
|
98
|
Lau C, Zhang JW, McPherson B, Pienkowski M, Wu EX. Long-term, passive exposure to non-traumatic acoustic noise induces neural adaptation in the adult rat medial geniculate body and auditory cortex. Neuroimage 2015; 107:1-9. [DOI: 10.1016/j.neuroimage.2014.11.048] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2014] [Revised: 11/12/2014] [Accepted: 11/22/2014] [Indexed: 02/02/2023] Open
|
99
|
Thomas JM, Huber E, Stecker GC, Boynton GM, Saenz M, Fine I. Population receptive field estimates of human auditory cortex. Neuroimage 2015; 105:428-39. [PMID: 25449742 PMCID: PMC4262557 DOI: 10.1016/j.neuroimage.2014.10.060] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2013] [Revised: 10/15/2014] [Accepted: 10/28/2014] [Indexed: 01/07/2023] Open
Abstract
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population receptive field (pRF) model developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory-responsive regions (non-PAC).
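A minimal sketch of the pRF idea for a single voxel, under simplifying assumptions (Gaussian tuning in log-frequency, a simple double-gamma HRF, and a grid search in place of nonlinear optimisation); the stimulus sequence and voxel time course below are synthetic placeholders rather than real data.

```python
# Sketch: fit a Gaussian frequency pRF (centre + width) to one synthetic voxel.
import numpy as np
from scipy.stats import gamma

def hrf(t):
    # Simple double-gamma haemodynamic response approximation.
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

tr, n_vols = 2.0, 120
freq_axis = np.linspace(np.log2(200), np.log2(8000), 50)        # log2-frequency grid (Hz)

# Synthetic stimulus: one band presented per TR (one-hot over the frequency grid).
stim = np.zeros((n_vols, freq_axis.size))
stim[np.arange(n_vols), np.random.default_rng(3).integers(0, freq_axis.size, n_vols)] = 1.0

def predict_bold(center, width):
    # Gaussian pRF tuning -> neural drive per TR -> convolution with the HRF.
    tuning = np.exp(-0.5 * ((freq_axis - center) / width) ** 2)
    neural = stim @ tuning
    return np.convolve(neural, hrf(np.arange(0, 30, tr)))[:n_vols]

# Synthetic voxel whose "true" pRF we then try to recover.
true_pred = predict_bold(center=np.log2(1000), width=0.5)
data = true_pred + np.random.default_rng(4).standard_normal(n_vols) * 0.1

best, best_r = None, -np.inf
for c in freq_axis:
    for w in (0.25, 0.5, 1.0, 2.0):                             # candidate widths in log2 units (~octaves)
        r = np.corrcoef(predict_bold(c, w), data)[0, 1]
        if r > best_r:
            best, best_r = (c, w), r

print(f"Estimated centre: {2 ** best[0]:.0f} Hz, width: {best[1]:.2f} (r = {best_r:.2f})")
```

Repeating the same fit per voxel yields a tonotopic map from the fitted centres and a bandwidth estimate from the fitted widths; a real analysis would use nonlinear optimisation and the measured stimulus envelopes rather than this grid search.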
Collapse
Affiliation(s)
- Jessica M Thomas: Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Elizabeth Huber: Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- G Christopher Stecker: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Geoffrey M Boynton: Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Melissa Saenz: Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neurosciences, Lausanne University Hospital, 1011, Switzerland; Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015, Switzerland
- Ione Fine: Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
|
100
|
|