1. Farahani ED, Wouters J, van Wieringen A. Age-related hearing loss is associated with alterations in temporal envelope processing in different neural generators along the auditory pathway. Front Neurol 2022; 13:905017. PMID: 35989932; PMCID: PMC9389009; DOI: 10.3389/fneur.2022.905017.
Abstract
People with age-related hearing loss suffer from speech understanding difficulties, even after correcting for differences in audibility. These problems are not only attributed to deficits in audibility but are also associated with changes in central temporal processing. The goal of this study is to obtain an understanding of potential alterations in temporal envelope processing for middle-aged and older persons with and without hearing impairment. The time series of activity of subcortical and cortical neural generators was reconstructed using a minimum-norm imaging technique. This novel technique allows for reconstructing a wide range of neural generators with minimal prior assumptions regarding the number and location of the generators. The results indicated that the response strength and phase coherence of middle-aged participants with hearing impairment (HI) were larger than those of normal-hearing (NH) participants. In contrast, for the older participants, significantly smaller response strength and phase coherence were observed in the participants with HI than in the NH ones for most modulation frequencies. Hemispheric asymmetry in the response strength was also altered in middle-aged and older participants with hearing impairment, shifting toward the right hemisphere. Our brain source analyses show that age-related hearing loss is accompanied by changes in temporal envelope processing, although the nature of these changes varies with age.
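The two dependent measures in this abstract, response strength and phase coherence at the modulation frequency, can be illustrated with a toy computation. This is not the study's pipeline; the 40 Hz modulation frequency, epoch counts, and signal-to-noise ratio below are invented for illustration:

```python
import numpy as np

fs = 1000          # sampling rate (Hz)
fm = 40.0          # modulation frequency of interest (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)

def assr_metrics(epochs, fs, fm):
    """Response strength (mean spectral amplitude at fm) and phase
    coherence (inter-trial phase-locking value at fm) over epochs."""
    spectra = np.fft.rfft(epochs, axis=1)
    freqs = np.fft.rfftfreq(epochs.shape[1], 1 / fs)
    k = np.argmin(np.abs(freqs - fm))            # FFT bin closest to fm
    bin_vals = spectra[:, k]
    strength = np.mean(np.abs(bin_vals)) / epochs.shape[1]
    plv = np.abs(np.mean(bin_vals / np.abs(bin_vals)))  # in [0, 1]
    return strength, plv

# A phase-locked 40 Hz response buried in noise, versus noise alone
signal_epochs = 0.5 * np.sin(2 * np.pi * fm * t) + rng.standard_normal((30, t.size))
noise_epochs = rng.standard_normal((30, t.size))

s_sig, plv_sig = assr_metrics(signal_epochs, fs, fm)
s_noise, plv_noise = assr_metrics(noise_epochs, fs, fm)
```

Phase-locked epochs yield a phase coherence near 1, while pure noise yields a value near zero, which is the contrast the study compares across hearing groups.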
2. MEG correlates of temporal regularity relevant to pitch perception in human auditory cortex. Neuroimage 2022; 249:118879. PMID: 34999204; PMCID: PMC8883111; DOI: 10.1016/j.neuroimage.2022.118879.
Abstract
We recorded neural responses in human participants to three types of pitch-evoking regular stimuli at rates below and above the lower limit of pitch using magnetoencephalography (MEG). These bandpass filtered (1–4 kHz) stimuli were harmonic complex tones (HC), click trains (CT), and regular interval noise (RIN). Trials consisted of noise-regular-noise (NRN) or regular-noise-regular (RNR) segments in which the repetition rate (or fundamental frequency F0) was either above (250 Hz) or below (20 Hz) the lower limit of pitch. Neural activation was estimated and compared at the sensor and source levels. The pitch-relevant regular stimuli (F0 = 250 Hz) were all associated with marked evoked responses at around 140 ms after noise-to-regular transitions at both sensor and source levels. In particular, greater evoked responses to pitch-relevant stimuli than to pitch-irrelevant stimuli (F0 = 20 Hz) were localized along Heschl's sulcus around 140 ms. The regularity-onset responses for RIN were much weaker than for the other types of regular stimuli (HC, CT). This effect was localized over planum temporale, planum polare, and lateral Heschl's gyrus. Importantly, the effect of pitch did not interact with the stimulus type. That is, we did not find evidence to support different responses for different types of regular stimuli from the spatiotemporal cluster of the pitch effect (∼140 ms). The current data demonstrate cortical sensitivity to temporal regularity relevant to pitch that is consistently present across different pitch-relevant stimuli in Heschl's sulcus, between Heschl's gyrus and planum temporale, both of which have been identified as a “pitch center” based on different modalities.
3. Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. PMID: 34875029; PMCID: PMC9376871; DOI: 10.1093/cercor/bhab434.
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
4. Fuglsang SA, Madsen KH, Puonti O, Hjortkjær J, Siebner HR. Mapping cortico-subcortical sensitivity to 4 Hz amplitude modulation depth in human auditory system with functional MRI. Neuroimage 2021; 246:118745. PMID: 34808364; DOI: 10.1016/j.neuroimage.2021.118745.
Abstract
Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated the auditory brainstem and cortex structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in the Heschl's gyrus but also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing low-rate modulation content of sounds in the human auditory system.
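The parametric stimulus manipulation described here, 4 Hz sinusoidal amplitude modulation of noise at varying depths, is straightforward to sketch. This is an illustrative reconstruction, not the authors' stimulus code; the sampling rate, duration, and depth values are arbitrary choices:

```python
import numpy as np

def am_noise(duration_s, fs, fm=4.0, depth=1.0, seed=0):
    """Gaussian noise carrier with sinusoidal amplitude modulation at fm.
    depth=0 yields unmodulated noise; depth=1 yields 100% modulation."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration_s, 1 / fs)
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * carrier

# Stimuli spanning the modulation-depth continuum (same carrier via fixed seed)
stimuli = {d: am_noise(2.0, 16000, depth=d) for d in (0.0, 0.25, 0.5, 1.0)}
```

Holding the carrier fixed across depths isolates the envelope manipulation, which is what lets the BOLD response be modeled as a function of depth alone.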
Affiliation(s)
- Søren A Fuglsang
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark
5. Khalighinejad B, Patel P, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Functional characterization of human Heschl's gyrus in response to natural speech. Neuroimage 2021; 235:118003. PMID: 33789135; PMCID: PMC8608271; DOI: 10.1016/j.neuroimage.2021.118003.
Abstract
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG. We also observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.
Affiliation(s)
- Bahar Khalighinejad
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Prachi Patel
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Jose L. Herrero
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Ashesh D. Mehta
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States. Corresponding author at: Department of Electrical Engineering, Columbia University, New York, NY, United States
6. Farahani ED, Wouters J, van Wieringen A. Brain mapping of auditory steady-state responses: A broad view of cortical and subcortical sources. Hum Brain Mapp 2021; 42:780-796. PMID: 33166050; PMCID: PMC7814770; DOI: 10.1002/hbm.25262.
Abstract
Auditory steady-state responses (ASSRs) are evoked brain responses to modulated or repetitive acoustic stimuli. Investigating the underlying neural generators of ASSRs is important to gain in-depth insight into the mechanisms of auditory temporal processing. The aim of this study is to reconstruct an extensive range of neural generators, that is, cortical and subcortical, as well as primary and non-primary ones. This extensive overview of neural generators provides an appropriate basis for studying functional connectivity. To this end, a minimum-norm imaging (MNI) technique is employed. We also present a novel extension to MNI which facilitates source analysis by quantifying the ASSR for each dipole. Results demonstrate that the proposed MNI approach is successful in reconstructing sources located both within (primary) and outside (non-primary) of the auditory cortex (AC). Primary sources are detected in different stimulation conditions (four modulation frequencies and two sides of stimulation), thereby demonstrating the robustness of the approach. This study is one of the first investigations to identify non-primary sources. Moreover, we show that the MNI approach is also capable of reconstructing the subcortical activities of ASSRs. Finally, the results obtained using the MNI approach outperform the group-independent component analysis method on the same data, in terms of detection of sources in the AC, reconstructing the subcortical activities and reducing computational load.
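At its core, minimum-norm imaging solves an underdetermined inverse problem with an L2 penalty. A minimal linear-algebra sketch follows; the random lead field, regularization value, and source index are all hypothetical, and the study's actual implementation involves many more ingredients (realistic head models, noise covariance, dipole orientations):

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_sources = 32, 200
L = rng.standard_normal((n_sensors, n_sources))   # hypothetical lead field

def minimum_norm(L, m, lam=1.0):
    """Tikhonov-regularised minimum-norm estimate:
    J = L^T (L L^T + lam * I)^-1 m."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, m)

# Sensor data generated by a single active source (index 17) plus noise
j_true = np.zeros(n_sources)
j_true[17] = 1.0
m = L @ j_true + 0.01 * rng.standard_normal(n_sensors)
j_hat = minimum_norm(L, m, lam=0.1)
```

The estimate smears activity over many dipoles, which is why the abstract's per-dipole ASSR quantification is a useful extension: it scores each reconstructed dipole time series at the stimulation frequency rather than relying on the raw amplitudes alone.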
Affiliation(s)
- Ehsan Darestani Farahani
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
7. Farahani ED, Wouters J, van Wieringen A. Neural Generators Underlying Temporal Envelope Processing Show Altered Responses and Hemispheric Asymmetry Across Age. Front Aging Neurosci 2020; 12:596551. PMID: 33343335; PMCID: PMC7746817; DOI: 10.3389/fnagi.2020.596551.
Abstract
Speech understanding problems are highly prevalent in the aging population, even when hearing sensitivity is clinically normal. These difficulties are attributed to changes in central temporal processing with age and can potentially be captured by age-related changes in neural generators. The aim of this study is to investigate age-related changes in a wide range of neural generators during temporal processing in middle-aged and older persons with normal audiometric thresholds. A minimum-norm imaging technique is employed to reconstruct cortical and subcortical neural generators of temporal processing for different acoustic modulations. The results indicate that for relatively slow modulations (<50 Hz), the response strength of neural sources is higher in older adults than in younger ones, while the phase-locking does not change. For faster modulations (80 Hz), both the response strength and the phase-locking of neural sources are reduced in older adults compared to younger ones. These age-related changes in temporal envelope processing of slow and fast acoustic modulations are possibly due to a loss of functional inhibition that accompanies aging. Both cortical (primary and non-primary) and subcortical neural generators demonstrate similar age-related changes in response strength and phase-locking. Hemispheric asymmetry is also altered in older adults compared to younger ones; the alterations depend on the modulation frequency and side of stimulation. The current findings at source level could have important implications for the understanding of age-related changes in auditory temporal processing and for developing advanced rehabilitation strategies to address speech understanding difficulties in the aging population.
Affiliation(s)
- Ehsan Darestani Farahani
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Jan Wouters
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
8. Sohoglu E, Kumar S, Chait M, Griffiths TD. Multivoxel codes for representing and integrating acoustic features in human cortex. Neuroimage 2020; 217:116661. PMID: 32081785; PMCID: PMC7339141; DOI: 10.1016/j.neuroimage.2020.116661.
Abstract
Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.
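The logic of testing for an independent code is that a classifier trained on one feature should generalize across changes in the other. This can be sketched with simulated voxel patterns in which the two features combine additively. The data below are entirely synthetic and the classifier is a simple nearest-centroid rule, not the study's actual MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 100
freq_pat = {f: rng.standard_normal(n_vox) for f in ("low", "high")}
am_pat = {r: rng.standard_normal(n_vox) for r in ("slow", "fast")}

def simulate(freq, rate, n=40, noise=1.0):
    """Voxel patterns where frequency and AM codes add independently."""
    return (freq_pat[freq] + am_pat[rate]
            + noise * rng.standard_normal((n, n_vox)))

def nearest_centroid_acc(train_a, train_b, test_a, test_b):
    """Train class centroids on one condition, score accuracy on another."""
    ca, cb = train_a.mean(0), train_b.mean(0)
    correct = 0
    for x, label in [(test_a, 0), (test_b, 1)]:
        d = np.stack([np.linalg.norm(x - ca, axis=1),
                      np.linalg.norm(x - cb, axis=1)])
        correct += np.sum(np.argmin(d, axis=0) == label)
    return correct / (len(test_a) + len(test_b))

# Train a frequency decoder at the slow AM rate, test at the fast rate:
acc_cross = nearest_centroid_acc(simulate("low", "slow"), simulate("high", "slow"),
                                 simulate("low", "fast"), simulate("high", "fast"))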
Affiliation(s)
- Ediz Sohoglu
- School of Psychology, University of Sussex, Brighton, BN1 9QH, United Kingdom
- Sukhbinder Kumar
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
- Maria Chait
- Ear Institute, University College London, London, United Kingdom
- Timothy D Griffiths
- Institute of Neurobiology, Medical School, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
9. Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:55300. PMID: 32618270; PMCID: PMC7410487; DOI: 10.7554/elife.55300.
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners’ auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex.

It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to ‘see’ and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 and 78 were asked to listen to speech in a noisy setting while they underwent fMRI. The researchers then tested a computer model using the data. In the older individuals, the brain’s tuning to the timing or rhythm of speech was broader, while the younger participants were more able to finely tune into this feature of sound. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This hearing change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person’s specific needs.
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
10. Poeppel D, Assaneo MF. Speech rhythms and their neural foundations. Nat Rev Neurosci 2020; 21:322-334. PMID: 32376899; DOI: 10.1038/s41583-020-0304-4.
Abstract
The recognition of spoken language has typically been studied by focusing on either words or their constituent elements (for example, low-level features or phonemes). More recently, the 'temporal mesoscale' of speech has been explored, specifically regularities in the envelope of the acoustic signal that correlate with syllabic information and that play a central role in production and perception processes. The temporal structure of speech at this scale is remarkably stable across languages, with a preferred range of rhythmicity of 2–8 Hz. Importantly, this rhythmicity is required by the processes underlying the construction of intelligible speech. Much current work focuses on audio-motor interactions in speech, highlighting behavioural and neural evidence that demonstrates how properties of perceptual and motor systems, and their relation, can underlie the mesoscale speech rhythms. The data invite the hypothesis that the speech motor cortex is best modelled as a neural oscillator, a conjecture that aligns well with current proposals highlighting the fundamental role of neural oscillations in perception and cognition. The findings also show motor theories (of speech) in a different light, placing new mechanistic constraints on accounts of the action-perception interface.
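The 'temporal mesoscale' described here can be made concrete by computing the modulation spectrum of a sound's amplitude envelope; for natural speech the peak falls in the 2–8 Hz range. The toy signal below mimics this with a 5 Hz envelope (a synthetic example with invented parameters, not the review's analysis):

```python
import numpy as np

fs = 8000
t = np.arange(0, 4.0, 1 / fs)
rng = np.random.default_rng(2)
# Toy "speech": a noise carrier with a 5 Hz (syllable-rate) envelope
sound = (1 + np.sin(2 * np.pi * 5.0 * t)) * rng.standard_normal(t.size)

# Envelope: rectify, then smooth with a ~50 ms moving average (low-pass)
win = int(0.05 * fs)
envelope = np.convolve(np.abs(sound), np.ones(win) / win, mode="same")

# Modulation spectrum: FFT of the mean-removed envelope
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
peak_hz = freqs[np.argmax(spec)]
```

Applied to recorded speech instead of this toy signal, the same analysis produces the cross-linguistically stable 2–8 Hz peak the review discusses.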
Affiliation(s)
- David Poeppel
- Department of Neuroscience, Max Planck Institute, Frankfurt, Germany; Department of Psychology, New York University, New York, NY, USA
- M Florencia Assaneo
- Department of Psychology, New York University, New York, NY, USA; Instituto de Neurobiologia, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
11. Kim SG, Poeppel D, Overath T. Modulation change detection in human auditory cortex: Evidence for asymmetric, non-linear edge detection. Eur J Neurosci 2020; 52:2889-2904. PMID: 32080939; DOI: 10.1111/ejn.14707.
Abstract
Changes in modulation rate are important cues for parsing acoustic signals, such as speech. We parametrically controlled modulation rate via the correlation coefficient (r) of amplitude spectra across fixed frequency channels between adjacent time frames: broadband modulation spectra are biased toward slow modulation rates with increasing r, and vice versa. By concatenating segments with different r, acoustic changes of various directions (e.g., changes from low to high correlation coefficients, that is, random-to-correlated, or vice versa) and sizes (e.g., changes from low to high or from medium to high correlation coefficients) can be obtained. Participants listened to sound blocks and detected changes in correlation while MEG was recorded. Evoked responses to changes in correlation demonstrated (a) an asymmetric representation of change direction: random-to-correlated changes produced a prominent evoked field around 180 ms, while correlated-to-random changes evoked an earlier response with peaks at around 70 and 120 ms, whose topographies resemble those of the canonical P50m and N100m responses, respectively, and (b) a highly non-linear representation of correlation structure, whereby even small changes involving segments with a high correlation coefficient were much more salient than relatively large changes that did not involve segments with high correlation coefficients. Induced responses revealed phase tracking in the delta and theta frequency bands for the high correlation stimuli. The results confirm a high sensitivity for low modulation rates in human auditory cortex, both in terms of their representation and their segregation from other modulation rates.
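The stimulus construction described here, controlling the correlation coefficient r of amplitude patterns between adjacent time frames, amounts to an AR(1) process across frames. A plausible sketch follows; the frame and channel counts are invented, and the actual stimuli were of course rendered as audio from such frame sequences:

```python
import numpy as np

def correlated_frames(n_frames, n_channels, r, seed=3):
    """Per-channel amplitude patterns whose adjacent frames correlate at r
    (AR(1) across time): high r biases toward slow modulation rates,
    low r toward fast ones."""
    rng = np.random.default_rng(seed)
    frames = np.empty((n_frames, n_channels))
    frames[0] = rng.standard_normal(n_channels)
    for i in range(1, n_frames):
        frames[i] = (r * frames[i - 1]
                     + np.sqrt(1 - r**2) * rng.standard_normal(n_channels))
    return frames

def adjacent_corr(frames):
    """Mean Pearson correlation between successive frames."""
    return np.mean([np.corrcoef(frames[i], frames[i + 1])[0, 1]
                    for i in range(len(frames) - 1)])

slow = correlated_frames(200, 64, r=0.9)   # correlated segment
fast = correlated_frames(200, 64, r=0.1)   # near-random segment
```

Concatenating a `slow` and a `fast` segment produces exactly the kind of correlated-to-random boundary whose evoked response the study measured.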
Affiliation(s)
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA; Duke Institute for Brain Sciences, Duke University, Durham, NC, USA; Center for Cognitive Neuroscience, Duke University, Durham, NC, USA
12. Yakunina N, Tae WS, Kim SS, Nam EC. Functional MRI evidence of the cortico-olivary efferent pathway during active auditory target processing in humans. Hear Res 2019; 379:1-11. DOI: 10.1016/j.heares.2019.04.010.
13. Yi HG, Leonard MK, Chang EF. The Encoding of Speech Sounds in the Superior Temporal Gyrus. Neuron 2019; 102:1096-1110. PMID: 31220442; PMCID: PMC6602075; DOI: 10.1016/j.neuron.2019.04.023.
Abstract
The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.
Affiliation(s)
- Han Gyol Yi
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Matthew K Leonard
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
14. Flinker A, Doyle WK, Mehta AD, Devinsky O, Poeppel D. Spectrotemporal modulation provides a unifying framework for auditory cortical asymmetries. Nat Hum Behav 2019; 3:393-405. PMID: 30971792; PMCID: PMC6650286; DOI: 10.1038/s41562-019-0548-z.
Abstract
The principles underlying functional asymmetries in cortex remain debated. For example, it is accepted that speech is processed bilaterally in auditory cortex, but a left hemisphere dominance emerges when the input is interpreted linguistically. The mechanisms, however, are contested: what sound features or processing principles underlie laterality? Recent findings across species (humans, canines, bats) provide converging evidence that spectrotemporal sound features drive asymmetrical responses. Typically, accounts invoke models wherein the hemispheres differ in time-frequency resolution or integration window size. We develop a framework that builds on and unifies prevailing models, using spectrotemporal modulation space. Using signal processing techniques motivated by neural responses, we test this approach employing behavioral and neurophysiological measures. We show how psychophysical judgments align with spectrotemporal modulations and then characterize the neural sensitivities to temporal and spectral modulations. We demonstrate differential contributions from both hemispheres, with a left lateralization for temporal modulations and a weaker right lateralization for spectral modulations. We argue that representations in the modulation domain provide a more mechanistic basis to account for lateralization in auditory cortex.
Affiliation(s)
- Adeen Flinker
- Department of Psychology, New York University, New York, NY, USA
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- Werner K Doyle
- Department of Neurosurgery, New York University School of Medicine, New York, NY, USA
- Ashesh D Mehta
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Manhasset, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University School of Medicine, New York, NY, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
15
Farahani ED, Wouters J, van Wieringen A. Contributions of non-primary cortical sources to auditory temporal processing. Neuroimage 2019; 191:303-314. [PMID: 30794868 DOI: 10.1016/j.neuroimage.2019.02.037]
Abstract
Temporal processing is essential for speech perception and directional hearing. However, the number and locations of cortical sources involved in auditory temporal processing are still a matter of debate. Using source reconstruction of human EEG responses, we show that, in addition to primary sources in the auditory cortices, sources outside the auditory cortex, designated as non-primary sources, are involved in auditory temporal processing. Non-primary sources within the left and right motor areas, the superior parietal lobe and the right occipital lobe were activated by amplitude-modulated stimuli, and were involved in the functional network. The robustness of these findings was checked for different stimulation conditions. The non-primary sources showed weaker phase-locking and lower activity than primary sources. These findings suggest that the non-primary sources belong to the non-primary auditory pathway. This pathway, together with the non-primary sources detected in the motor areas, may explain how the motor area receives the auditory input posited by accounts of temporal prediction of upcoming stimuli and by the motor theory of speech perception.
Affiliation(s)
- Ehsan Darestani Farahani
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium
16
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243]
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferably encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: While decoding from macaque auditory cortex was most accurate at fast rates (> 30 Hz), humans had highest sensitivity to ~3 Hz, a relevant rate for speech analysis. These findings suggest that characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), MD Maastricht, The Netherlands
17
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038]
18
Santoro R, Moerel M, De Martino F, Valente G, Ugurbil K, Yacoub E, Formisano E. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns. Proc Natl Acad Sci U S A 2017; 114:4799-4804. [PMID: 28420788 PMCID: PMC5422795 DOI: 10.1073/pnas.1617622114]
Abstract
Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are optimized finely to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice).
Affiliation(s)
- Roberta Santoro
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Brain and Language Laboratory, Department of Clinical Neuroscience, University Medical School, University of Geneva, CH-1211 Geneva, Switzerland
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Kamil Ugurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center, 6200 MD Maastricht, The Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands
19
Human Superior Temporal Gyrus Organization of Spectrotemporal Modulation Tuning Derived from Speech Stimuli. J Neurosci 2016; 36:2014-26. [PMID: 26865624 DOI: 10.1523/jneurosci.1779-15.2016]
Abstract
The human superior temporal gyrus (STG) is critical for speech perception, yet the organization of spectrotemporal processing of speech within the STG is not well understood. Here, to characterize the spatial organization of spectrotemporal processing of speech across human STG, we use high-density cortical surface field potential recordings while participants listened to natural continuous speech. While synthetic broad-band stimuli did not yield sustained activation of the STG, spectrotemporal receptive fields could be reconstructed from vigorous responses to speech stimuli. We find that the human STG displays a robust anterior-posterior spatial distribution of spectrotemporal tuning in which the posterior STG is tuned for temporally fast varying speech sounds that have relatively constant energy across the frequency axis (low spectral modulation) while the anterior STG is tuned for temporally slow varying speech sounds that have a high degree of spectral variation across the frequency axis (high spectral modulation). This work illustrates organization of spectrotemporal processing in the human STG, and illuminates processing of ethologically relevant speech signals in a region of the brain specialized for speech perception.

SIGNIFICANCE STATEMENT: Considerable evidence has implicated the human superior temporal gyrus (STG) in speech processing. However, the gross organization of spectrotemporal processing of speech within the STG is not well characterized. Here we use natural speech stimuli and advanced receptive field characterization methods to show that spectrotemporal features within speech are well organized along the posterior-to-anterior axis of the human STG. These findings demonstrate robust functional organization based on spectrotemporal modulation content, and illustrate that much of the encoded information in the STG represents the physical acoustic properties of speech stimuli.
20
Functional magnetic resonance imaging confirms forward suppression for rapidly alternating sounds in human auditory cortex but not in the inferior colliculus. Hear Res 2016; 335:25-32. [PMID: 26899342 DOI: 10.1016/j.heares.2016.02.010]
Abstract
Forward suppression at the level of the auditory cortex has been suggested to subserve auditory stream segregation. Recent results in non-streaming stimulation contexts have indicated that forward suppression can also be observed in the inferior colliculus; whether this holds for streaming-related contexts remains unclear. Here, we used cardiac-gated fMRI to examine forward suppression in the inferior colliculus (and the rest of the human auditory pathway) in response to canonical streaming stimuli (rapid tone sequences comprised of either one repetitive tone or two alternating tones). The first stimulus is typically perceived as a single stream, the second as two interleaved streams. In different experiments using either pure tones differing in frequency or bandpass-filtered noise differing in inter-aural time differences, we observed stronger auditory cortex activation in response to alternating vs. repetitive stimulation, consistent with the presence of forward suppression. In contrast, activity in the inferior colliculus and other subcortical nuclei did not significantly differ between alternating and monotonic stimuli. This finding could be explained by active amplification of forward suppression in auditory cortex, by a low rate (or absence) of cells showing forward suppression in inferior colliculus, or both.
21
Overath T, McDermott JH, Zarate JM, Poeppel D. The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts. Nat Neurosci 2015; 18:903-11. [PMID: 25984889 PMCID: PMC4769593 DOI: 10.1038/nn.4021]
Abstract
Speech contains temporal structure that the brain must analyze to enable linguistic processing. To investigate the neural basis of this analysis, we used sound quilts, stimuli constructed by shuffling segments of a natural sound, approximately preserving its properties on short timescales while disrupting them on longer scales. We generated quilts from foreign speech to eliminate language cues and manipulated the extent of natural acoustic structure by varying the segment length. Using functional magnetic resonance imaging, we identified bilateral regions of the superior temporal sulcus (STS) whose responses varied with segment length. This effect was absent in primary auditory cortex and did not occur for quilts made from other natural sounds or acoustically matched synthetic sounds, suggesting tuning to speech-specific spectrotemporal structure. When examined parametrically, the STS response increased with segment length up to ∼500 ms. Our results identify a locus of speech analysis in human auditory cortex that is distinct from lexical, semantic or syntactic processes.
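The segment-shuffling idea behind sound quilts can be sketched in a few lines. This is a toy illustration, not the authors' published algorithm: it only shuffles fixed-length segments, preserving structure within segments while disrupting it across them, whereas the published method additionally chooses segment orderings that reduce boundary discontinuities. The function name `make_quilt` and its parameters are our own.

```python
import numpy as np

def make_quilt(signal, segment_len, rng=None):
    """Shuffle fixed-length segments of a 1-D signal.

    Short-timescale structure (within a segment) is approximately
    preserved; longer-scale structure (across segments) is disrupted.
    Samples beyond a whole number of segments are discarded.
    """
    rng = np.random.default_rng(rng)
    n_seg = len(signal) // segment_len
    segments = signal[: n_seg * segment_len].reshape(n_seg, segment_len)
    order = rng.permutation(n_seg)  # random segment ordering
    return segments[order].ravel()

# Example: quilt a 1 s, 16 kHz signal using 30 ms segments.
fs = 16000
x = np.sin(2 * np.pi * 5 * np.arange(fs) / fs)  # slow 5 Hz structure
quilted = make_quilt(x, segment_len=int(0.030 * fs), rng=0)
```

Varying `segment_len` is the experimental manipulation described above: longer segments leave more natural temporal structure intact.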
Affiliation(s)
- Tobias Overath
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, USA
- Department of Psychology, New York University, New York, New York, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, USA
- Jean Mary Zarate
- Department of Psychology, New York University, New York, New York, USA
- David Poeppel
- Department of Psychology, New York University, New York, New York, USA
- Center for Neural Science, New York University, New York, New York, USA
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
22
Baumann S, Joly O, Rees A, Petkov CI, Sun L, Thiele A, Griffiths TD. The topography of frequency and time representation in primate auditory cortices. eLife 2015; 4. [PMID: 25590651 PMCID: PMC4398946 DOI: 10.7554/elife.03256]
Abstract
Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area.
Affiliation(s)
- Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Timothy D Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
23
Gutschalk A, Steinmann I. Stimulus dependence of contralateral dominance in human auditory cortex. Hum Brain Mapp 2014; 36:883-96. [PMID: 25346487 DOI: 10.1002/hbm.22673]
Abstract
The auditory system is often considered to show little contralateral dominance but physiological reports on the contralateral dominance of activity evoked by monaural sound vary widely. Here, we show that part of this variation is stimulus-dependent: blood oxygen level dependent (BOLD) responses to 32 s of monaurally presented unmodulated noise (UN) showed activation in contralateral auditory cortex (AC) and deactivation in ipsilateral AC compared to nonstimulus baseline. Slow amplitude-modulated (AM) noise evoked strong contralateral activation and minimal ipsilateral activation. The contrast of AM-versus-UN was used to separate fMRI activity related to the slow amplitude modulation per se. This difference activation was bilateral although still stronger in contralateral AC. In magnetoencephalography (MEG), the response was dominated by the steady-state activity phase locked to the amplitude modulation. This MEG activity showed no consistent contralateral dominance across listeners. Subcortical BOLD activation was strongly contralateral subsequent to the superior olivary complex (SOC) and showed no significant difference between modulated and UN. An acallosal participant showed fMRI activation similar to that of the group, making transcallosal transmission an unlikely source of ipsilateral enhancement or ipsilateral deactivation. These results suggest that ascending activity subsequent to the SOC is strongly dominant contralateral to the stimulus ear. In contrast, the part of BOLD and MEG activity related to slow amplitude modulation is more bilateral and only observed in AC. Ipsilateral deactivation can potentially bias measures of contralateral BOLD dominance and should be considered in future studies.
24
Peelle JE. Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 2014; 8:253. [PMID: 25191218 PMCID: PMC4139601 DOI: 10.3389/fnins.2014.00253]
Abstract
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
25
Langers DRM. Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments. Hum Brain Mapp 2013; 35:1544-61. [PMID: 23633425 PMCID: PMC6868999 DOI: 10.1002/hbm.22272]
Abstract
Although orderly representations of sound frequency in the brain play a guiding role in the investigation of auditory processing, a rigorous statistical evaluation of cortical tonotopic maps has so far hardly been attempted. In this report, the group-level significance of local tonotopic gradients was assessed using mass-multivariate statistics. The existence of multiple fields on the superior surface of the temporal lobe in both hemispheres was shown. These fields were distinguishable on the basis of tonotopic gradient direction and may likely be identified with the human homologues of the core areas AI and R in primates. Moreover, an objective comparison was made between the usage of volumetric and surface-based registration methods. Although the surface-based method resulted in a better registration across subjects of the grey matter segment as a whole, the alignment of functional subdivisions within the cortical sheet did not appear to improve over volumetric methods. This suggests that the variable relationship between the structural and the functional characteristics of auditory cortex is a limiting factor that cannot be overcome by morphology-based registration techniques alone. Finally, to illustrate how the proposed approach may be used in clinical practice, the method was used to test for focal differences regarding the tonotopic arrangements in healthy controls and tinnitus patients. No significant differences were observed, suggesting that tinnitus does not necessarily require tonotopic reorganisation to occur.
Affiliation(s)
- Dave R M Langers
- National Institute for Health Research Nottingham Hearing Biomedical Research Unit, School of Clinical Sciences, University of Nottingham, Queen's Medical Centre, Nottingham, United Kingdom
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
26
Peelle JE. The hemispheric lateralization of speech processing depends on what "speech" is: a hierarchical perspective. Front Hum Neurosci 2012; 6:309. [PMID: 23162455 PMCID: PMC3499798 DOI: 10.3389/fnhum.2012.00309]
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
27
Peelle JE, Davis MH. Neural Oscillations Carry Speech Rhythm through to Comprehension. Front Psychol 2012; 3:320. [PMID: 22973251 PMCID: PMC3434440 DOI: 10.3389/fpsyg.2012.00320]
Abstract
A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners' processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging - particularly electroencephalography (EEG) and magnetoencephalography (MEG) - point to phase locking by ongoing cortical oscillations to low-frequency information (~4-8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.
Affiliation(s)
- Jonathan E. Peelle
- Center for Cognitive Neuroscience and Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Matthew H. Davis
- Medical Research Council Cognition and Brain Sciences Unit, Cambridge, UK
28
Luo H, Poeppel D. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex. Front Psychol 2012; 3:170. [PMID: 22666214 PMCID: PMC3364513 DOI: 10.3389/fpsyg.2012.00170]
Abstract
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (∼20-80 ms duration information) and the theta band (∼150-300 ms), corresponding to segmental and diphonic versus syllabic modulation rates, respectively. It has been hypothesized that auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that such non-speech stimuli with temporal structure matching speech-relevant scales (∼25 and ∼200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a mesoscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
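The stimulus construction described above — concatenating frequency-modulated segments of a chosen duration — can be approximated with a simple sketch. This is an illustration of the general idea, not the authors' exact stimuli; in this simplification each segment holds a single random frequency, with phase integrated across boundaries to keep the waveform continuous. The function `fm_segment_stimulus` and all parameter names are our own.

```python
import numpy as np

def fm_segment_stimulus(seg_dur_s, n_seg, fs=44100,
                        f_lo=400.0, f_hi=3000.0, rng=None):
    """Concatenate tone segments, each at a new random frequency.

    The segment duration sets the dominant time scale of the
    stimulus's temporal structure (e.g., ~25 ms vs. ~200 ms).
    """
    rng = np.random.default_rng(rng)
    seg_len = int(seg_dur_s * fs)
    freqs = rng.uniform(f_lo, f_hi, n_seg)
    inst_f = np.repeat(freqs, seg_len)          # piecewise-constant instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_f) / fs  # integrate frequency -> continuous phase
    return np.sin(phase)

# ~25 ms segments target the low-gamma time scale discussed above.
stim = fm_segment_stimulus(seg_dur_s=0.025, n_seg=40, rng=0)
```

Changing `seg_dur_s` to ~0.2 would instead place the temporal structure at the theta-band (syllabic) scale.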
Affiliation(s)
- Huan Luo
- State Key Laboratory of Brain and Cognitive Sciences, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
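The phase-tracking result above rests on measuring how consistently oscillatory phase aligns across trials at a given frequency. A minimal sketch of one standard metric for this, inter-trial phase coherence (ITPC), is below; the function name, sampling rate, and synthetic data are illustrative assumptions, not the study's actual MEG pipeline.

```python
import numpy as np

def itpc(trials, sfreq, freq):
    """Inter-trial phase coherence at a single frequency.

    trials : array (n_trials, n_samples), single-channel epochs
    sfreq  : sampling rate in Hz
    freq   : analysis frequency in Hz
    Returns a value in [0, 1]; 1 = identical phase on every trial.
    """
    n_samples = trials.shape[1]
    t = np.arange(n_samples) / sfreq
    # One complex Fourier coefficient per trial at the target frequency.
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeffs / np.abs(coeffs)   # keep phase only (unit vectors)
    return np.abs(phases.mean())       # resultant vector length = ITPC

# Synthetic demo: a 5 Hz "theta" component with consistent phase vs. noise.
rng = np.random.default_rng(0)
sfreq, n_trials, n_samples = 200, 50, 400
t = np.arange(n_samples) / sfreq
phase_locked = (np.sin(2 * np.pi * 5 * t)
                + 0.5 * rng.standard_normal((n_trials, n_samples)))
noise_only = rng.standard_normal((n_trials, n_samples))

print(itpc(phase_locked, sfreq, 5.0))  # high, near 1
print(itpc(noise_only, sfreq, 5.0))    # low, near 1/sqrt(n_trials)
```

Stimuli whose temporal structure matches the band (here, the 5 Hz component) yield ITPC near 1; non-matching structure leaves only the chance level, which shrinks with the number of trials.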
|
29
|
Wang Y, Ding N, Ahmar N, Xiang J, Poeppel D, Simon JZ. Sensitivity to temporal modulation rate and spectral bandwidth in the human auditory system: MEG evidence. J Neurophysiol 2011; 107:2033-41. [PMID: 21975451 DOI: 10.1152/jn.00310.2011] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Slow acoustic modulations below 20 Hz, of varying bandwidths, are dominant components of speech and many other natural sounds. The dynamic neural representations of these modulations are difficult to study through noninvasive neural-recording methods, however, because of the omnipresent background of slow neural oscillations throughout the brain. We recorded the auditory steady-state responses (aSSR) to slow amplitude modulations (AM) from 14 human subjects using magnetoencephalography. The responses to five AM rates (1.5, 3.5, 7.5, 15.5, and 31.5 Hz) and four types of carrier (pure tone and 1/3-, 2-, and 5-octave pink noise) were investigated. The phase-locked aSSR was detected reliably in all conditions. The response power generally decreases with increasing modulation rate, and the response latency is between 100 and 150 ms for all but the highest rates. Response properties depend only weakly on the bandwidth. Analysis of the complex-valued aSSR magnetic fields in the Fourier domain reveals several neural sources with different response phases. These neural sources of the aSSR, when approximated by a single equivalent current dipole (ECD), are distinct from and medial to the ECD location of the N1m response. These results demonstrate that the globally synchronized activity in the human auditory cortex is phase locked to slow temporal modulations below 30 Hz, and the neural sensitivity decreases with an increasing AM rate, with relative insensitivity to bandwidth.
Affiliation(s)
- Yadong Wang
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
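The aSSR detection described above amounts to averaging epochs (which suppresses activity not phase-locked to the stimulus) and then reading the Fourier coefficient at the modulation rate. A rough sketch on simulated data: the 3.5 Hz AM rate is one of the rates used in the study, but the sampling rate, epoch length, and signal-to-noise values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500
dur = 2.0                      # epoch length chosen so 3.5 Hz falls on a Fourier bin
t = np.arange(int(sfreq * dur)) / sfreq
am_rate = 3.5                  # one of the AM rates from the study
n_trials = 30

# Each "trial": a small response phase-locked at the AM rate, buried in noise.
trials = (0.2 * np.cos(2 * np.pi * am_rate * t)
          + rng.standard_normal((n_trials, t.size)))

evoked = trials.mean(axis=0)   # averaging attenuates non-phase-locked noise

spec = np.fft.rfft(evoked) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / sfreq)
peak_freq = freqs[np.argmax(np.abs(spec[1:])) + 1]   # skip the DC bin
print(peak_freq)   # expect 3.5
```

The complex coefficient at the modulation rate also carries the response phase, which is what the Fourier-domain source analysis in the abstract exploits to separate neural sources with different response phases.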
|