1
Muñoz V, Muñoz-Caracuel M, Angulo-Ruiz BY, Gómez CM. Neurovascular coupling during auditory stimulation: event-related potentials and fNIRS hemodynamic. Brain Struct Funct 2023; 228:1943-1961. [PMID: 37658858] [PMCID: PMC10517045] [DOI: 10.1007/s00429-023-02698-9]
Abstract
Intensity-dependent amplitude changes (IDAP) have been extensively studied using event-related potentials (ERPs) and have been linked to several psychiatric disorders. This study explores the application of functional near-infrared spectroscopy (fNIRS) to IDAP paradigms, which, when related to ERPs, could indicate neurovascular coupling. Thirty-three and thirty-one subjects participated in two experiments, respectively. The first experiment presented three tone intensities (77.9 dB, 84.5 dB, and 89.5 dB) lasting 500 ms, each randomly presented 54 times; the second presented five tone intensities (70.9 dB, 77.9 dB, 84.5 dB, 89.5 dB, and 94.5 dB) in trains of eight 70-ms tones, with each train presented 20 times. EEG was used to measure the ERP components N1, P2, and N1-P2 peak-to-peak amplitude. fNIRS allowed analysis of hemodynamic activity in the auditory, visual, and prefrontal cortices. The results showed an increase in N1, P2, and N1-P2 peak-to-peak amplitude with auditory intensity. Similarly, oxyhemoglobin and deoxyhemoglobin concentrations showed amplitude increases and decreases, respectively, with auditory intensity in the auditory and prefrontal cortices. Spearman correlation analysis showed a relationship between the left auditory cortex and N1 amplitude, and between the right dorsolateral cortex and P2 amplitude, specifically for deoxyhemoglobin concentrations. These findings suggest a brain response to auditory intensity changes that can be captured by both EEG and fNIRS, supporting the neurovascular coupling process. Overall, this study enhances our understanding of fNIRS application in auditory paradigms and highlights its potential as a complementary technique to ERPs.
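The key statistic here, a Spearman correlation between a hemodynamic measure and an ERP amplitude, is simply a rank correlation across subjects. A minimal numpy sketch on simulated per-subject values (the N1/HbR numbers below are invented toys, not the study's data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation as the Pearson correlation of ranks
    (no tie handling; adequate for continuous toy data)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical per-subject measures: deoxyhemoglobin falls monotonically,
# though nonlinearly, as N1 amplitude grows, plus measurement noise
rng = np.random.default_rng(3)
n1_amplitude = rng.uniform(1.0, 8.0, 33)            # ERP N1 amplitude, a.u.
hbr_change = -0.05 * n1_amplitude**1.5 + 0.02 * rng.standard_normal(33)

rho = spearman_rho(n1_amplitude, hbr_change)        # strongly negative
```

Because Spearman works on ranks, it captures this kind of monotonic but nonlinear coupling that a Pearson correlation would understate.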
Affiliation(s)
- Vanesa Muñoz: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Seville, Spain
- Manuel Muñoz-Caracuel: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Seville, Spain; Hospital Universitario Virgen del Rocio, Seville, Spain
- Brenda Y. Angulo-Ruiz: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Seville, Spain
- Carlos M. Gómez: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Seville, Spain
2
Muñoz V, Diaz-Sanchez JA, Muñoz-Caracuel M, Gómez CM. Head hemodynamics and systemic responses during auditory stimulation. Physiol Rep 2022; 10:e15372. [PMID: 35785451] [PMCID: PMC9251853] [DOI: 10.14814/phy2.15372]
Abstract
The present study aims to analyze the systemic response to auditory stimulation by means of hemodynamic (cephalic and peripheral) and autonomic responses across a broad range of auditory intensities (70.9, 77.9, 84.5, 89.5, 94.5 dBA). This approach could help clarify the possible influence of the autonomic nervous system on cephalic blood flow. Twenty-five subjects were exposed to auditory stimulation while electrodermal activity (EDA), photoplethysmography (PPG), electrocardiogram, and functional near-infrared spectroscopy signals were recorded. Seven trials of 20 individual tones were presented for each of the five intensities. The results showed a differentiated response to the highest intensity (94.5 dBA): a decrease in some peripheral signals, such as heart rate (HR), the pulse signal, and the pulse transit time (PTT); an increase in LFnu power of the PPG; and, at the head level, a decrease in oxygenated and total hemoglobin concentration. After regressing the visual-channel activity out of the auditory channels, a decrease in deoxyhemoglobin in the auditory cortex was obtained, indicating a likely active response at the highest intensity. Nevertheless, other measures, such as EDA (phasic and tonic) and heart rate variability (frequency and time domain), showed no significant differences between intensities. Altogether, these results suggest a systemic and complex response to high-intensity auditory stimuli. The decrease in PTT and the increase in LFnu power of the PPG suggest a possible vasoconstriction reflex driven by sympathetic control of vascular tone, which could be related to the decrease in blood oxygenation at the head level.
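Pulse transit time, one of the peripheral measures above, is the delay between each ECG R-peak and the arrival of the corresponding pulse wave at the PPG site. A sketch of the per-beat computation, assuming the event times have already been detected (the toy timings below are invented, not the study's pipeline):

```python
import numpy as np

def pulse_transit_time(r_peaks_s, ppg_feet_s):
    """For each ECG R-peak, find the first PPG pulse foot that follows it
    and return the per-beat transit times (seconds)."""
    ppg_feet_s = np.asarray(ppg_feet_s)
    ptts = []
    for r in r_peaks_s:
        later = ppg_feet_s[ppg_feet_s > r]
        if later.size:
            ptts.append(later[0] - r)
    return np.array(ptts)

# Toy event times: R-peaks once per second, pulse foot arriving 0.25 s later
r_peaks = np.arange(0.0, 5.0, 1.0)
ppg_feet = r_peaks + 0.25
ptt = pulse_transit_time(r_peaks, ppg_feet)
```

A shortening of these per-beat values across a stimulation block is the kind of PTT decrease the abstract reports, consistent with stiffer, vasoconstricted vessels.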
Affiliation(s)
- Vanesa Muñoz: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- José A. Diaz-Sanchez: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- Manuel Muñoz-Caracuel: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- Carlos M. Gómez: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
3
Abstract
OBJECTIVES: Functional near-infrared spectroscopy (fNIRS) is a brain imaging technique particularly suitable for hearing studies. However, the nature of fNIRS responses to auditory stimuli presented at different intensities is not well understood. In this study, we investigated whether fNIRS response amplitude was better predicted by a stimulus property (intensity) or an individually perceived attribute (loudness).
DESIGN: Twenty-two young adults were included in this experimental study. Four intensities of a broadband noise were used as stimuli. First, loudness estimates for each stimulus intensity were measured for each participant. Then, the four stimulation intensities were presented in counterbalanced order while recording hemoglobin saturation changes from cortical auditory brain areas. The fNIRS response was analyzed in a general linear model design, using three different regressors: a non-modulated, an intensity-modulated, and a loudness-modulated regressor.
RESULTS: Higher-intensity stimuli resulted in higher-amplitude fNIRS responses. The relationship between stimulus intensity and fNIRS response amplitude was better explained by a regressor based on individual loudness estimates than by a regressor modulated by stimulus intensity alone.
CONCLUSIONS: Brain activation in response to different stimulus intensities depends more on individual loudness sensation than on physical stimulus properties. Therefore, in measurements using different auditory stimulus intensities or subjective hearing parameters, loudness estimates should be examined when interpreting results.
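The modulated-regressor comparison described in the design can be sketched as follows: per-event weights (intensity or perceived loudness) scale a stick function, which is convolved with a hemodynamic response function, and the resulting regressors are compared by how well they fit a channel's time course. Everything below, the HRF shape, the dB levels, and the power-law loudness map, is an invented toy, not the study's actual design:

```python
import numpy as np

def hrf(t):
    """Crude canonical-HRF-like kernel with a late undershoot (illustration only)."""
    return t**5 * np.exp(-t) / 120.0 - 0.1 * t**15 * np.exp(-t) / 1.3e12

dt = 1.0                                   # 1-s sampling grid, hypothetical
t = np.arange(0, 200, dt)
onsets = np.arange(10, 190, 20)            # stimulus onsets, seconds
intensity = np.tile([40.0, 55.0, 70.0, 85.0], 3)[: len(onsets)]   # dB levels
loudness = (intensity / 40.0) ** 2         # toy nonlinear loudness mapping

def regressor(weights):
    """Stick function scaled by demeaned per-event weights, convolved with the HRF."""
    stick = np.zeros_like(t)
    stick[(onsets / dt).astype(int)] = weights - weights.mean()
    return np.convolve(stick, hrf(np.arange(0, 30, dt)))[: len(t)]

X_int, X_loud = regressor(intensity), regressor(loudness)

def r_squared(x, y):
    beta = np.linalg.lstsq(x[:, None], y, rcond=None)[0]
    return 1 - np.var(y - x[:, None] @ beta) / np.var(y)

# A channel whose response tracks loudness is fit better by the
# loudness-modulated regressor than by the intensity-modulated one
y = X_loud.copy()
```

Because loudness grows nonlinearly with intensity, the two regressors are correlated but not collinear, which is what makes the model comparison informative.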
4
Boos M, Lücke J, Rieger JW. Generalizable dimensions of human cortical auditory processing of speech in natural soundscapes: A data-driven ultra high field fMRI approach. Neuroimage 2021; 237:118106. [PMID: 33991696] [DOI: 10.1016/j.neuroimage.2021.118106]
Abstract
Speech comprehension in natural soundscapes rests on the ability of the auditory system to extract speech information from a complex acoustic signal with overlapping contributions from many sound sources. Here we reveal the canonical processing of speech in natural soundscapes on multiple scales by using data-driven models of sound characterization to analyze ultra-high-field fMRI recorded while participants listened to the audio soundtrack of a movie. We show that, at the functional level, the neuronal processing of speech in natural soundscapes can be surprisingly low-dimensional in the human cortex, highlighting the functional efficiency of the auditory system for a seemingly complex task. In particular, we find that a model comprising three functional dimensions of auditory processing in the temporal lobes is shared across participants' fMRI activity. We further demonstrate that the three functional dimensions are implemented in anatomically overlapping networks that process different aspects of speech in natural soundscapes: one is most sensitive to complex auditory features present in speech, another to complex auditory features and fast temporal modulations that are not specific to speech, and a third codes mainly sound level. These results were derived with few a priori assumptions and provide a detailed and computationally reproducible account of the cortical activity in the temporal lobe elicited by the processing of speech in natural soundscapes.
Affiliation(s)
- Moritz Boos: Applied Neurocognitive Psychology Lab, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
- Jörg Lücke: Machine Learning Division, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
- Jochem W Rieger: Applied Neurocognitive Psychology Lab, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", University of Oldenburg, Oldenburg, Germany
5
Uppenkamp S. Functional neuroimaging in hearing research and audiology. Z Med Phys 2021; 31:289-304. [PMID: 33947621] [DOI: 10.1016/j.zemedi.2021.03.003]
Abstract
The various methods of medical imaging are essential for many diagnostic issues in clinical routine, e.g., for the diagnosis and localisation of tumorous diseases or the clarification of other lesions in the central nervous system. Beyond these classical roles, both positron emission tomography (PET) and magnetic resonance imaging (MRI), when used in a specific way, allow for the investigation of functional processes in the human brain. The last 25 years have seen great progress, especially with respect to functional MRI, in the available experimental paradigms as well as in data analysis strategies, so that a directed investigation of the neurophysiological correlates of psychoacoustic performance is possible. This covers fundamental measures of sound perception like loudness and pitch, and specific audiological symptoms like tinnitus, which often accompanies hearing disorders, but it also includes experiments on speech perception and on virtual acoustic environments. One important aspect common to many auditory neuroimaging studies is the central question of at what stage in the human auditory pathway the sensory coding of the incoming sound is transformed into a universal and context-dependent perceptual representation, which is the basis for what we hear. This overview summarises findings from the literature as well as a few studies from our lab, to discuss the possibilities and limits of adopting functional neuroimaging methods in audiology. Up to now, most auditory neuroimaging studies have investigated basic processes in normal-hearing listeners. However, the results so far suggest that the methods of auditory functional neuroimaging, possibly complemented by electrophysiological methods like EEG and MEG, have great potential to contribute to a deeper understanding of the processes and impact of hearing disorders.
Affiliation(s)
- Stefan Uppenkamp: Medizinische Physik, Fakultät VI Medizin und Gesundheitswissenschaften, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
6
Hsieh IH, Yeh WT. The Interaction Between Timescale and Pitch Contour at Pre-attentive Processing of Frequency-Modulated Sweeps. Front Psychol 2021; 12:637289. [PMID: 33833720] [PMCID: PMC8021897] [DOI: 10.3389/fpsyg.2021.637289]
Abstract
Speech comprehension across languages depends on encoding the pitch variations in frequency-modulated (FM) sweeps at different timescales and frequency ranges. While the timescale and spectral contour of FM sweeps play important roles in differentiating acoustic speech units, relatively little work has been done to understand how these two acoustic dimensions interact in early cortical processing. An auditory oddball paradigm was employed to examine the interaction of timescale and pitch contour in pre-attentive processing of FM sweeps. Event-related potentials were recorded to frequency sweeps varying in linguistically relevant pitch contour (fundamental frequency F0 vs. first formant frequency F1) and timescale (local vs. global) in Mandarin Chinese. Mismatch negativities (MMNs) were elicited by all types of sweep deviants. For the local timescale, FM sweeps with F0 contours yielded larger MMN amplitudes than F1 contours; a reversed amplitude pattern with respect to F0/F1 contours was obtained for the global timescale. An interhemispheric asymmetry of MMN topography was observed for local- and global-timescale contours, with falling, but not rising, sweep contours eliciting right-hemispheric dominance in the difference waveforms. The results show that timescale and pitch contour interact in pre-attentive auditory processing of FM sweeps, and suggest that FM sweeps, a type of non-speech signal, are processed at an early stage with reference to their linguistic function. That this dynamic interaction between timescale and spectral pattern occurs during early cortical processing of non-speech frequency sweeps may be critical for facilitating speech encoding at a later stage.
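The MMN in such oddball paradigms is obtained by subtracting the averaged standard ERP from the averaged deviant ERP and taking the most negative deflection in an early post-stimulus window. A sketch on synthetic waveforms (the Gaussian ERP components, window, and sampling rate are all hypothetical):

```python
import numpy as np

fs = 500                                   # Hz, hypothetical sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)           # epoch: -100 to 500 ms

def gauss(t, mu, sigma, amp):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Toy averaged ERPs: the deviant carries an extra negativity near 180 ms
standard = gauss(t, 0.10, 0.02, -2.0) + gauss(t, 0.20, 0.03, 3.0)
deviant = standard + gauss(t, 0.18, 0.03, -1.5)

difference = deviant - standard            # MMN difference waveform
win = (t >= 0.10) & (t <= 0.25)            # typical MMN search window
mmn_amp = difference[win].min()            # most negative point in window
mmn_lat = t[win][np.argmin(difference[win])]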
Affiliation(s)
- I-Hui Hsieh: Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Wan-Ting Yeh: Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
7
Muñoz-Caracuel M, Muñoz V, Ruíz-Martínez FJ, Di Domenico D, Brigadoi S, Gómez CM. Multivariate analysis of the systemic response to auditory stimulation: An integrative approach. Exp Physiol 2021; 106:1072-1098. [PMID: 33624899] [DOI: 10.1113/ep089125]
Abstract
NEW FINDINGS: What is the central question of this study? Auditory stimulation produces responses in different physiological systems: cardiac, peripheral blood flow, electrodermal, cortical and peripheral haemodynamic responses, and auditory event-related potentials. Do all these subsystems covary when responding to auditory stimulation, suggesting a unified locus of control, or do they not, suggesting independent loci of control for these physiological responses? What is the main finding and its importance? Auditory sensory gating reached a fixed level of neural activity independently of the intensity of auditory stimulation, and multivariate techniques revealed different regulatory mechanisms for the different recorded physiological signals.
ABSTRACT: We studied the effects of increasing amplitude of auditory stimulation on a variety of autonomic and CNS responses and their possible interdependence. Subjects were stimulated with auditory tones of increasing amplitude while auditory event-related potentials (ERPs), cortical and extracerebral functional near-infrared spectroscopy (fNIRS) signals from standard and short-separation channels, the peripheral pulse measured by photoplethysmography, heart rate, and electrodermal responses were recorded. Trials of eight tones of equal amplitude were presented. The results showed a parallel increase of activity in ERPs, fNIRS, and peripheral responses with increasing intensity of auditory stimulation. The ERPs, measured as peak-to-peak N1-P2 amplitude, increased with auditory stimulation and showed strong attenuation from the first presentation with respect to the second to eighth presentations. Peripheral signals and standard and short-channel fNIRS responses decreased in amplitude in the high-intensity conditions. Principal components analysis showed independent sources of variance for the recorded signals, suggesting independent control of the recorded physiological responses. The present results suggest a complex response associated with increasing auditory stimulation, with a fixed amplitude for ERPs and a decrease in the peripheral and cortical haemodynamic responses, possibly mediated by activation of the sympathetic nervous system, constituting a defensive reflex to excessive auditory stimulation.
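A principal components analysis of this kind can be sketched as follows: trial-wise measures are z-scored and decomposed by SVD; variance concentrated in one component would point to a unified locus of control, whereas variance spread over several components indicates independent control. The four measures and two latent sources below are simulated stand-ins, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Toy trial-wise measures: two independent latent sources, one driving the
# "neural" measures and one the "autonomic" measures, plus sensor noise
neural_src = rng.standard_normal(n_trials)
autonomic_src = rng.standard_normal(n_trials)
data = np.column_stack([
    neural_src + 0.2 * rng.standard_normal(n_trials),     # e.g., N1-P2 amplitude
    neural_src + 0.2 * rng.standard_normal(n_trials),     # e.g., fNIRS HbO
    autonomic_src + 0.2 * rng.standard_normal(n_trials),  # e.g., heart rate
    autonomic_src + 0.2 * rng.standard_normal(n_trials),  # e.g., electrodermal
])

# PCA on z-scored measures via SVD; squared singular values give the
# proportion of variance explained by each component
z = (data - data.mean(0)) / data.std(0)
_, s, _ = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

With two independent sources, the variance splits roughly evenly over two components rather than loading onto a single one, which is the pattern the abstract interprets as independent regulatory mechanisms.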
Affiliation(s)
- Manuel Muñoz-Caracuel: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- Vanesa Muñoz: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- Francisco J Ruíz-Martínez: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
- Dalila Di Domenico: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain; Department of Developmental and Social Psychology, University of Padova, Via Venezia, Padova, Italy
- Sabrina Brigadoi: Department of Developmental and Social Psychology, University of Padova, Via Venezia, Padova, Italy; Department of Information Engineering, University of Padova, Via Gradenigo, Padova, Italy
- Carlos M Gómez: Human Psychobiology Laboratory, Experimental Psychology Department, University of Sevilla, Sevilla, Spain
8
Behler O, Uppenkamp S. Activation in human auditory cortex in relation to the loudness and unpleasantness of low-frequency and infrasound stimuli. PLoS One 2020; 15:e0229088. [PMID: 32084171] [PMCID: PMC7034801] [DOI: 10.1371/journal.pone.0229088]
Abstract
Low-frequency sound (LFS) and infrasound (IS) are controversially discussed as potential causes of annoyance and distress experienced by many people. However, the perception mechanisms for IS in the human auditory system are not yet completely understood. In the present study, sinusoids at 32 Hz (at the lower limit of melodic pitch for tonal stimulation), as well as 8 Hz (IS range), were presented to a group of 20 normal-hearing subjects, using monaural stimulation via a loudspeaker sound source coupled to the ear canal by a long silicone rubber tube. Each participant attended two experimental sessions. In the first session, participants performed a categorical loudness scaling procedure as well as an unpleasantness rating task in a sound booth. In the second session, the loudness scaling procedure was repeated while brain activation was measured using functional magnetic resonance imaging (fMRI). Subsequently, activation data were collected for the respective stimuli presented at fixed levels adjusted to the individual loudness judgments. Silent trials were included as a baseline condition. Our results indicate that the brain regions involved in processing LFS and IS are similar to those for sounds in the typical audio frequency range, i.e., mainly primary and secondary auditory cortex (AC). In spite of large variation across listeners in judgments of loudness and unpleasantness, neural correlates of these interindividual differences could not yet be identified. Still, for individual listeners, fMRI activation in the AC was more closely related to individual perception than to the physical stimulus level.
Affiliation(s)
- Oliver Behler: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Stefan Uppenkamp: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
9
Kuo PC, Tseng YL, Zilles K, Suen S, Eickhoff SB, Lee JD, Cheng PE, Liou M. Brain dynamics and connectivity networks under natural auditory stimulation. Neuroimage 2019; 202:116042. [PMID: 31344485] [DOI: 10.1016/j.neuroimage.2019.116042]
Abstract
Analysis of functional magnetic resonance imaging (fMRI) data is challenging when subjects are exposed to natural sensory stimulation. In this study, a two-stage approach was developed to identify connectivity networks involved in the processing of information in the brain under natural sensory stimulation. In the first stage, the degree of concordance between the results of inter-subject and intra-subject correlation analyses is assessed statistically. Microstructurally (i.e., cytoarchitectonically) defined brain areas are designated either as concordant, in which the results of both correlation analyses agree, or as discordant, in which one analysis shows a higher proportion of supra-threshold voxels than the other. In the second stage, connectivity networks are identified using the time courses of supra-threshold voxels in brain areas, contingent upon the classifications derived in the first stage. In an empirical study, fMRI data were collected from 40 young adults (19 males, average age 22.76 ± 3.25), who underwent auditory stimulation involving sound clips of human voices and animal vocalizations under two operational conditions (eyes-closed and eyes-open), designed to assess confounding effects due to auditory instructions or visual perception. The proposed two-stage analysis demonstrated that stress-modulation (affective) and language networks in the limbic and cortical structures were respectively engaged during sound stimulation and presented considerable variability among subjects. The network involved in regulating visuomotor control was sensitive to the eyes-open instruction and presented only small variation among subjects. A high degree of concordance between the two analyses was observed in the primary auditory cortex, which was highly sensitive to the pitch of the sound clips. These results indicate that brain areas can be classified as concordant or discordant on the basis of the two correlation analyses, which may further facilitate the search for connectivity networks involved in the processing of information under natural sensory stimulation.
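Inter-subject correlation, the first-stage analysis referred to above, is commonly computed leave-one-out: each subject's voxel time course is correlated with the average time course of all other subjects, so only stimulus-locked activity survives. A minimal sketch on simulated data (the shared sinusoidal "stimulus-driven" component and noise level are invented):

```python
import numpy as np

def intersubject_correlation(ts):
    """Leave-one-out ISC: correlate each subject's time course with the
    average of all the other subjects'. ts has shape (n_subjects, n_timepoints)."""
    ts = np.asarray(ts, dtype=float)
    n = ts.shape[0]
    iscs = []
    for i in range(n):
        others = ts[np.arange(n) != i].mean(axis=0)
        iscs.append(np.corrcoef(ts[i], others)[0, 1])
    return np.array(iscs)

rng = np.random.default_rng(2)
shared = np.sin(np.linspace(0, 8 * np.pi, 300))        # stimulus-driven signal
subjects = shared + 0.5 * rng.standard_normal((10, 300))  # plus subject noise
isc = intersubject_correlation(subjects)
```

Idiosyncratic (subject-specific) activity lowers these values, which is why comparing ISC with within-subject (intra-subject) correlation can flag areas where the two analyses disagree.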
Affiliation(s)
- Po-Chih Kuo: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Yi-Li Tseng: Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City, Taiwan
- Karl Zilles: Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Summit Suen: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Simon B Eickhoff: Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Juin-Der Lee: Graduate Institute of Business Administration, National Chengchi University, Taipei, Taiwan
- Philip E Cheng: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Michelle Liou: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
10
Adelhöfer N, Gohil K, Passow S, Beste C, Li SC. Lateral prefrontal anodal transcranial direct current stimulation augments resolution of auditory perceptual-attentional conflicts. Neuroimage 2019; 199:217-227. [DOI: 10.1016/j.neuroimage.2019.05.009]
11
Yakunina N, Tae WS, Kim SS, Nam EC. Functional MRI evidence of the cortico-olivary efferent pathway during active auditory target processing in humans. Hear Res 2019; 379:1-11. [DOI: 10.1016/j.heares.2019.04.010]
12
Crommett LE, Madala D, Yau JM. Multisensory perceptual interactions between higher-order temporal frequency signals. J Exp Psychol Gen 2018; 148:1124-1137. [PMID: 30335446] [DOI: 10.1037/xge0000513]
Abstract
Naturally occurring signals in audition and touch can be complex and marked by temporal variations in frequency and amplitude. Auditory frequency sweep processing has been studied extensively; however, much less is known about sweep processing in touch, because studies have primarily focused on the perception of simple sinusoidal vibrations. Given the extensive interactions between audition and touch in the frequency processing of pure tone signals, we reasoned that these senses might also interact in the processing of higher-order frequency representations like sweeps. In a series of psychophysical experiments, we characterized the influence of auditory distractors on participants' ability to discriminate tactile frequency sweeps. Auditory frequency sweeps systematically biased the tactile perception of sweep direction. Importantly, auditory cues exerted little influence on tactile sweep direction perception when the sounds and vibrations occupied different absolute frequency ranges or when the sounds consisted of intensity sweeps. Thus, audition and touch interact in frequency sweep perception in a frequency- and feature-specific manner. Our results demonstrate that audio-tactile interactions are not constrained to the processing of simple sinusoids. Because higher-order frequency representations may be synthesized from simpler representations, our findings imply that multisensory interactions in the temporal frequency domain span multiple hierarchical levels of sensory processing.
Affiliation(s)
- Deeksha Madala: Department of Biochemistry and Cell Biology, Rice University
- Jeffrey M Yau: Department of Neuroscience, Baylor College of Medicine
13
Weder S, Zhou X, Shoushtarian M, Innes-Brown H, McKay C. Cortical Processing Related to Intensity of a Modulated Noise Stimulus-a Functional Near-Infrared Study. J Assoc Res Otolaryngol 2018; 19:273-286. [PMID: 29633049] [PMCID: PMC5962476] [DOI: 10.1007/s10162-018-0661-0]
Abstract
Sound intensity is a key feature of auditory signals, so a profound understanding of its cortical processing is highly desirable. This study investigates whether cortical functional near-infrared spectroscopy (fNIRS) signals reflect sound intensity changes and where on the cortex maximal intensity-dependent activations are located. The fNIRS technique is particularly suitable for this kind of hearing study, as it runs silently. Twenty-three normal-hearing subjects were included and actively participated in a counterbalanced block-design task. Four intensity levels of a modulated noise stimulus with long-term spectrum and modulation characteristics similar to speech were applied, evenly spaced from 15 to 90 dB SPL. Signals from auditory processing cortical fields were derived from a montage of 16 optodes on each side of the head. Results showed that fNIRS responses originating from auditory processing areas are highly dependent on sound intensity: higher stimulation levels led to larger concentration changes. Caudal and rostral channels showed different waveform morphologies, reflecting specific cortical processing of the stimulus. Channels overlying the supramarginal and caudal superior temporal gyrus evoked a phasic response, whereas channels over Broca's area showed a broad tonic pattern. This data set can serve as a foundation for future auditory fNIRS research to develop the technique as a hearing assessment tool in normal-hearing and hearing-impaired populations.
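The concentration changes that fNIRS studies like this one report are derived from optical density changes at two wavelengths via the modified Beer-Lambert law. The sketch below inverts the two-wavelength system; the extinction coefficients, path length, and differential pathlength factor are round illustrative numbers, not tabulated values, and real analyses use calibrated tables:

```python
import numpy as np

# Modified Beer-Lambert law: dOD(wavelength) = ext(wavelength) . dC * d * DPF
# Illustrative extinction coefficients [HbO, HbR] at two wavelengths
# (order-of-magnitude stand-ins; use tabulated coefficients in practice)
ext = np.array([[1.4, 3.8],    # ~760 nm: [HbO, HbR]
                [2.5, 1.8]])   # ~850 nm: [HbO, HbR]
d = 3.0     # source-detector separation, cm (typical order)
dpf = 6.0   # differential pathlength factor, dimensionless

def mbll(d_od):
    """Invert the two-wavelength MBLL system for [dHbO, dHbR]."""
    return np.linalg.solve(ext * d * dpf, d_od)

# Toy optical-density changes consistent with an HbO rise and an HbR dip,
# the canonical activation pattern
true_dc = np.array([1e-3, -3e-4])          # [dHbO, dHbR]
d_od = ext @ true_dc * d * dpf
recovered = mbll(d_od)
```

The two wavelengths straddle the hemoglobin isosbestic point, which is what makes the 2x2 system well conditioned enough to separate HbO from HbR.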
Affiliation(s)
- Stefan Weder: The Bionics Institute, East Melbourne, Australia; Department of ENT, Head and Neck Surgery, Inselspital, Bern University Hospital, Bern, Switzerland
- Xin Zhou: The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Hamish Innes-Brown: The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Colette McKay: The Bionics Institute, East Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
14
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017; 355:81-96. [DOI: 10.1016/j.heares.2017.09.012]
15
Angenstein N, Brechmann A. Effect of sequential comparison on active processing of sound duration. Hum Brain Mapp 2017; 38:4459-4469. [PMID: 28580585] [DOI: 10.1002/hbm.23673]
Abstract
Previous studies of active duration processing of sounds have yielded opposing results regarding the predominant involvement of the left or right hemisphere. The duration of an acoustic event is normally judged relative to other sounds; this requires sequential comparison, as auditory events unfold over time. We hypothesized that increasing the demand on sequential comparison in a task increases the involvement of the left auditory cortex. With the current fMRI study, we investigated the effect of sequential comparison on active duration discrimination by comparing a categorical with a comparative task. During the categorical task, participants had to categorize tones according to their duration (short vs. long). During the comparative task, they had to decide for each tone whether its duration matched that of the preceding tone. We used the contralateral noise procedure to reveal the degree of participation of the left and right auditory cortex during these tasks. We found that both tasks involve the left auditory cortex more strongly than the right. Furthermore, the left auditory cortex was more strongly involved during comparison than during categorization. Together with previous studies, this suggests that additional demand for sequential comparison during the processing of different basic acoustic parameters leads to increased recruitment of the left auditory cortex. In addition, the comparison task more strongly involved several brain areas outside the auditory cortex, which may also be related to the demand for additional cognitive resources as compared to the more efficient categorization of sounds. Hum Brain Mapp 38:4459-4469, 2017. © 2017 Wiley Periodicals, Inc.
Affiliation(s)
- Nicole Angenstein: Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, 39118, Germany
- André Brechmann: Leibniz Institute for Neurobiology, Brenneckestr. 6, Magdeburg, 39118, Germany
16
Behler O, Uppenkamp S. The representation of level and loudness in the central auditory system for unilateral stimulation. Neuroimage 2016; 139:176-188. [PMID: 27318216 DOI: 10.1016/j.neuroimage.2016.06.025] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Received: 01/12/2016] [Revised: 05/24/2016] [Accepted: 06/14/2016] [Indexed: 10/21/2022]
Abstract
Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only a few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal hearing listeners. 4-kHz bandpass-filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates was analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex, as well as in certain stages of the ascending auditory pathway, might be a more direct linear reflection of perceived loudness than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and loudness.
Affiliation(s)
- Oliver Behler: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany.
- Stefan Uppenkamp: Medizinische Physik, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany; Cluster of Excellence Hearing4All, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany.
17
Behler O, Uppenkamp S. Auditory fMRI of Sound Intensity and Loudness for Unilateral Stimulation. Adv Exp Med Biol 2016; 894:165-174. [DOI: 10.1007/978-3-319-25474-6_18] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 12/14/2022]
18
Greenlee JDW, Behroozmand R, Nourski KV, Oya H, Kawasaki H, Howard MA. Using speech and electrocorticography to map human auditory cortex. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:6798-801. [PMID: 25571557 DOI: 10.1109/embc.2014.6945189] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Indexed: 11/07/2022]
Abstract
Much less is known about the organization of the human auditory cortex than about that of non-human primates. In an effort to further investigate the response properties of human auditory cortex, we present preliminary findings from human subjects implanted with depth electrodes in Heschl's gyrus (HG) as part of their neurosurgical treatment of epilepsy. In each subject, electrocorticography (ECoG) responses were recorded from medial and lateral HG in response to both speech and non-speech stimuli, including during speech production. Responses were somewhat variable across subjects, but posteromedial HG demonstrated frequency-following responses to the stimuli in all subjects to some degree. Results and implications are discussed.
19
Auditory intensity processing: Categorization versus comparison. Neuroimage 2015; 119:362-70. [DOI: 10.1016/j.neuroimage.2015.06.074] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Received: 02/18/2015] [Revised: 06/23/2015] [Accepted: 06/25/2015] [Indexed: 11/18/2022]
20
Schreiner CE, Malone BJ. Representation of loudness in the auditory cortex. Handb Clin Neurol 2015; 129:73-84. [PMID: 25726263 DOI: 10.1016/b978-0-444-62630-1.00004-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Indexed: 12/05/2022]
Abstract
Changes in stimulus intensity are reflected in changes in the fundamental perceptual attribute of loudness. Stimulus intensity changes also profoundly impact the evoked neural responses throughout the auditory system. A fundamental question is how measurements of neural activity, from the single-neuron level to mass-activity metrics such as functional magnetic resonance imaging or magnetoencephalography, reflect the physical properties of stimulus intensity as opposed to perceived loudness. In this chapter we discuss findings from psychophysics and animal neurophysiology as well as human brain activity measurements to clarify our current understanding of the neural mechanisms that contribute to the perceptual correlate of stimulus intensity.
Affiliation(s)
- Christoph E Schreiner: Center for Integrative Neuroscience and Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA.
- Brian J Malone: Center for Integrative Neuroscience and Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA.
21
Wyss C, Boers F, Kawohl W, Arrubla J, Vahedipour K, Dammers J, Neuner I, Shah N. Spatiotemporal properties of auditory intensity processing in multisensor MEG. Neuroimage 2014; 102 Pt 2:465-73. [DOI: 10.1016/j.neuroimage.2014.08.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Received: 05/13/2014] [Revised: 07/26/2014] [Accepted: 08/05/2014] [Indexed: 12/27/2022]
22
Potes C, Brunner P, Gunduz A, Knight RT, Schalk G. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing. Neuroimage 2014; 97:188-95. [PMID: 24768933 PMCID: PMC4065821 DOI: 10.1016/j.neuroimage.2014.04.045] [Citation(s) in RCA: 65] [Impact Index Per Article: 6.5] [Received: 10/17/2013] [Revised: 03/22/2014] [Accepted: 04/13/2014] [Indexed: 11/24/2022]
Abstract
Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography (ECoG)) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex.
Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing.
Affiliation(s)
- Cristhian Potes: BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA.
- Peter Brunner: BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Computer Science, Graz University of Technology, Graz, Austria.
- Aysegul Gunduz: BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA; J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA.
- Robert T Knight: Department of Psychology, University of California at Berkeley, CA, USA.
- Gerwin Schalk: BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA; Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Biomedical Science, State University of NY at Albany, Albany, NY, USA.
23
Langers DRM, Krumbholz K, Bowtell RW, Hall DA. Neuroimaging paradigms for tonotopic mapping (I): the influence of sound stimulus type. Neuroimage 2014; 100:650-62. [PMID: 25069046 PMCID: PMC5548253 DOI: 10.1016/j.neuroimage.2014.07.044] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Received: 04/12/2014] [Revised: 07/18/2014] [Accepted: 07/21/2014] [Indexed: 11/16/2022]
Abstract
Although a consensus is emerging in the literature regarding the tonotopic organisation of auditory cortex in humans, previous studies employed a vast array of different neuroimaging protocols. In the present functional magnetic resonance imaging (fMRI) study, we made a systematic comparison between stimulus protocols involving jittered tone sequences with either a narrowband, broadband, or sweep character in order to evaluate their suitability for the purpose of tonotopic mapping. Data-driven analysis techniques were used to identify cortical maps related to sound-evoked activation and tonotopic frequency tuning. Principal component analysis (PCA) was used to extract the dominant response patterns in each of the three protocols separately, and generalised canonical correlation analysis (CCA) to assess the commonalities between protocols. Generally speaking, all three types of stimuli evoked similarly distributed response patterns and resulted in qualitatively similar tonotopic maps. However, quantitatively, we found that broadband stimuli are most efficient at evoking responses in auditory cortex, whereas narrowband and sweep stimuli offer the best sensitivity to differences in frequency tuning. Based on these results, we make several recommendations regarding optimal stimulus protocols, and conclude that an experimental design based on narrowband stimuli provides the best sensitivity to frequency-dependent responses to determine tonotopic maps. We forward that the resulting protocol is suitable to act as a localiser of tonotopic cortical fields in individuals, or to make quantitative comparisons between maps in dedicated tonotopic mapping studies.
Affiliation(s)
- Dave R M Langers: National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, University of Nottingham, Nottingham, UK; Otology and Hearing group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK.
- Richard W Bowtell: Sir Peter Mansfield Magnetic Resonance Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, UK.
- Deborah A Hall: National Institute for Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, University of Nottingham, Nottingham, UK; Otology and Hearing group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK.
24
Bethmann A, Brechmann A. On the definition and interpretation of voice selective activation in the temporal cortex. Front Hum Neurosci 2014; 8:499. [PMID: 25071527 PMCID: PMC4086026 DOI: 10.3389/fnhum.2014.00499] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Received: 03/04/2014] [Accepted: 06/19/2014] [Indexed: 11/15/2022]
Abstract
Regions along the superior temporal sulci and in the anterior temporal lobes have been found to be involved in voice processing. It has even been argued that parts of the temporal cortices serve as voice-selective areas. Yet, evidence for voice-selective activation in the strict sense is still missing. The current fMRI study aimed at assessing the degree of voice-specific processing in different parts of the superior and middle temporal cortices. To this end, voices of famous persons were contrasted with widely different categories, which were sounds of animals and musical instruments. The argumentation was that only brain regions with statistically proven absence of activation by the control stimuli may be considered as candidates for voice-selective areas. Neural activity was found to be stronger in response to human voices in all analyzed parts of the temporal lobes except for the middle and posterior STG. More importantly, the activation differences between voices and the other environmental sounds increased continuously from the mid-posterior STG to the anterior MTG. Here, only voices but not the control stimuli excited an increase of the BOLD response above a resting baseline level. The findings are discussed with reference to the function of the anterior temporal lobes in person recognition and the general question on how to define selectivity of brain regions for a specific class of stimuli or tasks. In addition, our results corroborate recent assumptions about the hierarchical organization of auditory processing building on a processing stream from the primary auditory cortices to anterior portions of the temporal lobes.
Affiliation(s)
- Anja Bethmann: Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- André Brechmann: Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
25
Angenstein N, Brechmann A. Division of labor between left and right human auditory cortices during the processing of intensity and duration. Neuroimage 2013; 83:1-11. [DOI: 10.1016/j.neuroimage.2013.06.071] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Received: 03/04/2013] [Revised: 06/07/2013] [Accepted: 06/25/2013] [Indexed: 10/26/2022]
26
Uppenkamp S, Röhl M. Human auditory neuroimaging of intensity and loudness. Hear Res 2013; 307:65-73. [PMID: 23973563 DOI: 10.1016/j.heares.2013.08.005] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Received: 06/01/2013] [Revised: 08/09/2013] [Accepted: 08/12/2013] [Indexed: 11/30/2022]
Abstract
The physical intensity of a sound, usually expressed in dB on a logarithmic ratio scale, can easily be measured using technical equipment. Loudness is the perceptual correlate of sound intensity, and is usually determined by means of some sort of psychophysical scaling procedure. The interrelation of sound intensity and perceived loudness is still a matter of debate, and the physiological correlate of loudness perception in the human auditory pathway is not completely understood. Various studies indicate that the activation in human auditory cortex is more a representation of loudness sensation than of physical sound pressure level. This raises the questions of (1) at what stage or stages in the ascending auditory pathway the transformation of the physical stimulus into its perceptual correlate is completed, and (2) to what extent other factors affecting individual loudness judgements might modulate the brain activation registered by auditory neuroimaging. An overview is given of recent studies on the effects of sound intensity, duration, bandwidth, and individual hearing status on the activation in the human auditory system, as measured by various approaches in auditory neuroimaging. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Stefan Uppenkamp: Medizinische Physik, Carl von Ossietzky Universität, 26111 Oldenburg, Germany.
27
Altmann CF, Gaese BH. Representation of frequency-modulated sounds in the human brain. Hear Res 2013; 307:74-85. [PMID: 23933098 DOI: 10.1016/j.heares.2013.07.018] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Received: 05/10/2013] [Revised: 07/26/2013] [Accepted: 07/27/2013] [Indexed: 10/26/2022]
Abstract
Frequency-modulation is a ubiquitous sound feature present in communicative sounds of various animal species and humans. Functional imaging of the human auditory system has seen remarkable advances in the last two decades and studies pertaining to frequency-modulation have centered around two major questions: a) are there dedicated feature-detectors encoding frequency-modulation in the brain and b) is there concurrent representation with amplitude-modulation, another temporal sound feature? In this review, we first describe how these two questions are motivated by psychophysical studies and neurophysiology in animal models. We then review how human non-invasive neuroimaging studies have furthered our understanding of the representation of frequency-modulated sounds in the brain. Finally, we conclude with some suggestions on how human neuroimaging could be used in future studies to address currently still open questions on this fundamental sound feature. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Christian F Altmann: Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Career-Path Promotion Unit for Young Life Scientists, Kyoto University, Kyoto 606-8501, Japan.
28
Angenstein N, Brechmann A. Left auditory cortex is involved in pairwise comparisons of the direction of frequency modulated tones. Front Neurosci 2013; 7:115. [PMID: 23847464 PMCID: PMC3705175 DOI: 10.3389/fnins.2013.00115] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Received: 03/05/2013] [Accepted: 06/18/2013] [Indexed: 11/13/2022]
Abstract
Evaluating series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI) study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex (AC) of humans. For this we used the active categorization of the direction (up vs. down) of slow frequency modulated (FM) tones. Several studies suggest that this task is mainly processed in the right AC. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask the question whether the right lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right AC in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were required to perform, in addition, a sequential comparison of the FM direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right AC whereas the sequential comparison of this feature between tones in a pair is probably performed in the left AC.
Affiliation(s)
- Nicole Angenstein: Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
29
Langers DRM. Assessment of tonotopically organised subdivisions in human auditory cortex using volumetric and surface-based cortical alignments. Hum Brain Mapp 2013; 35:1544-61. [PMID: 23633425 PMCID: PMC6868999 DOI: 10.1002/hbm.22272] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Received: 06/01/2012] [Revised: 12/06/2012] [Accepted: 01/17/2013] [Indexed: 11/13/2022]
Abstract
Although orderly representations of sound frequency in the brain play a guiding role in the investigation of auditory processing, a rigorous statistical evaluation of cortical tonotopic maps has so far hardly been attempted. In this report, the group‐level significance of local tonotopic gradients was assessed using mass‐multivariate statistics. The existence of multiple fields on the superior surface of the temporal lobe in both hemispheres was shown. These fields were distinguishable on the basis of tonotopic gradient direction and may likely be identified with the human homologues of the core areas AI and R in primates. Moreover, an objective comparison was made between the usage of volumetric and surface‐based registration methods. Although the surface‐based method resulted in a better registration across subjects of the grey matter segment as a whole, the alignment of functional subdivisions within the cortical sheet did not appear to improve over volumetric methods. This suggests that the variable relationship between the structural and the functional characteristics of auditory cortex is a limiting factor that cannot be overcome by morphology‐based registration techniques alone. Finally, to illustrate how the proposed approach may be used in clinical practice, the method was used to test for focal differences regarding the tonotopic arrangements in healthy controls and tinnitus patients. No significant differences were observed, suggesting that tinnitus does not necessarily require tonotopic reorganisation to occur. Hum Brain Mapp 35:1544–1561, 2014. © 2013 Wiley Periodicals, Inc.
Affiliation(s)
- Dave R M Langers: National Institute for Health Research Nottingham Hearing Biomedical Research Unit, School of Clinical Sciences, University of Nottingham, Queen's Medical Centre, Nottingham, United Kingdom; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
30
Oh J, Kwon JH, Yang PS, Jeong J. Auditory Imagery Modulates Frequency-specific Areas in the Human Auditory Cortex. J Cogn Neurosci 2013; 25:175-87. [DOI: 10.1162/jocn_a_00280] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Indexed: 11/04/2022]
Abstract
Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, we found that A1 is more closely related to the observed frequency-related modulation than R in tonotopic subfields of the PAC. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, which is comparable to early visual systems.
Affiliation(s)
- Po Song Yang: The Catholic University of Korea; Daejeon St. Mary's Hospital
31
Perspective of functional magnetic resonance imaging in middle ear research. Hear Res 2013; 301:183-92. [PMID: 23291496 DOI: 10.1016/j.heares.2012.12.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 09/06/2012] [Revised: 11/26/2012] [Accepted: 12/19/2012] [Indexed: 11/20/2022]
Abstract
Functional magnetic resonance imaging (MRI) studies have frequently been applied to study sensory systems such as vision, language, and cognition, but have proceeded considerably more slowly in investigating middle ear and central auditory processing. This is due to several factors, including the intrinsic anatomy of the middle ear system and the inherent acoustic noise during acquisition of MRI data. However, accumulating evidence has demonstrated that some fundamental neural underpinnings of audition associated with middle ear mechanics can be clarified using functional MRI methods. This mini review takes a narrow snapshot of the currently available functional MRI procedures and gives examples of what may be learned about hearing from their application. It is hoped that, with these technical advancements, many new high-impact applications in audition will follow. In particular, because fMRI can be used in humans and in animals, it may represent a unique tool for promoting translational research by enabling parallel analyses of physiological and pathological processes in the human and animal auditory system. This article is part of a special issue entitled "MEMRO 2012".
32
Functional magnetic resonance imaging of sound pressure level encoding in the rat central auditory system. Neuroimage 2012; 65:119-26. [PMID: 23041525 DOI: 10.1016/j.neuroimage.2012.09.069] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Received: 08/01/2012] [Revised: 09/27/2012] [Accepted: 09/28/2012] [Indexed: 01/23/2023]
Abstract
Intensity is an important physical property of a sound wave and is customarily reported as sound pressure level (SPL). Invasive techniques such as electrical recordings, which typically examine one brain region at a time, have been used to study neuronal encoding of SPL throughout the central auditory system. Non-invasive functional magnetic resonance imaging (fMRI) with a large field of view can simultaneously examine multiple auditory structures. We applied fMRI to measure the hemodynamic responses in the rat brain during sound stimulation at seven SPLs over a 72 dB range. This study used a sparse temporal sampling paradigm to reduce the adverse effects of scanner noise. Hemodynamic responses were measured from the central nucleus of the inferior colliculus (CIC), external cortex of the inferior colliculus (ECIC), lateral lemniscus (LL), medial geniculate body (MGB), and auditory cortex (AC). BOLD signal changes generally increase significantly (p<0.001) with SPL, and the dependence is monotonic in the CIC, ECIC, and LL. The ECIC has higher BOLD signal change than the CIC and LL at high SPLs. The difference between BOLD signal changes at high and low SPLs is smaller in the MGB and AC. This suggests that the SPL dependences of the LL and IC differ from those of the MGB and AC, and that the SPL dependence of the CIC differs from that of the ECIC. These observations are likely related to earlier observations that neurons with firing rates that increase monotonically with SPL are dominant in the CIC, ECIC, and LL, while non-monotonic neurons are dominant in the MGB and AC. Further, the IC's SPL dependence measured in this study is very similar to that measured in our earlier study using the continuous imaging method. Therefore, sparse temporal sampling may not be a prerequisite in auditory fMRI studies of the IC.
|
33
|
Dykstra AR, Koh CK, Braida LD, Tramo MJ. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex. PLoS One 2012; 7:e44602. [PMID: 22957087 PMCID: PMC3434164 DOI: 10.1371/journal.pone.0044602] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Received: 11/19/2011] [Accepted: 08/09/2012] [Indexed: 12/04/2022]
Abstract
It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.
Affiliation(s)
- Andrew R Dykstra
- Program in Speech and Hearing Biosciences and Technology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, United States of America.
|
34
|
Potes C, Gunduz A, Brunner P, Schalk G. Dynamics of electrocorticographic (ECoG) activity in human temporal and frontal cortical areas during music listening. Neuroimage 2012; 61:841-8. [PMID: 22537600 PMCID: PMC3376242 DOI: 10.1016/j.neuroimage.2012.04.022] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Received: 12/06/2011] [Revised: 03/06/2012] [Accepted: 04/07/2012] [Indexed: 10/28/2022]
Abstract
Previous studies demonstrated that brain signals encode information about specific features of simple auditory stimuli or of general aspects of natural auditory stimuli. How brain signals represent the time course of specific features in natural auditory stimuli is not well understood. In this study, we show in eight human subjects that signals recorded from the surface of the brain (electrocorticography (ECoG)) encode information about the sound intensity of music. ECoG activity in the high gamma band recorded from the posterior part of the superior temporal gyrus as well as from an isolated area in the precentral gyrus was observed to be highly correlated with the sound intensity of music. These results not only confirm the role of auditory cortices in auditory processing but also point to an important role of premotor and motor cortices. They also encourage the use of ECoG activity to study more complex acoustic features of simple or natural auditory stimuli.
Affiliation(s)
- Cristhian Potes
- BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA
- Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA
- Aysegul Gunduz
- BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Peter Brunner
- BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Department of Computer Science, Graz University of Technology, Graz, Austria
- Gerwin Schalk
- BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY, USA
- Department of Electrical and Computer Engineering, University of Texas at El Paso, TX, USA
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Department of Biomedical Science, School of Public Health, State University of New York at Albany, Albany, NY, USA
|
35
|
Giordano BL, McAdams S, Zatorre RJ, Kriegeskorte N, Belin P. Abstract encoding of auditory objects in cortical activity patterns. Cereb Cortex 2012; 23:2025-37. [PMID: 22802575 DOI: 10.1093/cercor/bhs162] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.0] [Indexed: 11/14/2022]
Abstract
The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.
Affiliation(s)
- Bruno L Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK.
|
36
|
Interaction between bottom-up and top-down effects during the processing of pitch intervals in sequences of spoken and sung syllables. Neuroimage 2012; 61:715-22. [PMID: 22503936 DOI: 10.1016/j.neuroimage.2012.03.086] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Received: 11/02/2011] [Revised: 03/14/2012] [Accepted: 03/29/2012] [Indexed: 11/21/2022]
Abstract
The processing of pitch intervals may be differentially influenced when musical or speech stimuli carry the pitch information. Most insights into the neural basis of pitch interval processing come from studies on music perception. However, music, in contrast to speech, contains a stable set of pitch intervals. To bring the investigation of pitch interval processing in music and speech together, we used sequences of the same spoken or sung syllables. The pitch of these syllables varied either by semitone steps, as in music, or by smaller intervals. Participants had to differentiate the sequences according to their different sizes of pitch intervals or to the direction of the last frequency step in the sequence. The results depended strongly on the specific task demands. Whereas the interval-size task itself recruited more regions in a right-lateralized fronto-parietal brain network, stronger activity for semitone than for non-semitone sequences was found in the left hemisphere (mainly in frontal cortex) during this task. These effects were also influenced by the speech mode (spoken or sung syllables). Our findings suggest that the processing of pitch intervals in sequences of syllables depends on an interaction between bottom-up (speech mode, pitch interval) and top-down effects (task).
|
37
|
Disinhibited feedback as a cause of synesthesia: evidence from a functional connectivity study on auditory-visual synesthetes. Neuropsychologia 2012; 50:1471-7. [PMID: 22414594 DOI: 10.1016/j.neuropsychologia.2012.02.032] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Received: 11/24/2011] [Revised: 02/26/2012] [Accepted: 02/27/2012] [Indexed: 11/21/2022]
Abstract
In synesthesia, certain stimuli to one sensory modality lead to sensory perception in another unstimulated modality. In addition to other models, a two-stage model is discussed to explain this phenomenon, which combines two previously formulated hypotheses regarding synesthesia: direct cross-activation and hyperbinding. The direct cross-activation model postulates that direct connections between sensory-specific areas are responsible for co-activation and synesthetic perception. The hyperbinding hypothesis suggests that the inducing stimulus and the synesthetic sensation are coupled by a sensory nexus area, which may be located in the parietal cortex. This latter hypothesis is compatible with the disinhibited feedback model, which suggests unusual feedback from multimodal convergence areas as the cause of synesthesia. In this study, the relevance of these models was tested in a group (n=14) of auditory-visual synesthetes by performing a functional connectivity analysis on functional magnetic resonance imaging (fMRI) data. Different simple and complex sounds were used as stimuli, and functionally defined seed areas in the bilateral auditory cortex (AC) and the left inferior parietal cortex (IPC) were used for the connectivity calculations. We found no differences in the connectivity of the AC and the visual areas between synesthetes and controls. The main finding of the study was stronger connectivity of the left IPC with the left primary auditory and right primary visual cortex in the group of auditory-visual synesthetes. The results support the model of disinhibited feedback as a cause of synesthetic perception but do not suggest direct cross-activation.
|
38
|
Neural coding of sound intensity and loudness in the human auditory system. J Assoc Res Otolaryngol 2012; 13:369-79. [PMID: 22354617 DOI: 10.1007/s10162-012-0315-6] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.4] [Received: 06/27/2010] [Accepted: 01/30/2012] [Indexed: 10/28/2022]
Abstract
Inter-individual differences in loudness sensation of 45 young normal-hearing participants were employed to investigate how and at what stage of the auditory pathway perceived loudness, the perceptual correlate of sound intensity, is transformed into neural activation. Loudness sensation was assessed by categorical loudness scaling, a psychoacoustical scaling procedure, whereas neural activation in the auditory cortex, inferior colliculi, and medial geniculate bodies was investigated with functional magnetic resonance imaging (fMRI). We observed an almost linear increase of perceived loudness and percent signal change from baseline (PSC) in all examined stages of the upper auditory pathway. Across individuals, the slope of the underlying growth function for perceived loudness was significantly correlated with the slope of the growth function for the PSC in the auditory cortex, but not in subcortical structures. In conclusion, the fMRI correlate of neural activity in the auditory cortex as measured by the blood oxygen level-dependent effect appears to be more a linear reflection of subjective loudness sensation rather than a display of physical sound pressure level, as measured using a sound-level meter.
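The per-individual "slope of the growth function" described above can be illustrated with a toy computation. The sketch below estimates a least-squares slope for loudness and for percent signal change against stimulus level; all numbers are synthetic, not taken from the study:

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x (closed-form estimate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den


# One individual's growth functions over stimulus level (dB); values synthetic:
levels = [40, 55, 70, 85]
loudness = [5, 15, 25, 35]      # categorical loudness units
psc = [0.20, 0.45, 0.70, 0.95]  # BOLD percent signal change

loudness_slope = slope(levels, loudness)  # loudness units per dB
psc_slope = slope(levels, psc)            # PSC per dB
```

Across individuals, the study then correlates such per-subject loudness slopes with per-subject PSC slopes per brain region; the sketch only shows how a single growth-function slope is obtained.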
|
39
|
Langers DRM, de Kleine E, van Dijk P. Tinnitus does not require macroscopic tonotopic map reorganization. Front Syst Neurosci 2012; 6:2. [PMID: 22347171 PMCID: PMC3269775 DOI: 10.3389/fnsys.2012.00002] [Citation(s) in RCA: 112] [Impact Index Per Article: 9.3] [Received: 12/20/2011] [Accepted: 01/16/2012] [Indexed: 01/12/2023]
Abstract
The pathophysiology underlying tinnitus, a hearing disorder characterized by the chronic perception of phantom sound, has been related to aberrant plastic reorganization of the central auditory system. More specifically, tinnitus is thought to involve changes in the tonotopic representation of sound. In the present study we used high-resolution functional magnetic resonance imaging to determine tonotopic maps in the auditory cortex of 20 patients with tinnitus but otherwise near-normal hearing, and compared these to equivalent outcomes from 20 healthy controls with matched hearing thresholds. Using a dedicated experimental paradigm and data-driven analysis techniques, multiple tonotopic gradients could be robustly distinguished in both hemispheres, arranged in a pattern consistent with previous findings. Yet, maps were not found to significantly differ between the two groups in any way. In particular, we found no evidence for an overrepresentation of high sound frequencies, matching the tinnitus pitch. A significant difference in evoked response magnitude was found near the low-frequency tonotopic endpoint on the lateral extreme of left Heschl's gyrus. Our results suggest that macroscopic tonotopic reorganization in the auditory cortex is not required for the emergence of tinnitus, and is not typical for tinnitus that accompanies normal hearing to mild hearing loss.
Affiliation(s)
- Dave R M Langers
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen Groningen, Netherlands
|
40
|
Razak KA. Mechanisms underlying intensity-dependent changes in cortical selectivity for frequency-modulated sweeps. J Neurophysiol 2012; 107:2202-11. [PMID: 22279192 DOI: 10.1152/jn.00922.2011] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Indexed: 11/22/2022]
Abstract
Frequency-modulated (FM) sweeps are common components of species-specific vocalizations. The intensity of FM sweeps can cover a wide range in the natural environment, but whether intensity affects neural selectivity for FM sweeps is unclear. Bats, such as the pallid bat, which use FM sweeps for echolocation, are suited to address this issue, because the intensity of echoes will vary with target distance. In this study, FM sweep rate selectivity of pallid bat auditory cortex neurons was measured using downward sweeps at different intensities. Neurons became more selective for FM sweep rates present in the bat's echolocation calls as intensity increased. Increased selectivity resulted from stronger inhibition of responses to slower sweep rates. The timing and bandwidth of inhibition generated by frequencies on the high side of the excitatory tuning curve [sideband high-frequency inhibition (HFI)] shape rate selectivity in cortical neurons in the pallid bat. To determine whether intensity-dependent changes in FM rate selectivity were due to altered inhibition, the timing and bandwidth of HFI were quantified at multiple intensities using the two-tone inhibition paradigm. HFI arrived faster relative to excitation as sound intensity increased. The bandwidth of HFI also increased with intensity. The changes in HFI predicted intensity-dependent changes in FM rate selectivity. These data suggest that neural selectivity for a sweep parameter is not static but shifts with intensity due to changes in properties of sideband inhibition.
Affiliation(s)
- K A Razak
- Dept. of Psychology, Graduate Neuroscience Program, Univ. of California, Riverside, CA 92521, USA.
|
41
|
Liem F, Lutz K, Luechinger R, Jäncke L, Meyer M. Reducing the interval between volume acquisitions improves "sparse" scanning protocols in event-related auditory fMRI. Brain Topogr 2011; 25:182-93. [PMID: 22015572 DOI: 10.1007/s10548-011-0206-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Received: 07/12/2011] [Accepted: 10/07/2011] [Indexed: 10/16/2022]
Abstract
Sparse and clustered-sparse temporal sampling fMRI protocols have been devised to reduce the influence of auditory scanner noise in the context of auditory fMRI studies. Here, we report an improvement of the previously established clustered-sparse acquisition scheme. The standard procedure currently used by many researchers in the field is a scanning protocol that includes relatively long silent pauses between image acquisitions (and therefore, a relatively long repetition time or cluster-onset asynchrony); it is during these pauses that stimuli are presented. This approach makes it unlikely that the stimulus-induced BOLD response is obscured by the scanner-noise-induced BOLD response. It also allows the BOLD response to drop near baseline, thus avoiding saturation of the BOLD signal and theoretically increasing effect size. A possible drawback of this approach is the limited number of stimulus presentations and image acquisitions that are possible in a given period of time, which could result in an inaccurate estimation of effect size (higher standard error). Since this line of reasoning has not yet been empirically tested, we decided to vary the cluster-onset asynchrony (7.5, 10, 12.5, and 15 s) in the context of a clustered-sparse protocol. In this study, sixteen healthy participants listened to spoken sentences. We performed whole-brain fMRI group statistics and region of interest analysis with anatomically defined regions of interest (auditory core and association areas). We discovered that the protocol with the shortest cluster-onset asynchrony (7.5 s) yielded more advantageous results than the protocols with longer cluster-onset asynchronies. The short cluster-onset asynchrony protocol exhibited a larger number of activated voxels and larger mean effect sizes with lower standard errors.
Our findings suggest that, contrary to prior experience, a short cluster-onset asynchrony is advantageous because more stimuli can be delivered within any given period of time. Alternatively, a given number of stimuli can be presented in less time, and this broadens the spectrum of possible fMRI applications.
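The trade-off being tested is simple arithmetic: with one clustered acquisition per cluster-onset asynchrony (COA), shortening the COA packs more trials into a fixed run. A sketch over the four COAs compared in the study (the 10-minute run length is our illustrative assumption, not a parameter from the paper):

```python
def n_acquisitions(run_s: float, coa_s: float) -> int:
    """Number of clustered acquisitions (one stimulus trial each) that fit
    into a run, assuming one volume cluster per cluster-onset asynchrony."""
    return int(run_s // coa_s)


# Trials per 10-minute (600 s) run for each COA compared in the study:
trials = {coa: n_acquisitions(600, coa) for coa in (7.5, 10, 12.5, 15)}
```

Halving the COA from 15 s to 7.5 s doubles the number of stimulus presentations per run, which is the basis of the standard-error argument above.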
Affiliation(s)
- Franziskus Liem
- Division Neuropsychology, Institute of Psychology, University of Zurich, Switzerland.
|
42
|
Schaefer RS, Farquhar J, Blokland Y, Sadakata M, Desain P. Name that tune: Decoding music from the listening brain. Neuroimage 2011; 56:843-9. [PMID: 20541612 DOI: 10.1016/j.neuroimage.2010.05.084] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.5] [Received: 11/30/2009] [Revised: 02/28/2010] [Accepted: 05/31/2010] [Indexed: 10/19/2022]
|
43
|
Blackman GA, Hall DA. Reducing the effects of background noise during auditory functional magnetic resonance imaging of speech processing: qualitative and quantitative comparisons between two image acquisition schemes and noise cancellation. J Speech Lang Hear Res 2011; 54:693-704. [PMID: 20844253 DOI: 10.1044/1092-4388(2010/10-0143)] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Indexed: 05/29/2023]
Abstract
PURPOSE: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz.
METHOD: Speech and narrowband noise were presented at a low sound level to 8 listeners during fMRI using 2 common scanning protocols: short ("continuous") and long ("sparse") temporal schemes. Three outcome measures were acquired simultaneously during fMRI: ratings of listening quality, discrimination performance, and brain activity.
RESULTS: Subjective ratings and discrimination performance were significantly improved by ANC and sparse acquisition. Sparse acquisition was the more robust method for detecting auditory cortical activity. ANC reduced some of the "extra-auditory" activity that might be associated with the effort required for perceptual discrimination in a noisy environment and also offered small improvements for detecting activity within Heschl's gyrus and planum polare.
CONCLUSIONS: For the scanning protocols evaluated here, the sparse temporal scheme was the more preferable for detecting sound-evoked activity. In addition, ANC ensures that listening difficulty is determined more by the chosen stimulus parameters and less by the adverse testing environment.
|
44
|
Paltoglou AE, Sumner CJ, Hall DA. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging. Eur J Neurosci 2011; 33:1733-41. [PMID: 21447093 PMCID: PMC3110306 DOI: 10.1111/j.1460-9568.2011.07656.x] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Indexed: 11/30/2022]
Abstract
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast.
|
45
|
Tschacher W, Schildt M, Sander K. Brain connectivity in listening to affective stimuli: a functional magnetic resonance imaging (fMRI) study and implications for psychotherapy. Psychother Res 2011; 20:576-88. [PMID: 20845228 DOI: 10.1080/10503307.2010.493538] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Indexed: 10/19/2022]
Abstract
To investigate the functional connectivity among amygdala, insula, and auditory cortex during affective auditory stimulation and its relevance for psychotherapy, the authors recorded, using functional magnetic resonance imaging (fMRI), the blood oxygenation level-dependent (BOLD) responses of these brain regions in 20 healthy adults while listening to affective sounds (laughing and crying). Their connectivity was analyzed by time-series panel analysis. The authors found significant positive associations among brain regions, with time-lagged associations generally directed from the right to the left hemisphere. Associations between amygdalar and cortical regions, however, were negative; specifically, activations of the left auditory cortex preceded decreases of the right amygdala. This suggested that affect regulation using cognitive control may have been achieved through active inhibition of amygdalar structures by the cortex. The authors discuss the implications of the findings for the change mechanisms inherent in psychotherapy.
Affiliation(s)
- Wolfgang Tschacher
- University Hospital of Psychiatry, University of Bern, Bern, Switzerland.
|
46
|
Irwin A, Hall DA, Peters A, Plack CJ. Listening to urban soundscapes: Physiological validity of perceptual dimensions. Psychophysiology 2011; 48:258-68. [DOI: 10.1111/j.1469-8986.2010.01051.x] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Indexed: 11/28/2022]
|
47
|
Woods DL, Herron TJ, Cate AD, Yund EW, Stecker GC, Rinne T, Kang X. Functional properties of human auditory cortical fields. Front Syst Neurosci 2010; 4:155. [PMID: 21160558 PMCID: PMC3001989 DOI: 10.3389/fnsys.2010.00155] [Citation(s) in RCA: 83] [Impact Index Per Article: 5.9] [Received: 05/14/2010] [Accepted: 11/05/2010] [Indexed: 11/23/2022]
Abstract
While auditory cortex in non-human primates has been subdivided into multiple functionally specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and non-attended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to non-attended sounds. Three centrally located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, VANCHCS Martinez, CA, USA
|
48
|
Deike S, Scheich H, Brechmann A. Active stream segregation specifically involves the left human auditory cortex. Hear Res 2010; 265:30-7. [PMID: 20233603 DOI: 10.1016/j.heares.2010.03.005] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.4] [Received: 07/07/2009] [Revised: 02/15/2010] [Accepted: 03/11/2010] [Indexed: 11/27/2022]
Abstract
An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition, in which only one non-separable stream was presented, the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, were involved only in stream segregation based on pitch.
Affiliation(s)
- Susann Deike
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany.
|
49
|
Mayhew SD, Dirckx SG, Niazy RK, Iannetti GD, Wise RG. EEG signatures of auditory activity correlate with simultaneously recorded fMRI responses in humans. Neuroimage 2010; 49:849-64. [PMID: 19591945 DOI: 10.1016/j.neuroimage.2009.06.080] [Citation(s) in RCA: 65] [Impact Index Per Article: 4.6] [Received: 06/02/2008] [Revised: 05/28/2009] [Accepted: 06/03/2009] [Indexed: 01/21/2023]
Affiliation(s)
- Stephen D Mayhew
- Centre for Functional Magnetic Resonance Imaging of the Brain, Department of Clinical Neurology, John Radcliffe Hospital, Headington, Oxford, UK.
|
50
|
Abstract
PURPOSE OF REVIEW: This review summarizes recent advances in functional magnetic resonance imaging that reveal similarities in the organization of human auditory cortex (HAC) and auditory cortex of nonhuman primates.
RECENT FINDINGS: Functional magnetic resonance imaging studies have shown that HAC is a compact region that covers less than 8% of the total cortical surface. HAC is subdivided into more than a dozen distinct auditory cortical fields (ACFs) that surround Heschl's gyri on the superior temporal plane. Recent advances that permit the visualization of the results of functional magnetic imaging experiments directly on the cortical surface have provided new insights into the organization of human ACFs. Evidence suggests that medial regions of HAC are organized in a manner similar to the auditory cortex of other primate species, with a set of tonotopically organized core ACFs surrounded by belt ACFs that often share tonotopic organization with the core. Although influenced by attention, responses in HAC core and belt fields are largely determined by the acoustic properties of stimuli, including their frequency, intensity, and location. In contrast, lateral regions of HAC contain parabelt fields that are little influenced by simple acoustic features but rather respond to behaviorally relevant complex sounds such as speech and are strongly modulated by attention.
SUMMARY: HAC conserves the basic structural and functional organization of auditory cortex as seen in Old World primate species. A central challenge for future research is to understand how this basic primate plan has evolved to support uniquely human abilities such as music and language.
|