1. Wong E, Radziwon K, Chen GD, Liu X, Manno FA, Manno SH, Auerbach B, Wu EX, Salvi R, Lau C. Functional magnetic resonance imaging of enhanced central auditory gain and electrophysiological correlates in a behavioral model of hyperacusis. Hear Res 2020; 389:107908. [PMID: 32062293] [DOI: 10.1016/j.heares.2020.107908]
Abstract
Hyperacusis is a debilitating hearing condition in which normal everyday sounds are perceived as exceedingly loud, annoying, aversive or even painful. The prevalence of hyperacusis approaches 10%, making it an important, but understudied medical condition. To noninvasively identify the neural correlates of hyperacusis in an animal model, we used sound-evoked functional magnetic resonance imaging (fMRI) to locate regions of abnormal activity in the central nervous system of rats with behavioral evidence of hyperacusis induced with an ototoxic drug (sodium salicylate, 250 mg/kg, i.p.). Reaction time-intensity measures of loudness-growth revealed behavioral evidence of salicylate-induced hyperacusis at high intensities. fMRI revealed significantly enhanced sound-evoked responses in the auditory cortex (AC) to 80 dB SPL tone bursts presented at 8 and 16 kHz. Sound-evoked responses in the inferior colliculus (IC) were also enhanced, but to a lesser extent. To confirm the main results, electrophysiological recordings of spike discharges from multi-unit clusters were obtained from the central auditory pathway. Salicylate significantly enhanced tone-evoked spike-discharges from multi-unit clusters in the AC from 4 to 30 kHz at intensities ≥60 dB SPL; less enhancement occurred in the medial geniculate body (MGB), and even less in the IC. Our results demonstrate for the first time that non-invasive sound-evoked fMRI can be used to identify regions of neural hyperactivity throughout the brain in an animal model of hyperacusis.
Affiliation(s)
- Eddie Wong: Department of Physics, City University of Hong Kong, Hong Kong, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China; Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
- Kelly Radziwon: Center for Hearing & Deafness, Department of Communicative Disorders and Sciences, SUNY at Buffalo, 137 Cary Hall, Buffalo, NY, 14214, USA
- Guang-Di Chen: Center for Hearing & Deafness, Department of Communicative Disorders and Sciences, SUNY at Buffalo, 137 Cary Hall, Buffalo, NY, 14214, USA
- Xiaopeng Liu: Center for Hearing & Deafness, Department of Communicative Disorders and Sciences, SUNY at Buffalo, 137 Cary Hall, Buffalo, NY, 14214, USA
- Francis A.M. Manno: Department of Physics, City University of Hong Kong, Hong Kong, China; School of Biomedical Engineering, University of Sydney, Sydney, New South Wales, Australia
- Sinai H.C. Manno: Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Benjamin Auerbach: Center for Hearing & Deafness, Department of Communicative Disorders and Sciences, SUNY at Buffalo, 137 Cary Hall, Buffalo, NY, 14214, USA
- Ed X Wu: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China; Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
- Richard Salvi: Center for Hearing & Deafness, Department of Communicative Disorders and Sciences, SUNY at Buffalo, 137 Cary Hall, Buffalo, NY, 14214, USA; Department of Audiology and Speech-Language Pathology, Asia University, Taichung, Taiwan, ROC
- Condon Lau: Department of Physics, City University of Hong Kong, Hong Kong, China
2. Perrachione TK, Ghosh SS, Ostrovskaya I, Gabrieli JDE, Kovelman I. Phonological working memory for words and nonwords in cerebral cortex. J Speech Lang Hear Res 2017; 60:1959-1979. [PMID: 28631005] [PMCID: PMC5831089] [DOI: 10.1044/2017_jslhr-l-15-0446]
Abstract
PURPOSE The primary purpose of this study was to identify the brain bases of phonological working memory (the short-term maintenance of speech sounds) using behavioral tasks analogous to clinically sensitive assessments of nonword repetition. The secondary purpose of the study was to identify how individual differences in brain activation were related to participants' nonword repetition abilities. METHOD We used functional magnetic resonance imaging to measure neurophysiological response during a nonword discrimination task derived from standard clinical assessments of phonological working memory. Healthy adult control participants (N = 16) discriminated pairs of real words or nonwords under varying phonological working memory load, which we manipulated by parametrically varying the number of syllables in target (non)words. Participants' cognitive and phonological abilities were also measured using standardized assessments. RESULTS Neurophysiological responses in bilateral superior temporal gyrus, inferior frontal gyrus, and supplementary motor area increased with greater phonological working memory load. Activation in left superior temporal gyrus during nonword discrimination correlated with participants' performance on standard clinical nonword repetition tests. CONCLUSION These results suggest that phonological working memory is related to the function of cortical structures that canonically underlie speech perception and production.
Affiliation(s)
- Satrajit S. Ghosh: Massachusetts Institute of Technology, Cambridge; Harvard Medical School, Boston, MA
- Irina Ostrovskaya: Massachusetts Institute of Technology, Cambridge; Harvard Medical School, Boston, MA
- John D. E. Gabrieli: Massachusetts Institute of Technology, Cambridge; Harvard Medical School, Boston, MA
- Ioulia Kovelman: Massachusetts Institute of Technology, Cambridge; University of Michigan, Ann Arbor
3. Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873] [DOI: 10.1016/j.neuroimage.2015.06.006]
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
Affiliation(s)
- Régis Trapeau: International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner: International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada
4. Meyer GF, Spray A, Fairlie JE, Uomini NT. Inferring common cognitive mechanisms from brain blood-flow lateralization data: a new methodology for fTCD analysis. Front Psychol 2014; 5:552. [PMID: 24982641] [PMCID: PMC4059176] [DOI: 10.3389/fpsyg.2014.00552]
Abstract
Current neuroimaging techniques with high spatial resolution constrain participant motion so that many natural tasks cannot be carried out. The aim of this paper is to show how a time-locked correlation-analysis of cerebral blood flow velocity (CBFV) lateralization data, obtained with functional TransCranial Doppler (fTCD) ultrasound, can be used to infer cerebral activation patterns across tasks. In a first experiment we demonstrate that the proposed analysis method results in data that are comparable with the standard Lateralization Index (LI) for within-task comparisons of CBFV patterns, recorded during cued word generation (CWG) at two difficulty levels. In the main experiment we demonstrate that the proposed analysis method shows correlated blood-flow patterns for two different cognitive tasks that are known to draw on common brain areas, CWG, and Music Synthesis. We show that CBFV patterns for Music and CWG are correlated only for participants with prior musical training. CBFV patterns for tasks that draw on distinct brain areas, the Tower of London and CWG, are not correlated. The proposed methodology extends conventional fTCD analysis by including temporal information in the analysis of cerebral blood-flow patterns to provide a robust, non-invasive method to infer whether common brain areas are used in different cognitive tasks. It complements conventional high resolution imaging techniques.
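To make the analysis concrete, a minimal, hypothetical Python sketch of a time-locked correlation analysis of this kind is given below: it epochs the left-minus-right CBFV difference around task onsets, averages the epochs, and correlates the averaged curves across two tasks. The function names, epoch parameters, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code) of a time-locked correlation
# analysis of fTCD lateralization data. Epoch length, onsets and the synthetic
# signals below are illustrative assumptions.
import numpy as np

def lateralization_timecourse(left_cbfv, right_cbfv, onsets, epoch_len):
    """Average the left-minus-right CBFV difference time-locked to task onsets."""
    diff = np.asarray(left_cbfv, float) - np.asarray(right_cbfv, float)
    epochs = np.stack([diff[o:o + epoch_len] for o in onsets
                       if o + epoch_len <= diff.size])
    # (Baseline correction against a pre-onset window is omitted for brevity.)
    return epochs.mean(axis=0)

def task_similarity(tc_a, tc_b):
    """Pearson correlation between two time-locked lateralization curves."""
    return np.corrcoef(tc_a, tc_b)[0, 1]

# Synthetic demo: two tasks driven by a common left-lateralized activation
# pattern should yield a high cross-task correlation.
rng = np.random.default_rng(0)
t = np.arange(1000)
shared = np.sin(2 * np.pi * t / 200)              # common activation time course
onsets, epoch_len = [0, 200, 400, 600], 200
task_a = lateralization_timecourse(shared + rng.normal(0, 0.3, t.size),
                                   rng.normal(0, 0.3, t.size), onsets, epoch_len)
task_b = lateralization_timecourse(shared + rng.normal(0, 0.3, t.size),
                                   rng.normal(0, 0.3, t.size), onsets, epoch_len)
print(f"cross-task correlation: {task_similarity(task_a, task_b):.2f}")
```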
Affiliation(s)
- Georg F Meyer: Department of Psychological Sciences, University of Liverpool, Liverpool, UK
- Amy Spray: School of Psychology, University of Liverpool, Liverpool, UK
- Jo E Fairlie: Department of Archaeology, Classics and Egyptology, University of Liverpool, Liverpool, UK
- Natalie T Uomini: Department of Archaeology, Classics and Egyptology, University of Liverpool, Liverpool, UK
5. The encoding of vowels and temporal speech cues in the auditory cortex of professional musicians: an EEG study. Neuropsychologia 2013; 51:1608-18. [DOI: 10.1016/j.neuropsychologia.2013.04.007]
6. Functional localization of the auditory thalamus in individual human subjects. Neuroimage 2013; 78:295-304. [PMID: 23603350] [DOI: 10.1016/j.neuroimage.2013.04.035]
Abstract
Here we describe an easily implemented protocol based on sparse MR acquisition and a scrambled 'music' auditory stimulus that allows for reliable measurement of functional activity within the medial geniculate body (MGB, the primary auditory thalamic nucleus) in individual subjects. We find that our method is as accurate and reliable as previously developed structural methods, and offers significantly more accuracy in identifying the MGB than group-based methods. We also find that lateralization and binaural summation within the MGB resemble those found in the auditory cortex.
7. Zhang L, Shu H, Zhou F, Wang X, Li P. Common and distinct neural substrates for the perception of speech rhythm and intonation. Hum Brain Mapp 2010; 31:1106-16. [PMID: 20063360] [DOI: 10.1002/hbm.20922]
Abstract
The present study examines the neural substrates for the perception of speech rhythm and intonation. Subjects listened passively to synthesized speech stimuli that contained no semantic and phonological information, in three conditions: (1) continuous speech stimuli with fixed syllable duration and fundamental frequency in the standard condition, (2) stimuli with varying vocalic durations of syllables in the speech rhythm condition, and (3) stimuli with varying fundamental frequency in the intonation condition. Compared to the standard condition, speech rhythm activated the right middle superior temporal gyrus (mSTG), whereas intonation activated the bilateral superior temporal gyrus and sulcus (STG/STS) and the right posterior STS. Conjunction analysis further revealed that rhythm and intonation activated a common area in the right mSTG but compared to speech rhythm, intonation elicited additional activations in the right anterior STS. Findings from the current study reveal that the right mSTG plays an important role in prosodic processing. Implications of our findings are discussed with respect to neurocognitive theories of auditory processing.
Affiliation(s)
- Linjun Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
8. Altmann CF, Júnior CGDO, Heinemann L, Kaiser J. Processing of spectral and amplitude envelope of animal vocalizations in the human auditory cortex. Neuropsychologia 2010; 48:2824-32. [DOI: 10.1016/j.neuropsychologia.2010.05.024]
9. Nahum M, Renvall H, Ahissar M. Dynamics of cortical responses to tone pairs in relation to task difficulty: a MEG study. Hum Brain Mapp 2009; 30:1592-604. [PMID: 18711706] [DOI: 10.1002/hbm.20629]
Abstract
We investigated the effect of task difficulty on the dynamics of auditory cortical responses. Whole-scalp magnetoencephalographic (MEG) signals were recorded while subjects performed a same/different frequency discrimination task on equiprobable tone pairs applied in blocks of five, which were separated by a 10 s intertrial interval. Task difficulty was manipulated by the interpair frequency difference. The manipulation of task difficulty affected the amplitude of the N100m response to the first tone and the latency of the N100m response to the second tone in each pair. The N100m responses were smaller and peaked significantly later in the difficult than in the easy condition. The later processing field (PF) responses were longer in duration in the difficult condition. In both conditions, the duration of the PF response was negatively correlated with the subject's performance in the task, and was longer in the less successful subjects. The PF response may thus reflect the subjects' effort to resolve the task. The N100m and the PF responses did not differ between the tone pairs along the five-pair trial as a function of task difficulty, suggesting that changes in response along the five-pair trial are not easily affected by high-level manipulations.
Affiliation(s)
- Mor Nahum: Interdisciplinary Center for Neural Computation, Hebrew University, Mt. Scopus, Jerusalem, Israel
10. Left hemisphere specialization for duration discrimination of musical and speech sounds. Neuropsychologia 2008; 46:2013-9. [DOI: 10.1016/j.neuropsychologia.2008.01.019]
11. Hunter MD, Lee KH, Tandon P, Parks RW, Wilkinson ID, Woodruff PWR. Lateral response dynamics and hemispheric dominance for speech perception. Neuroreport 2007; 18:1295-9. [PMID: 17632286] [DOI: 10.1097/wnr.0b013e32827420e4]
Abstract
In this study, we investigated the mechanism for the left cerebral hemisphere's dominance for speech perception. We utilized the crossover of auditory pathways in the central nervous system to present speech stimuli more directly to the left hemisphere (via the right ear) and right hemisphere (via the left ear). Using functional MRI, we found that estimated duration of neural response in the left auditory cortex increased as more speech information was directly received from the right ear. Conversely, response duration in the right auditory cortex was not modulated when more speech information was directly received from the left ear. These data suggest that selective temporal responding distinguishes the dominant from nondominant hemisphere of the human brain during speech perception.
Affiliation(s)
- Michael D Hunter: Sheffield Cognition and Neuroimaging Laboratory, Academic Clinical Psychiatry, University of Sheffield, UK
12. Dick F, Saygin AP, Galati G, Pitzalis S, Bentrovato S, D'Amico S, Wilson S, Bates E, Pizzamiglio L. What is involved and what is necessary for complex linguistic and nonlinguistic auditory processing: evidence from functional magnetic resonance imaging and lesion data. J Cogn Neurosci 2007; 19:799-816. [PMID: 17488205] [DOI: 10.1162/jocn.2007.19.5.799]
Abstract
We used functional magnetic resonance imaging (fMRI) in conjunction with a voxel-based approach to lesion symptom mapping to quantitatively evaluate the similarities and differences between brain areas involved in language and environmental sound comprehension. In general, we found that language and environmental sounds recruit highly overlapping cortical regions, with cross-domain differences being graded rather than absolute. Within language-based regions of interest, we found that in the left hemisphere, language and environmental sound stimuli evoked very similar volumes of activation, whereas in the right hemisphere, there was greater activation for environmental sound stimuli. Finally, lesion symptom maps of aphasic patients based on environmental sounds or linguistic deficits [Saygin, A. P., Dick, F., Wilson, S. W., Dronkers, N. F., & Bates, E. Shared neural resources for processing language and environmental sounds: Evidence from aphasia. Brain, 126, 928–945, 2003] were generally predictive of the extent of blood oxygenation level dependent fMRI activation across these regions for sounds and linguistic stimuli in young healthy subjects.
13. Lehmann C, Herdener M, Schneider P, Federspiel A, Bach DR, Esposito F, di Salle F, Scheffler K, Kretz R, Dierks T, Seifritz E. Dissociated lateralization of transient and sustained blood oxygen level-dependent signal components in human primary auditory cortex. Neuroimage 2006; 34:1637-42. [PMID: 17175176] [DOI: 10.1016/j.neuroimage.2006.11.011]
Abstract
Among other auditory operations, the analysis of different sound levels received at both ears is fundamental for the localization of a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons yielding asymmetric hemispheric activity patterns with acoustic stimuli having maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and have specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response, in contrast, was contralateralized. This dissociation suggests a differential role of these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
Affiliation(s)
- Christoph Lehmann: University Hospital of Clinical Psychiatry, University of Bern, 3000 Bern, Switzerland
14. Voisin J, Bidet-Caulet A, Bertrand O, Fonlupt P. Listening in silence activates auditory areas: a functional magnetic resonance imaging study. J Neurosci 2006; 26:273-8. [PMID: 16399697] [PMCID: PMC6674327] [DOI: 10.1523/jneurosci.2967-05.2006]
Abstract
Directing attention to some acoustic features of a sound has been shown repeatedly to modulate the stimulus-induced neural responses. In contrast, little is known about the neurophysiological impact of auditory attention when the auditory scene remains empty. We performed an experiment in which subjects had to detect a sound emerging from silence (the sound was detectable after different durations of silence). Two frontal activations (right dorsolateral prefrontal and inferior frontal) were found, regardless of the side where sound was searched for, consistent with the well-established role of these regions in attentional control. The main result was that the superior temporal cortex showed activations contralateral to the side where sound was expected to be present. The area extended from the vicinity of Heschl's gyrus to the surrounding areas (planum temporale/anterior lateral areas). The effect consisted of both an increase in the response to a sound delivered after attention was directed to detect its emergence and a baseline shift during the silent period. Thus, in the absence of any acoustic stimulus, the search for an auditory input was found to activate the auditory cortex.
Affiliation(s)
- Julien Voisin: Institut National de la Santé et de la Recherche Médicale, Unité 280, Institut Fédératif des Neurosciences de Lyon, Université Claude Bernard Lyon 1, F-69000, Lyon, France
15. Meyer M, Zysset S, von Cramon DY, Alter K. Distinct fMRI responses to laughter, speech, and sounds along the human peri-sylvian cortex. Brain Res Cogn Brain Res 2005; 24:291-306. [PMID: 15993767] [DOI: 10.1016/j.cogbrainres.2005.02.008]
Abstract
In this event-related fMRI study, 12 right-handed volunteers heard human laughter, sentential speech, and nonvocal sounds in which global temporal and harmonic information were varied whilst they were performing a simple auditory target detection. This study aimed to delineate distinct peri-auditory regions which preferentially respond to laughter, speech, and nonvocal sounds. Results show that all three types of stimuli evoked blood-oxygen-level-dependent responses along the left and right peri-sylvian cortex. However, we observed differences in regional strength and lateralization in that (i) hearing human laughter preferentially involves auditory and somatosensory fields primarily in the right hemisphere, (ii) hearing spoken sentences activates left anterior and posterior lateral temporal regions, (iii) hearing nonvocal sounds recruits bilateral areas in the medial portion of Heschl's gyrus and at the medial wall of the posterior Sylvian Fissure (planum parietale and parietal operculum). Generally, the data imply a differential regional sensitivity of peri-sylvian areas to different auditory stimuli with the left hemisphere responding more strongly to speech and with the right hemisphere being more amenable to nonspeech stimuli. Interestingly, passive perception of human laughter activates brain regions which control motor (larynx) functions. This observation may speak to the issue of a dense intertwining of expressive and receptive mechanisms in the auditory domain. Furthermore, the present study provides evidence for a functional role of inferior parietal areas in auditory processing. Finally, a post hoc conjunction analysis meant to reveal the neural substrates of human vocal timbre demonstrates a particular preference of left and right lateral parts of the superior temporal lobes for stimuli which are made up of human voices relative to nonvocal sounds.
Affiliation(s)
- Martin Meyer: Department of Neuropsychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland
16. Van Meir V, Boumans T, De Groof G, Van Audekerke J, Smolders A, Scheunders P, Sijbers J, Verhoye M, Balthazart J, Van der Linden A. Spatiotemporal properties of the BOLD response in the songbirds' auditory circuit during a variety of listening tasks. Neuroimage 2005; 25:1242-55. [PMID: 15850742] [DOI: 10.1016/j.neuroimage.2004.12.058]
Abstract
Auditory fMRI in humans has recently received increasing attention from cognitive neuroscientists as a tool for understanding the mental processing of learned acoustic sequences and for analyzing speech recognition and the development of musical skills. The present study introduces this tool in a well-documented animal model for vocal learning, the songbird, and provides fundamental insight into the main technical issues associated with auditory fMRI in these songbirds. Stimulation protocols with various listening tasks lead to appropriate activation of successive relays in the songbirds' auditory pathway. The elicited BOLD response is also region and stimulus specific, and its temporal aspects provide accurate measures of the changes in brain physiology induced by the acoustic stimuli. Extensive repetition of an identical stimulus does not lead to habituation of the response in the primary or secondary telencephalic auditory regions of anesthetized subjects. The BOLD signal intensity changes during a stimulation and subsequent rest period have a very specific time course which shows a remarkable resemblance to auditory evoked BOLD responses commonly observed in human subjects. This observation indicates that auditory fMRI in the songbird may establish a link between auditory-related neuroimaging studies done in humans and the large body of neuroethological research on song learning and neuroplasticity performed in songbirds.
Affiliation(s)
- Vincent Van Meir: Bio-Imaging Laboratory, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
17. Langers DRM, Van Dijk P, Backes WH. Interactions between hemodynamic responses to scanner acoustic noise and auditory stimuli in functional magnetic resonance imaging. Magn Reson Med 2005; 53:49-60. [PMID: 15690502] [DOI: 10.1002/mrm.20315]
Abstract
In functional MRI experiments on the central auditory system, activation caused by acoustic scanner noise is a dominating factor that partially masks the hemodynamic response signals to sound stimuli of interest. In this study, the nonlinear interaction between auditory responses to single scans and those to tone stimuli was investigated. By using irregular acquisition repetition times and quasi-random stimulus timings, the brain responses to pure tone stimuli were analyzed, as well as their interaction with scanner noise. The tone frequencies were chosen to match either the fundamental frequency of the scanner noise (730 Hz) or a region with little spectral power (4.70 kHz). The hemodynamic responses could be characterized by amplitudes of 1.3% and a time-to-peak of 4.0-4.5 sec in the absence of scanner noise. Interaction effects due to a single previous scan typically decreased the response magnitudes to 0.9%. The functional shape of the interaction was analyzed and could be described by a highly separable, dominantly symmetric interaction function that fairly agreed with a low-order Volterra expansion of a simple nonlinear model. Interactions were stronger and more complex in shape when the spectral content of the tone stimulus and the scanner noise were more similar.
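The interaction the abstract describes can be pictured with a toy, low-order (Volterra-style) model in which the response to a tone preceded by a scan is the sum of the two first-order responses minus a second-order cross term. The sketch below is purely illustrative; the gamma-shaped response and the interaction coefficient are assumptions, not the authors' fitted model or parameters.

```python
# Deliberately simplified, hypothetical illustration of a low-order Volterra
# description of response interaction: overlapping responses add sublinearly
# because a negative second-order term is subtracted from their linear sum.
import numpy as np

def hrf(t, peak=4.25, amp=1.3):
    """Toy gamma-like hemodynamic response, peaking at `peak` s with `amp` %."""
    t = np.asarray(t, dtype=float)
    h = (t / peak) ** 2 * np.exp(-t / (peak / 2))
    return amp * h / h.max()

t = np.arange(0.0, 20.0, 0.1)                 # time in seconds
tone = hrf(t)                                 # tone response with no scan noise
scan = hrf(np.clip(t - 2.0, 0.0, None))       # scan-noise response 2 s earlier

# Second-order term: proportional to the product of the two first-order
# responses; subtracting it shrinks the measured tone response below its
# noise-free amplitude, in the direction the abstract reports.
interaction = 0.5 * tone * scan
tone_after_scan = tone - interaction

print(f"peak tone response without a preceding scan: {tone.max():.2f}%")
print(f"peak tone response with a scan 2 s earlier : {tone_after_scan.max():.2f}%")
```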
Affiliation(s)
- Dave R M Langers: Department of Radiology and Department of Otorhinolaryngology and Head & Neck Surgery, Maastricht University Hospital, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
18. Petkov CI, Kang X, Alho K, Bertrand O, Yund EW, Woods DL. Attentional modulation of human auditory cortex. Nat Neurosci 2004; 7:658-63. [PMID: 15156150] [DOI: 10.1038/nn1256]
Abstract
Attention powerfully influences auditory perception, but little is understood about the mechanisms whereby attention sharpens responses to unattended sounds. We used high-resolution surface mapping techniques (using functional magnetic resonance imaging, fMRI) to examine activity in human auditory cortex during an intermodal selective attention task. Stimulus-dependent activations (SDAs), evoked by unattended sounds during demanding visual tasks, were maximal over mesial auditory cortex. They were tuned to sound frequency and location, and showed rapid adaptation to repeated sounds. Attention-related modulations (ARMs) were isolated as response enhancements that occurred when subjects performed pitch-discrimination tasks. In contrast to SDAs, ARMs were localized to lateral auditory cortex, showed broad frequency and location tuning, and increased in amplitude with sound repetition. The results suggest a functional dichotomy of auditory cortical fields: stimulus-determined mesial fields that faithfully transmit acoustic information, and attentionally labile lateral fields that analyze acoustic features of behaviorally relevant sounds.
Affiliation(s)
- Christopher I Petkov: Center for Neuroscience, UC Davis, 1544 Newton Court, Davis, California 95616, USA
19. Seifritz E, Di Salle F, Esposito F, Bilecen D, Neuhoff JG, Scheffler K. Sustained blood oxygenation and volume response to repetition rate-modulated sound in human auditory cortex. Neuroimage 2003; 20:1365-70. [PMID: 14568505] [DOI: 10.1016/s1053-8119(03)00421-x]
Abstract
The blood oxygen level-dependent (BOLD) signal time course in the auditory cortex is characterized by two components, an initial transient peak and a subsequent sustained plateau with smaller amplitude. Because the T2* signal detected by functional magnetic resonance imaging (fMRI) depends on at least two counteracting factors, blood oxygenation and volume, we examined whether the reduction in the sustained BOLD signal results from decreased levels of oxygenation or from increased levels of blood volume. We used conventional fMRI to quantify the BOLD signal and fMRI in combination with superparamagnetic contrast agent to quantify blood volume and employed repetition rate-modulated sounds in a silent background to manipulate the response amplitude in the auditory cortex. In the BOLD signal, the initial peak reached 3.3% with pulsed sound and 1.9% with continuous sound, whereas the sustained BOLD signal fell to 2.2% with pulsed sound and to 0.5% with continuous sound, respectively. The repetition rate-dependent reduction in the sustained BOLD amplitude was accompanied by concordant changes in sustained blood volume levels, which, compared to silence, increased by approximately 30% with pulsed and by approximately 10% with continuous sound. Thus, our data suggest that the reduced amplitude of the sustained BOLD signal reflects stimulus-dependent modulation of blood oxygenation rather than blood volume-related effects.
Affiliation(s)
- Erich Seifritz: Department of Psychiatry, University of Basel, 4025 Basel, Switzerland
20. Jäncke L, Specht K, Shah JN, Hugdahl K. Focused attention in a simple dichotic listening task: an fMRI experiment. Brain Res Cogn Brain Res 2003; 16:257-66. [PMID: 12668235] [DOI: 10.1016/s0926-6410(02)00281-1]
Abstract
Whole-head functional magnetic resonance imaging (fMRI) was used in nine neurologically intact subjects to measure the hemodynamic responses in the context of dichotic listening (DL). In order to eliminate the influence of verbal information processing, tones of different frequencies were used as stimuli. Three different dichotic listening tasks were used: the subjects were instructed to either concentrate on the stimuli presented in both ears (DIV), or only in the left (FL) or right (FR) ear, and to monitor the auditory input for a specific target tone. When the target tone was detected, the subjects were required to indicate this by pressing a response button. Compared to the resting state, all dichotic listening tasks evoked strong hemodynamic responses within a distributed network comprising temporal, parietal, and frontal brain areas. Thus, it is clear that dichotic listening makes use of various cognitive functions located within the dorsal and ventral streams of auditory information processing (i.e., the 'what' and 'where' streams). Comparing the three different dichotic listening conditions with each other only revealed a significant difference in the pre-SMA and within the left planum temporale area. The pre-SMA was generally more strongly activated during the DIV condition than during the FR and FL conditions. Within the planum temporale, the strongest activation was found during the FR condition and the weakest during the DIV condition. These findings were taken as evidence that even a simple dichotic listening task such as the one used here makes use of a distributed neural network comprising the dorsal and ventral streams of auditory information processing. In addition, these results support the previously made assumption that planum temporale activation is modulated by attentional strategies. Finally, the present findings revealed that the pre-SMA, which is mostly thought to be involved in higher-order motor control processes, is also involved in cognitive processes operative during dichotic listening.
Affiliation(s)
- Lutz Jäncke: Institute of Psychology, Division of Neuropsychology, University Zürich, Treichlerstr 10, CH-8032 Zürich, Switzerland
21. Loose R, Kaufmann C, Auer DP, Lange KW. Human prefrontal and sensory cortical activity during divided attention tasks. Hum Brain Mapp 2003; 18:249-59. [PMID: 12632463] [PMCID: PMC6871829] [DOI: 10.1002/hbm.10082]
Abstract
In our natural environment, the ability to divide attention is essential since we attend simultaneously to a number of sensory modalities, e.g., to visual and auditory stimuli. In this study, functional magnetic resonance imaging (fMRI) was used to study brain activation while a divided attention task was performed. Brain activation was also assessed under selective attention. Fourteen healthy male subjects aged between 19 and 28 years underwent fMRI studies using gradient EPI sequences. Cingulate activation was evident in all attention tasks. Focusing attention on one modality (visual or auditory) increased the activity in the corresponding primary and secondary sensory area. When attention is divided between both modalities, the activation in the sensory areas is decreased, possibly due to a limited capacity of the system for controlled processing. Left prefrontal activation, however, was evident selectively during the divided attention task. The present results suggest that this area may be important in the execution of controlled processing when attention is divided between two sources of information. These results support the view that the prefrontal cortex is involved in the central executive system and controls attention and information flow.
Affiliation(s)
- Rainer Loose: Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Klaus W. Lange: Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
22. Harms MP, Melcher JR. Sound repetition rate in the human auditory pathway: representations in the waveshape and amplitude of fMRI activation. J Neurophysiol 2002; 88:1433-50. [PMID: 12205164] [DOI: 10.1152/jn.2002.88.3.1433]
Abstract
Sound repetition rate plays an important role in stream segregation, temporal pattern recognition, and the perception of successive sounds as either distinct or fused. This study was aimed at elucidating the neural coding of repetition rate and its perceptual correlates. We investigated the representations of rate in the auditory pathway of human listeners using functional magnetic resonance imaging (fMRI), an indicator of population neural activity. Stimuli were trains of noise bursts presented at rates ranging from low (1-2/s; each burst is perceptually distinct) to high (35/s; individual bursts are not distinguishable). There was a systematic change in the form of fMRI response rate-dependencies from midbrain to thalamus to cortex. In the inferior colliculus, response amplitude increased with increasing rate while response waveshape remained unchanged and sustained. In the medial geniculate body, increasing rate produced an increase in amplitude and a moderate change in waveshape at higher rates (from sustained to one showing a moderate peak just after train onset). In auditory cortex (Heschl's gyrus and the superior temporal gyrus), amplitude changed somewhat with rate, but a far more striking change occurred in response waveshape: low rates elicited a sustained response, whereas high rates elicited an unusual phasic response that included prominent peaks just after train onset and offset. The shift in cortical response waveshape from sustained to phasic with increasing rate corresponds to a perceptual shift from individually resolved bursts to fused bursts forming a continuous (but modulated) percept. Thus at high rates, a train forms a single perceptual "event," the onset and offset of which are delimited by the on and off peaks of phasic cortical responses. While auditory cortex showed a clear, qualitative correlation between perception and response waveshape, the medial geniculate body showed less correlation (since there was less change in waveshape with rate), and the inferior colliculus showed no correlation at all. Overall, our results suggest a population neural representation of the beginning and the end of distinct perceptual events that is weak or absent in the inferior colliculus, begins to emerge in the medial geniculate body, and is robust in auditory cortex.
Affiliation(s)
- Michael P Harms: Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston 02114, USA
23. Jäncke L, Wüstenberg T, Schulze K, Heinze HJ. Asymmetric hemodynamic responses of the human auditory cortex to monaural and binaural stimulation. Hear Res 2002; 170:166-78. [PMID: 12208550] [DOI: 10.1016/s0378-5955(02)00488-4]
Abstract
Applying whole-head functional magnetic resonance imaging (fMRI) in 11 neurologically intact subjects, hemodynamic responses to mon- or binaurally presented auditory stimuli were measured. To expand on previous studies in this research area, we used tones and consonant-vowel (CV) syllables. In one group of subjects (n=6) the perceived loudness of the monaurally presented stimuli was adjusted so that it matched the loudness of the binaurally presented stimuli. In a second group (n=5) no loudness adjustment was performed; thus the monaural stimuli were perceived as less loud (approximately 10 dB) than the binaural stimuli. These extensions allowed us to test whether CV syllables and tones produce different contralaterality effects (stronger hemodynamic responses in the auditory cortex contralateral to the stimulated ear) and whether binaural stimulation results in stronger activations in the auditory areas than during both monaural stimulation conditions (binaural summation) independent of loudness influences. In summary, we obtained the following findings: (1) strong contralaterality effects during monaural acoustic stimulation in the posterior superior temporal gyrus (STG) comprising the planum temporale and the dorsal bank of the superior temporal sulcus to CV syllables and tones; (2) the hemodynamic responses to contralaterally presented stimuli (during the monaural conditions) were mostly stronger than those to binaurally presented CV syllables; (3) there was no interaction between stimulus type and the size of the contralaterality effect; (4) there was no indication of binaural summation; rather, we found stronger hemodynamic responses to the sum of both monaural stimulations (right and left ear) than to binaural stimulation in all auditory areas; (5) there were generally stronger hemodynamic responses to CV syllables than to tones in the posterior STG, while the hemodynamic responses to tones were stronger in the anterior part of the STG (temporal pole); and finally (6) there was no general difference in terms of hemodynamic response in the auditory cortex between the two groups when receiving either loudness-matched or non-loudness-matched monaural stimulation. These findings are discussed in the context of the underlying neurophysiological mechanisms, the peculiarities of fMRI, and the direct access and callosal relay models of hemispheric lateralization.
Affiliation(s)
- L Jäncke: Institute of Psychology, Neuropsychology, University Zürich, Zürichbergstr. 43, CH-8044 Zurich, Switzerland
24.
Abstract
Recent functional neuroimaging studies have emphasized the role of the different areas within the left superior temporal sulcus (STS) for the perception of various speech stimuli. We report here the results of three independent studies additionally demonstrating hemodynamic responses in the vicinity of the planum temporale (PT). In these studies we used consonant-vowel (CV) syllables, tones, white noise, and vowels as acoustic stimuli in the context of whole-head functional magnetic resonance imaging, applying a long TR to attenuate possible masking effects by the scanner noise. To summarize, we obtained the following results for the contrasts comparing hemodynamic responses obtained during the perception of CV syllables compared to tones or white noise: (i) stronger activation in the vicinity of the left PT with two distinct foci of activation, one in a lateral position and the other more medial in the vicinity of Heschl's sulcus; (ii) stronger activation in the vicinity of the right PT; and (iii) stronger bilateral activation within the mid-STS. Further contrasts revealed the following findings: (iv) stronger bilateral activation to CV syllables than to vowels in the medial PT, (v) stronger left-sided activation to CV syllables than to vowels in the mid-STS, and (vi) stronger activation to CV syllables with voiceless initial consonants than to CV syllables with voiced initial consonants in the left medial PT. The results are compatible with the hypothesis that the STS contains neurons specialized for speech perception. However, these results also emphasize the role of the PT in the analysis of phonetic features, namely the voice-onset-time. Yet this does not mean that the PT is solely specialized for phonetic analysis. We hypothesize rather that the PT contains neurons specialized for the analysis of rapidly changing cues as was suggested by P. Tallal et al. (1993, Ann. N. Y. Acad. Sci. 682: 27-47).
Affiliation(s)
- L Jäncke: Institute of Experimental and General Psychology, Otto-von-Guericke University Magdeburg, D-39106 Magdeburg, Germany
25. Mazard A, Mazoyer B, Etard O, Tzourio-Mazoyer N, Kosslyn SM, Mellet E. Impact of fMRI acoustic noise on the functional anatomy of visual mental imagery. J Cogn Neurosci 2002; 14:172-86. [PMID: 11970784] [DOI: 10.1162/089892902317236821]
Abstract
One drawback of functional magnetic resonance imaging (fMRI) is that the subject must endure intense noise during testing. We examined the possible effect of such noise on the activation of early visual cortex during visual mental imagery. We postulated that noise may require subjects to work harder to pay attention to the task, which in turn could alter the activation pattern found in a silent environment. To test this hypothesis, we used positron emission tomography (PET) to monitor regional cerebral blood flow (rCBF) of six subjects while they performed an imagery task either in a silent environment or in an "fMRI-like" noisy environment. Both noisy and silent imagery conditions, as compared to their respective baselines, resulted in activation of a bilateral fronto-parietal network (related to spatial processing), a bilateral inferior temporal area (related to shape processing), and deactivation of anterior calcarine cortex. Among the visual areas, rCBF increased in the most posterior part of the calcarine cortex, but at a level just below the statistical threshold. However, blood flow values in the calcarine cortex during the silent imagery condition (but not the noisy imagery condition) were strongly negatively correlated with accuracy; the more challenging subjects found the task, the more strongly the calcarine cortex was activated. The subjects made more errors in the noisy condition than in the silent condition, and a direct comparison of the two conditions revealed that noise resulted in an increase in rCBF in the anterior cingulate cortex (involved in performance monitoring) and in Wernicke's area (required to encode the verbal cues used in the task). These results thus demonstrate a nonadditive effect of fMRI gradient noise, resulting in a slight but significant effect on both performance and the neural activation pattern.
Affiliation(s)
- A Mazard: CNRS UMR 6905, CEA, Université de Caen, Paris, France
26. Jäncke L, Gaab N, Wüstenberg T, Scheich H, Heinze HJ. Short-term functional plasticity in the human auditory cortex: an fMRI study. Brain Res Cogn Brain Res 2001; 12:479-85. [PMID: 11689309] [DOI: 10.1016/s0926-6410(01)00092-1]
Abstract
Applying functional magnetic resonance imaging (fMRI) techniques, hemodynamic responses elicited by sequences of pure tones of 950 Hz (standard) and deviant tones of 952, 954, and 958 Hz were measured before and 1 week after subjects had been trained at frequency discrimination for five sessions (over 1 week) using an oddball procedure. The task of the subject was to detect deviants differing from the standard stimulus. Frequency discrimination improved during the training sessions for three subjects (performance gain: T+) but not for three other subjects (no performance gain: T-). Hemodynamic responses in the auditory cortex comprising the planum temporale, planum polare and sulcus temporalis superior significantly decreased during training only for the T+ group. These activation changes were strongest for those stimuli accompanied by the strongest performance gain (958 and 954 Hz). There was no difference with respect to the hemodynamic responses in the auditory cortex between the T- group and the control group (CO), which did not receive any pitch discrimination training. The results suggest a plastic reorganization of the cortical representation for the trained frequencies, which can best be explained on the basis of 'fast learning' theories.
Affiliation(s)
- L Jäncke: Institute of General Psychology, Otto-von-Guericke University, Magdeburg, Germany
27. Jäncke L, Buchanan TW, Lutz K, Shah NJ. Focused and nonfocused attention in verbal and emotional dichotic listening: an fMRI study. Brain Lang 2001; 78:349-363. [PMID: 11703062] [DOI: 10.1006/brln.2000.2476]
Abstract
Functional magnetic resonance imaging (fMRI) was used to identify cortical regions which are involved in two dichotic listening tasks. During one task the subjects were required to allocate attention to both ears and to detect a specific target word (phonetic task), while during a second task the subjects were required to detect a specific emotional tone (emotional task). Each task was performed under three attentional conditions: the subjects were required to focus attention on the right (FR) or the left (FL) ear, or to allocate attention to both ears simultaneously. In 11 right-handed male subjects, these dichotic listening tasks evoked strong activations in a temporofrontal network involving auditory cortices located in the temporal lobe and prefrontal brain regions. Hemodynamic responses were measured in the following regions of interest: Heschl's gyrus (HG), the planum polare (PP), the planum temporale (PT), the anterior superior temporal sulcus (aSTS), the posterior superior temporal sulcus (pSTS), and the inferior frontal gyrus region (IFG) of both hemispheres. The following findings were obtained: (1) the degree of activation in HG and PP depends on the direction of attention. In particular it was found that selectively attending to right-ear input led to increased activity specifically in the left HG and PP, and attention to left-ear input increased right-sided activity in these structures; (2) hemodynamic responses in the PT, aSTS, pSTS, and IFG were not modulated by the different focused-attention conditions; (3) hemodynamic responses in HG and PP in the nonforced conditions corresponded to the sum of the activations in the forced conditions; (4) there was no general difference between the phonetic and emotion tasks in terms of hemodynamic responses; (5) hemodynamic responses in the PT and pSTS were strongly left-lateralized, reflecting the specialization of these brain regions for language processing. These findings are discussed in the context of current theories of hemispheric specialization.
Affiliation(s)
- L Jäncke: Institute of General Psychology, Otto-von-Guericke-University Magdeburg, Universitätsplatz, D-39106 Magdeburg, Germany
28.
Abstract
fMRI of the human auditory cortex response to sinusoidal tones of 200, 1000, and 3000 Hz was evaluated using a block design and conventional and "silent" event-related designs. Conventional event-related fMRI revealed the timecourse of the BOLD response (approximately 5 sec to peak, approximately 4 sec full-width-half-max, and approximately 14 sec recovery to baseline). Both event-related designs, but not the block design, provided evidence for tonotopic organization in auditory cortex. Sources of low-frequency activation were more lateral and anterior than the sources of high-frequency activation (P ≤ 0.05). In the block design, repeated rapid stimulus presentation and the coincidence of scanner noise precluded definition of the tonotopic organization revealed by the event-related approaches. Magn Reson Med 45:254-260, 2001.
Affiliation(s)
- T H Le: Department of Radiology, University of California School of Medicine, San Francisco, CA 94143, USA
29. Volkow ND, Wang GJ, Fowler JS, Rooney WD, Felder CA, Lee JH, Franceschi D, Maynard L, Schlyer DJ, Pan JW, Gatley SJ, Springer CS. Resting brain metabolic activity in a 4 tesla magnetic field. Magn Reson Med 2000; 44:701-5. [PMID: 11064404] [DOI: 10.1002/1522-2594(200011)44:5<701::aid-mrm7>3.0.co;2-j]
Abstract
MRI is a major tool for mapping brain function; thus it is important to assess potential effects on brain neuronal activity attributable to the requisite static magnetic field. This study used positron emission tomography (PET) and (18)F-deoxyglucose ((18)FDG) to measure brain glucose metabolism (a measure of brain function) in 12 subjects while their heads were in a 4 T MRI field during the (18)FDG uptake period. The results were compared with those obtained when the subjects were in the earth's field (standard PET scanner), and when they were in a simulated MRI environment in the PET instrument that imitated the restricted visual field of the MRI experiment. Whole-brain metabolism, as well as metabolism in the occipital cortex and posterior cingulate gyrus, was lower in the real (4 T) and simulated (0 T) MRI environments than in the standard PET environment. This suggests that the metabolic differences are due mainly to the visual field differences between the MRI and PET environments. We conclude that a static magnetic field of 4 T does not in itself affect this fairly sensitive measure of brain activity.
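The comparison logic can be sketched as paired tests of within-subject metabolic values across the three environments. This is not the authors' statistical procedure, and the numbers below are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: paired comparisons across the three scanning environments.
import numpy as np
from scipy import stats

metab_4t  = np.array([32.1, 30.5, 31.2, 29.8, 33.0])   # hypothetical, 4 T MRI field
metab_sim = np.array([31.8, 30.9, 31.0, 29.5, 32.7])   # hypothetical, simulated (0 T) MRI
metab_pet = np.array([34.0, 32.2, 33.1, 31.4, 35.2])   # hypothetical, standard PET

# If 4 T and the 0 T simulation do not differ but both differ from standard PET,
# the reduction is attributable to the restricted visual field, not the field itself.
print(stats.ttest_rel(metab_4t, metab_sim))
print(stats.ttest_rel(metab_4t, metab_pet))
```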
Affiliation(s)
- N D Volkow
- Medical Department, Brookhaven National Laboratory, Upton, New York, USA.
|
30
|
Giraud AL, Lorenzi C, Ashburner J, Wable J, Johnsrude I, Frackowiak R, Kleinschmidt A. Representation of the temporal envelope of sounds in the human brain. J Neurophysiol 2000; 84:1588-98. [PMID: 10980029 DOI: 10.1152/jn.2000.84.3.1588] [Citation(s) in RCA: 240] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The cerebral representation of the temporal envelope of sounds was studied in five normal-hearing subjects using functional magnetic resonance imaging. The stimuli were white noise, sinusoidally amplitude-modulated at frequencies ranging from 4 to 256 Hz. This range includes low AM frequencies (up to 32 Hz) essential for the perception of the manner of articulation and syllabic rate, and high AM frequencies (above 64 Hz) essential for the perception of voicing and prosody. The right lower brainstem (superior olivary complex), the right inferior colliculus, the left medial geniculate body, Heschl's gyrus, the superior temporal gyrus, the superior temporal sulcus, and the inferior parietal lobule were specifically responsive to AM. Global tuning curves in these regions suggest that the human auditory system is organized as a hierarchical filter bank, each processing level responding preferentially to a given AM frequency: 256 Hz for the lower brainstem, 32-256 Hz for the inferior colliculus, 16 Hz for the medial geniculate body, 8 Hz for the primary auditory cortex, and 4-8 Hz for secondary regions. The time course of the hemodynamic responses showed sustained and transient components with reversed frequency-dependent patterns: the lower the AM frequency, the better the fit with a sustained response model; the higher the AM frequency, the better the fit with a transient response model. Using cortical maps of best modulation frequency, we demonstrate that the spatial representation of AM frequencies varies according to the response type. Sustained responses yield maps of low frequencies organized in large clusters. Transient responses yield maps of high frequencies represented by a mosaic of small clusters. Very few voxels were tuned to intermediate frequencies (32-64 Hz). We did not find spatial gradients of AM frequencies associated with any response type. Our results suggest that two frequency ranges (up to 16 Hz, and 128 Hz and above) are represented in the cortex by different response types. However, the spatial segregation of these two ranges is not systematic. Most cortical regions were tuned to low frequencies and only a few to high frequencies. Yet, voxels that showed a preference for low frequencies were also responsive to high frequencies. Overall, our study shows that the temporal envelope of sounds is processed by both distinct (hierarchically organized series of filters) and shared (high and low AM frequencies eliciting different responses at the same cortical locus) neural substrates. This layout suggests that the human auditory system is organized in a parallel fashion that allows a degree of separate routing for groups of AM frequencies conveying different information and preserves a possibility for integration of complementary features in cortical auditory regions.
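A minimal sketch of the stimulus construction described above: white noise with sinusoidal amplitude modulation at each AM frequency. The sample rate, duration, and modulation depth are assumptions for illustration, not parameters reported in the paper.

```python
# Minimal sketch: sinusoidally amplitude-modulated white-noise stimuli.
import numpy as np

def am_noise(f_am_hz, duration_s=2.0, fs=44100, depth=1.0, seed=0):
    """White noise with sinusoidal amplitude modulation at f_am_hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * f_am_hz * t)
    return envelope * carrier

# One stimulus per AM frequency spanning the 4-256 Hz range in octave steps.
stimuli = {f: am_noise(f) for f in (4, 8, 16, 32, 64, 128, 256)}
```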
Affiliation(s)
- A L Giraud
- Wellcome Department of Cognitive Neurology, Institute of Neurology, London WC1N 3BG, United Kingdom.
|
31
|
Shah NJ, Steinhoff S, Mirzazade S, Zafiris O, Grosse-Ruyken ML, Jäncke L, Zilles K. The effect of sequence repeat time on auditory cortex stimulation during phonetic discrimination. Neuroimage 2000; 12:100-8. [PMID: 10875906 DOI: 10.1006/nimg.2000.0588] [Citation(s) in RCA: 39] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Acoustic noise generated by the MR scanner gradient system during fMRI studies of auditory function is a significant potential confound. Despite this, fMRI of the auditory cortex has been successful, and numerous practitioners have circumvented the problem of acoustic masking noise. In the context of auditory cortex fMRI, the sequence repeat time (TR) determines the length of time during which the scanner is quiet. Indeed, the move to whole-brain fMRI makes the problem of acoustic noise more acute and points to the need to examine the role of TR and its influence on the BOLD signal. The aim of this study was to examine the effect of varying the TR on activation of the auditory cortex during presentation and performance of a phonetic discrimination task. The results presented here demonstrate that the influence of sequence repeat time is considerable. For a short repeat time it is likely that the noise from the scanner is a significant mask and hinders accurate task performance. At the other extreme, a repeat time of 9 s is also suboptimal, probably because of attentional effects and lack of concentration, and not least because of the longer overall measurement times. The results of this study point to a complicated interplay between psychophysical factors and physical parameters: attention, acoustic noise, total duration of the experiment, the volume of acquisition, and overall difficulty of the task have to be assessed and balanced. For the paradigm used here, the results suggest an optimal TR of around 6 s for a 16-slice acquisition.
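The trade-off between TR and scanner-quiet time can be illustrated with a simple calculation: with a fixed per-volume readout, every extra second of TR is an extra second of quiet. The per-slice acquisition time below is an assumed value, not one reported in the paper.

```python
# Minimal sketch: quiet period per TR for a fixed 16-slice volume.
n_slices = 16
slice_time_s = 0.1                     # assumed EPI readout time per slice
acq_time_s = n_slices * slice_time_s   # time the gradients are actually noisy

for tr_s in (2, 3, 6, 9):
    quiet_s = tr_s - acq_time_s
    print(f"TR = {tr_s} s -> scanner quiet for {quiet_s:.1f} s "
          f"({100 * quiet_s / tr_s:.0f}% of each TR)")
```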
Affiliation(s)
- N J Shah
- Institut für Medizin, Forschungszentrum Jülich GmbH, Jülich, D-52425, Germany
|
32
|
Buchanan TW, Lutz K, Mirzazade S, Specht K, Shah NJ, Zilles K, Jäncke L. Recognition of emotional prosody and verbal components of spoken language: an fMRI study. Brain Res Cogn Brain Res 2000; 9:227-38. [PMID: 10808134 DOI: 10.1016/s0926-6410(99)00060-9] [Citation(s) in RCA: 280] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
This study examined the neural areas involved in the recognition of both emotional prosody and phonemic components of words expressed in spoken language using echo-planar, functional magnetic resonance imaging (fMRI). Ten right-handed males were asked to discriminate words based on either expressed emotional tone (angry, happy, sad, or neutral) or phonemic characteristics, specifically, initial consonant sound (bower, dower, power, or tower). Significant bilateral activity was observed in the detection of both emotional and verbal aspects of language when compared to baseline activity. We found that the detection of emotion compared with verbal detection resulted in significant activity in the right inferior frontal lobe. Conversely, the detection of verbal stimuli compared with the detection of emotion activated left inferior frontal lobe regions most significantly. Specific analysis of the anterior auditory cortex revealed increased right hemisphere activity during the detection of emotion compared to activity during verbal detection. These findings illustrate bilateral involvement in the detection of emotion in language while concomitantly showing significantly lateralized activity in both emotional and verbal detection, in both the temporal and frontal lobes.
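One conventional way to quantify the lateralization described here is a laterality index over left- and right-hemisphere activation, LI = (L - R) / (L + R). The sketch below is not the authors' analysis; the values are hypothetical placeholders, not data from the study.

```python
# Minimal sketch: laterality index for frontal activation in the two contrasts.
# Positive values indicate left-lateralized activity.
def laterality_index(left, right):
    return (left - right) / (left + right)

contrasts = {
    "emotion_vs_verbal": {"left_IFG": 0.2, "right_IFG": 0.8},   # hypothetical
    "verbal_vs_emotion": {"left_IFG": 0.9, "right_IFG": 0.3},   # hypothetical
}

for name, act in contrasts.items():
    li = laterality_index(act["left_IFG"], act["right_IFG"])
    print(f"{name}: LI = {li:+.2f}")
```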
Affiliation(s)
- T W Buchanan
- Department of Psychiatry and Behavioral Sciences, University of Oklahoma Health Sciences Center, and Veterans Affairs Medical Center, Oklahoma City, OK 73104, USA
|