201
Stamatakis EA, Marslen-Wilson WD, Tyler LK, Fletcher PC. Cingulate control of fronto-temporal integration reflects linguistic demands: A three-way interaction in functional connectivity. Neuroimage 2005; 28:115-21. [PMID: 16023871; DOI: 10.1016/j.neuroimage.2005.06.012]
Abstract
In a recent fMRI language comprehension study, we asked participants to listen to word-pairs and to make same/different judgments for regularly and irregularly inflected word forms [Tyler, L.K., Stamatakis, E.A., Post, B., Randall, B., Marslen-Wilson, W.D., 2005. Temporal and frontal systems in speech comprehension: an fMRI study of past tense processing. Neuropsychologia 43, 1963-1974.]. We found that a fronto-temporal network, including the anterior cingulate cortex (ACC), left inferior frontal gyrus (LIFG), bilateral superior temporal gyrus (STG) and middle temporal gyrus (MTG), is preferentially activated for regularly inflected words. We report a complementary re-analysis of the data seeking to understand the behavior of this network in terms of inter-regional covariances, which are taken as an index of functional connectivity. We identified regions in which activity was predicted by ACC and LIFG activity, and critically, by the interaction between these two regions. Furthermore, we determined the extent to which these inter-regional correlations were influenced differentially by the experimental context (i.e., regularly or irregularly inflected words). We found that functional connectivity between LIFG and left MTG is positively modulated by activity in the ACC and that this effect is significantly greater for regulars than irregulars. These findings suggest a monitoring role for the ACC which, in the context of processing regular inflected words, is associated with greater engagement of an integrated fronto-temporal language system.
Affiliation(s)
- E A Stamatakis
- Department of Experimental Psychology, University of Cambridge, UK.
202
Chiu CYP, Coen-Cummings M, Schmithorst VJ, Holland SK, Keith R, Nabors L, Kramer M, Rozier H. Sound blending in the brain: a functional magnetic resonance imaging investigation. Neuroreport 2005; 16:883-6. [PMID: 15931055; DOI: 10.1097/00001756-200506210-00002]
Abstract
The presence of high levels of background noise is a serious concern for functional magnetic resonance imaging studies of phonological processing using conventional methods. As a result, many such studies have focused on phonological units larger than phonemes (e.g. syllables) or used stimuli presented in the visual (e.g. printed letters) rather than the auditory domain. We used a recently developed functional magnetic resonance imaging method to present spoken stimuli without the scanner's background noise. Young adult participants mentally blended phonemes in a series (e.g. /b/, /ae/, /t/), counted the number of discrete tones, or rested. Relative to tone counting, sound blending elicited activation in bilateral temporal and prefrontal cortices with left asymmetry. Activation within the dorsoposterior inferior frontal gyrus, a subregion of Broca's area, was negatively correlated with sound-blending accuracy. Our findings are consistent with prior studies ascribing a role of general sequencing, motor and articulatory programming, and vocal or subvocal articulatory rehearsal to this brain region.
Affiliation(s)
- C-Y Peter Chiu
- Department of Psychology, University of Cincinnati, Cincinnati, Ohio, USA.
203
Blumstein SE, Myers EB, Rissman J. The Perception of Voice Onset Time: An fMRI Investigation of Phonetic Category Structure. J Cogn Neurosci 2005; 17:1353-66. [PMID: 16197689; DOI: 10.1162/0898929054985473]
Abstract
This study explored the neural systems underlying the perception of phonetic category structure by investigating the perception of a voice onset time (VOT) continuum in a phonetic categorization task. Stimuli consisted of five synthetic speech stimuli which ranged in VOT from 0 msec ([da]) to 40 msec ([ta]). Results from 12 subjects showed that the neural system is sensitive to VOT differences of 10 msec and that details of phonetic category structure are retained throughout the phonetic processing stream. Both the left inferior frontal gyrus (IFG) and cingulate showed graded activation as a function of category membership with increasing activation as stimuli approached the phonetic category boundary. These results are consistent with the view that the left IFG is involved in phonetic decision processes, with the extent of activation influenced by increased resources devoted to resolving phonetic category membership and/or selecting between competing phonetic categories. Activation patterns in the cingulate suggest that it is sensitive to stimulus difficulty and resolving response conflict. In contrast, activation in the posterior left middle temporal gyrus and the left angular gyrus showed modulation of activation only to the “best fit” of the phonetic category, suggesting that these areas are involved in mapping sound structure to its phonetic representation. The superior temporal gyrus (STG) bilaterally showed weaker sensitivity to the differences in phonetic category structure, providing further evidence that the STG is involved in the early analysis of the sensory properties of speech.
Affiliation(s)
- Sheila E Blumstein
- Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912, USA.
204
Bitan T, Manor D, Morocz IA, Karni A. Effects of alphabeticality, practice and type of instruction on reading an artificial script: An fMRI study. Cogn Brain Res 2005; 25:90-106. [PMID: 15944143; DOI: 10.1016/j.cogbrainres.2005.04.014]
Abstract
In neuroimaging studies of word reading in natural scripts, the effect of alphabeticality is often confounded with the effect of practice. We used an artificial script to separately manipulate the effects of practice and alphabeticality following training with and without explicit letter instructions. Participants received multi-session training in reading nonsense words, written in an artificial script, wherein each phoneme was represented by two discrete symbols. Three training conditions were compared: alphabetical whole words with letter-decoding instruction (explicit), alphabetical whole words (implicit), and non-alphabetical whole words (arbitrary). Each participant was trained on the arbitrary condition and on one of the alphabetical conditions (explicit or implicit). fMRI scans were acquired after training during reading of trained words and relatively novel words in the alphabetical and arbitrary conditions. Our results showed greater activation in the explicit compared to the arbitrary condition, but only for relatively novel words, in the left posterior inferior frontal gyrus (IFG). In the implicit condition, the left posterior IFG was active for both trained and relatively novel words. These results indicate the involvement of the left posterior IFG in letter decoding, and suggest that reading of explicitly well-trained words did not rely on letter decoding, whereas in implicitly trained words letter decoding persisted into later stages. The superior parietal lobules showed reduced activation for items that received more practice, across all training conditions. Altogether, our results suggest that the alphabeticality of the word, the amount of practice and the type of instruction have independent and interacting effects on brain activation during reading.
Affiliation(s)
- Tali Bitan
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.
205
Katzir T, Misra M, Poldrack RA. Imaging phonology without print: assessing the neural correlates of phonemic awareness using fMRI. Neuroimage 2005; 27:106-15. [PMID: 15901490; DOI: 10.1016/j.neuroimage.2005.04.013]
Abstract
Acquisition of phonological processing skills, such as the ability to segment words into corresponding speech sounds, is critical to the development of efficient reading. Prior neuroimaging studies of phonological processing have often relied on auditory stimuli or print-mediated tasks that may be problematic for various theoretical and empirical reasons. For the current study, we developed a task to evaluate phonological processing that used visual stimuli but did not require interpretation of orthographic forms. This task requires the subject to retrieve the names of objects and to compare their first sounds; then, the subject must indicate if the initial sounds of the names of the pictures are the same. The phonological analysis task was compared to both a baseline matching task and a more complex control condition in which the participants evaluated two different pictures and indicated whether they represented the same object. The complex picture-matching condition controls for the visual complexity of the stimuli but does not require phonological analysis of the names of the objects. While both frontal and ventral posterior areas were activated in response to phonological analysis of the names of pictures, only inferior and superior frontal gyrus exhibited differential sensitivity to the phonological comparison task as compared to the complex picture-matching control task. These findings suggest that phonological processing that is not mediated by print relies primarily on frontal language processing areas among skilled readers.
Affiliation(s)
- Tami Katzir
- Harvard Graduate School of Education, Cambridge, MA 02138, USA
206
Rüschemeyer SA, Fiebach CJ, Kempe V, Friederici AD. Processing lexical semantic and syntactic information in first and second language: fMRI evidence from German and Russian. Hum Brain Mapp 2005; 25:266-86. [PMID: 15849713; PMCID: PMC6871675; DOI: 10.1002/hbm.20098]
Abstract
We introduce two experiments that explored syntactic and semantic processing of spoken sentences by native and non-native speakers. In the first experiment, the neural substrates corresponding to detection of syntactic and semantic violations were determined in native speakers of two typologically different languages using functional magnetic resonance imaging (fMRI). The results show that the underlying neural response of participants to stimuli across different native languages is quite similar. In the second experiment, we investigated how non-native speakers of a language process the same stimuli presented in the first experiment. First, the results show a more similar pattern of increased activation between native and non-native speakers in response to semantic violations than to syntactic violations. Second, the non-native speakers were observed to employ specific portions of the frontotemporal language network differently from those employed by native speakers. These regions included the inferior frontal gyrus (IFG), superior temporal gyrus (STG), and subcortical structures of the basal ganglia.
207
Rimol LM, Specht K, Weis S, Savoy R, Hugdahl K. Processing of sub-syllabic speech units in the posterior temporal lobe: An fMRI study. Neuroimage 2005; 26:1059-67. [PMID: 15894493; DOI: 10.1016/j.neuroimage.2005.03.028]
Abstract
The objective of this study was to investigate phonological processing in the brain by using sub-syllabic speech units with rapidly changing frequency spectra. We used isolated stop consonants extracted from natural speech consonant-vowel (CV) syllables, which were digitized and presented through headphones in a functional magnetic resonance imaging (fMRI) paradigm. The stop consonants were contrasted with CV syllables. In order to control for general auditory activation, we used duration- and intensity-matched noise as a third stimulus category. The subjects were seventeen right-handed, healthy male volunteers. BOLD activation responses were acquired on a 1.5-T MR scanner. The auditory stimuli were presented through MR compatible headphones, using an fMRI paradigm with clustered volume acquisition and 12 s repetition time. The consonant vs. noise comparison resulted in unilateral left lateralized activation in the posterior part of the middle temporal gyrus and superior temporal sulcus (MTG/STS). The CV syllable vs. noise comparison resulted in bilateral activation in the same regions, with a leftward asymmetry. The reversed comparisons, i.e., noise vs. speech stimuli, resulted in right hemisphere activation in the supramarginal and superior temporal gyrus, as well as right prefrontal activation. Since the consonant stimuli are unlikely to have activated a semantic-lexical processing system, it seems reasonable to assume that the MTG/STS activation represents phonetic/phonological processing. This may involve the processing of both spectral and temporal features considered important for phonetic encoding.
Affiliation(s)
- Lars M Rimol
- Department of Biological and Medical Psychology, Division of Cognitive Neuroscience, University of Bergen, BBB, 9. etg., Jonas Lies vei 91, N-5009 Bergen, Norway.
208
Burton MW, Locasto PC, Krebs-Noble D, Gullapalli RP. A systematic investigation of the functional neuroanatomy of auditory and visual phonological processing. Neuroimage 2005; 26:647-61. [PMID: 15955475; DOI: 10.1016/j.neuroimage.2005.02.024]
Abstract
Neuroimaging studies of auditory and visual phonological processing have revealed activation of the left inferior and middle frontal gyri. However, because of task differences in these studies (e.g., consonant discrimination versus rhyming), the extent to which this frontal activity is due to modality-specific linguistic processes or to more general task demands involved in the comparison and storage of stimuli remains unclear. An fMRI experiment investigated the functional neuroanatomical basis of phonological processing in discrimination and rhyming tasks across auditory and visual modalities. Participants made either "same/different" judgments on the final consonant or rhyme judgments on auditorily or visually presented pairs of words and pseudowords. Control tasks included "same/different" judgments on pairs of single tones or false fonts and on the final member in pairs of sequences of tones or false fonts. Although some regions produced expected modality-specific activation (i.e., left superior temporal gyrus in auditory tasks, and right lingual gyrus in visual tasks), several regions were active across modalities and tasks, including posterior inferior frontal gyrus (BA 44). Greater articulatory recoding demands for processing of pseudowords resulted in increased activation for pseudowords relative to other conditions in this frontal region. Task-specific frontal activation was observed for auditory pseudoword final consonant discrimination, likely due to increased working memory demands of selection (ventrolateral prefrontal cortex) and monitoring (mid-dorsolateral prefrontal cortex). Thus, the current study provides a systematic comparison of phonological tasks across modalities, with patterns of activation corresponding to the cognitive demands of performing phonological judgments on spoken and written stimuli.
Affiliation(s)
- Martha W Burton
- Department of Neurology, University of Maryland School of Medicine, 12-011 Bressler Research Building, 655 W Baltimore Street, Baltimore, MD 21201-1559, USA.
209
Tyler LK, Stamatakis EA, Post B, Randall B, Marslen-Wilson W. Temporal and frontal systems in speech comprehension: an fMRI study of past tense processing. Neuropsychologia 2005; 43:1963-74. [PMID: 16168736; DOI: 10.1016/j.neuropsychologia.2005.03.008]
Abstract
A prominent issue in cognitive neuroscience is whether language function is instantiated in the brain as a single undifferentiated process, or whether regions of relative specialisation can be demonstrated. The contrast between regular and irregular English verb inflection has been pivotal to this debate. Behavioural dissociations related to different lesion sites in brain-damaged patients suggest that processing regular and irregular past tenses involves different neural systems. Using event-related fMRI in a group of unimpaired young adults, we contrast processing of spoken regular and irregular past tense forms in a same-different judgement task, shown in earlier research with patients to engage left hemisphere language systems. An extensive fronto-temporal network, linking anterior cingulate (ACC), left inferior frontal cortex (LIFC) and bilateral superior temporal gyrus (STG), was preferentially activated for regularly inflected forms. Access to meaning from speech is supported by temporal cortex, but additional processing is required for forms that end in regular inflections, which differentially engage LIFC processes that support morpho-phonological segmentation and grammatical analysis.
Affiliation(s)
- Lorraine K Tyler
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK.
210
Ojanen V, Möttönen R, Pekkola J, Jääskeläinen IP, Joensuu R, Autti T, Sams M. Processing of audiovisual speech in Broca's area. Neuroimage 2005; 25:333-8. [PMID: 15784412; DOI: 10.1016/j.neuroimage.2004.12.001]
Abstract
We investigated cerebral processing of audiovisual speech stimuli in humans using functional magnetic resonance imaging (fMRI). Ten healthy volunteers were scanned with a 'clustered volume acquisition' paradigm at 3 T during observation of phonetically matching (e.g., visual and acoustic /y/) and conflicting (e.g., visual /a/ and acoustic /y/) audiovisual vowels. Both stimuli activated the sensory-specific auditory and visual cortices, along with the superior temporal, inferior frontal (Broca's area), premotor, and visual-parietal regions bilaterally. Phonetically conflicting vowels, contrasted with matching ones, specifically increased activity in Broca's area. Activity during phonetically matching stimuli, contrasted with conflicting ones, was not enhanced in any brain region. We suggest that the increased activity in Broca's area reflects processing of conflicting visual and acoustic phonetic inputs in partly disparate neuron populations. On the other hand, matching acoustic and visual inputs would converge on the same neurons.
Affiliation(s)
- Ville Ojanen
- Laboratory of Computational Engineering, Helsinki University of Technology, P.O. Box 3000, FIN-02015 HUT, Helsinki, Finland.
211
Skipper JI, Nusbaum HC, Small SL. Listening to talking faces: motor cortical activation during speech perception. Neuroimage 2005; 25:76-89. [PMID: 15734345; DOI: 10.1016/j.neuroimage.2004.11.006]
Abstract
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the amount of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.
Affiliation(s)
- Jeremy I Skipper
- Department of Psychology, The University of Chicago, Chicago, IL 60637, USA.
212
Dehaene-Lambertz G, Pallier C, Serniclaes W, Sprenger-Charolles L, Jobert A, Dehaene S. Neural correlates of switching from auditory to speech perception. Neuroimage 2005; 24:21-33. [PMID: 15588593; DOI: 10.1016/j.neuroimage.2004.09.039]
Abstract
Many people exposed to sinewave analogues of speech first report hearing them as electronic glissando and, later, when they switch into a 'speech mode', hearing them as syllables. This perceptual switch modifies their discrimination abilities, enhancing perception of differences that cross phonemic boundaries while diminishing perception of differences within phonemic categories. Using high-density evoked potentials and fMRI in a discrimination paradigm, we studied the changes in brain activity that are related to this change in perception. With ERPs, we observed that phonemic coding is faster than acoustic coding: The electrophysiological mismatch response (MMR) occurred earlier for a phonemic change than for an equivalent acoustic change. The MMR topography was also more asymmetric for a phonemic change than for an acoustic change. In fMRI, activations were also significantly asymmetric, favoring the left hemisphere in both perception modes. Furthermore, switching to the speech mode significantly enhanced activation in the posterior parts of the left superior gyrus and sulcus relative to the non-speech mode. When responses to a change of stimulus were studied, a cluster of voxels in the supramarginal gyrus was activated significantly more by a phonemic change than by an acoustic change. These results demonstrate that phoneme perception in adults relies on a specific and highly efficient left-hemispheric network, which can be activated in top-down fashion when processing ambiguous speech/non-speech stimuli.
Affiliation(s)
- Ghislaine Dehaene-Lambertz
- Laboratoire de Sciences Cognitives et Psycholinguistique (EHESS, ENS and CNRS UMR 8554), IFR 49, France.
213
Gold BT, Balota DA, Kirchhoff BA, Buckner RL. Common and Dissociable Activation Patterns Associated with Controlled Semantic and Phonological Processing: Evidence from fMRI Adaptation. Cereb Cortex 2005; 15:1438-50. [PMID: 15647526; DOI: 10.1093/cercor/bhi024]
Abstract
Recent evidence suggests specialization of anterior left inferior prefrontal cortex (aLIPC; approximately BA 45/47) for controlled semantics and of posterior LIPC (pLIPC; approximately BA 44/6) for controlled phonology. However, the more automated phonological tasks commonly used raise the possibility that some of the typically extensive aLIPC activation during semantic tasks may relate to controlled language processing beyond the semantic domain. In the present study, an event-related fMRI adaptation paradigm was employed that used a standard controlled semantic task and a phonological task that also emphasized controlled processing. When compared with letter (baseline) processing, significant fMRI task and adaptation effects in the aLIPC and pLIPC regions (approximately BA 45/47 and BA 44) were observed during both semantic and phonological processing, with aLIPC showing the strongest effects during semantic processing. A left frontal region (approximately BA 6) showed task and relative adaptation effects preferential for phonological processing, and a left temporal region (approximately BA 21) showed task and relative adaptation effects preferential for semantic processing. Our results demonstrate that aLIPC and pLIPC regions are involved in controlled processing across multiple language domains, arguing against a domain-specific LIPC model and for domain-preferentiality in left posterior frontal and temporal regions.
Affiliation(s)
- Brian T Gold
- Department of Anatomy and Neurobiology, Chandler Medical Center, University of Kentucky, Lexington, KY, USA
214
Doherty CP, West WC, Dilley LC, Shattuck-Hufnagel S, Caplan D. Question/statement judgments: an fMRI study of intonation processing. Hum Brain Mapp 2004; 23:85-98. [PMID: 15340931; PMCID: PMC6871843; DOI: 10.1002/hbm.20042]
Abstract
We examined changes in fMRI BOLD signal associated with question/statement judgments in an event-related paradigm to investigate the neural basis of processing one aspect of intonation. Subjects made judgments about digitized recordings of three types of utterances: questions with rising intonation (RQ; e.g., "She was talking to her father?"), statements with a falling intonation (FS; e.g., "She was talking to her father."), and questions with a falling intonation and a word order change (FQ; e.g., "Was she talking to her father?"). Functional echo planar imaging (EPI) scans were collected from 11 normal subjects. There was increased BOLD activity in bilateral inferior frontal and temporal regions for RQ over either FQ or FS stimuli. The study provides data relevant to the location of regions responsive to intonationally marked illocutionary differences between questions and statements.
Affiliation(s)
- Colin P Doherty
- Neuropsychology Laboratory and MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA.
215
Cohen L, Jobert A, Le Bihan D, Dehaene S. Distinct unimodal and multimodal regions for word processing in the left temporal cortex. Neuroimage 2004; 23:1256-70. [PMID: 15589091; DOI: 10.1016/j.neuroimage.2004.07.052]
Abstract
How are word recognition circuits organized in the left temporal lobe? We used functional magnetic resonance imaging (fMRI) to dissect cortical word-processing circuits using three diagnostic criteria: the capacity of an area (1) to respond to words in a single modality (visual or auditory) or in both modalities, (2) to modulate its response in a top-down manner as a function of the graphemic or phonemic emphasis of the task, and (3) to show repetition suppression in response to the conscious repetition of the target word within the same sensory modality or across different modalities. The results clarify the organization of visual and auditory word-processing streams. In particular, the visual word form area (VWFA) in the left occipitotemporal sulcus appears strictly as a visual unimodal area. It is, however, bordered by a second lateral inferotemporal area which is multimodal [lateral inferotemporal multimodal area (LIMA)]. Both areas might have been confounded in past work. Our results also suggest a possible homolog of the VWFA in the auditory stream, the auditory word form area, located in the left anterior superior temporal sulcus.
Affiliation(s)
- Laurent Cohen
- Institut de Neurologie, Hôpital de la Salpêtrière, AP-HP, Paris, France.
216
LoCasto PC, Krebs-Noble D, Gullapalli RP, Burton MW. An fMRI Investigation of Speech and Tone Segmentation. J Cogn Neurosci 2004; 16:1612-24. [PMID: 15601523; DOI: 10.1162/0898929042568433]
Abstract
Recent research strongly indicates that phonological tasks activate a subregion of the inferior frontal gyrus. The purpose of the present fMRI study was to investigate the extent to which activation of this region during phonological processing is due to speech processes per se such as articulatory recoding or to other cognitive task demands such as working memory. Thus, we compared activation patterns during segmentation of speech and tone sequences to a tone discrimination task. In particular, participants performed same/different judgments on pairs of words, pseudowords, and tone sequences that required segmentation of a continuous acoustic signal as well as tone pairs that did not require segmentation. Accuracy and reaction time data showed that speech and tone sequence segmentation conditions patterned more similarly to each other than to tone discrimination pairs. Analyses of group data revealed strong activation of the region at the border of the left inferior and middle frontal gyrus for all three segmentation conditions compared to tone discrimination, but no consistent differences were observed when word and pseudoword segmentation were directly contrasted. Analyses of individual subjects indicated that a large number of participants activated a small area of the middle frontal gyrus during the speech conditions compared to the sequences. These results suggest that a significant portion of active frontal areas is recruited for extracting acoustic information and maintaining it in memory for decision. However, some regions at the border of the inferior/middle frontal gyrus may be unique to speech segmentation.
Affiliation(s)
- Paul C LoCasto
- University of Maryland School of Medicine, Baltimore, MD 21201-1559, USA
|
217
|
Chee MWL, Soon CS, Lee HL, Pallier C. Left insula activation: a marker for language attainment in bilinguals. Proc Natl Acad Sci U S A 2004; 101:15265-70. [PMID: 15469927 PMCID: PMC523445 DOI: 10.1073/pnas.0403703101] [Citation(s) in RCA: 88] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Several lines of evidence suggest the importance of phonological working memory (PWM) in language acquisition. We investigated the neural correlates of PWM in young adults who were under compelling social pressure to be bilingual. Equal bilinguals had high proficiency in English and Chinese as measured by a standardized examination, whereas unequal bilinguals were proficient in English but not Chinese. Both groups were matched on several measures of nonverbal intelligence and working memory. In-scanner behavioral results did not show between-group differences. Of the regions showing load-dependent increments in activation, the left insula showed greater activation in equal bilinguals. Unequal bilinguals showed greater task-related deactivation in the anterior medial frontal region and greater anterior cingulate activation. Although unequal bilinguals kept apace with equal bilinguals in the simple PWM task, the differential cortical activations suggest that more optimal engagement of PWM in the latter may correlate with better second-language attainment.
Affiliation(s)
- Michael W L Chee
- Cognitive Neuroscience Laboratory, Singapore General Hospital, Singapore 169611.
|
218
|
Abstract
Time is a fundamental dimension of behavior and as such underlies the perception and production of speech. This paper reviews patient and neuroimaging studies that investigated brain structures that support temporal aspects of speech. The left frontal cortex, the basal ganglia, and the cerebellum are structures that have been implicated repeatedly. A comparison with the structures involved in the timing of non-speech events (e.g., tones, lights, finger movements) suggests both commonalities and differences: while the basal ganglia and the cerebellum contribute to the timing of both speech and non-speech events, the contribution of the left frontal cortex seems to be specific to speech or to rapidly changing acoustic information. Motivated by these commonalities and differences, this paper presents assumptions about the function of the basal ganglia, the cerebellum, and the cortex in the timing of speech.
Affiliation(s)
- Annett Schirmer
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany.
|
219
|
Richter S, Kaiser O, Hein-Kropp C, Dimitrova A, Gizewski E, Beck A, Aurich V, Ziegler W, Timmann D. Preserved verb generation in patients with cerebellar atrophy. Neuropsychologia 2004; 42:1235-46. [PMID: 15178175 DOI: 10.1016/j.neuropsychologia.2004.01.006] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2003] [Revised: 01/27/2004] [Accepted: 01/29/2004] [Indexed: 11/16/2022]
Abstract
A role for the right cerebellar hemisphere in linguistic functions has been suggested. Nevertheless, studies of verb generation in cerebellar patients have provided inconsistent results. The aim of the present study was to examine verb generation in a larger group of cerebellar patients with well-defined lesions. Ten subjects with degenerative cerebellar disorders and ten healthy matched controls participated. Subjects had to generate verbs to the blocked presentation of photographs of objects (i.e., four blocks of sixteen objects). As a control condition, the objects had to be named. Furthermore, dysarthria was quantified by means of sentence production and syllable repetition tasks. Volumetric analysis of individual 3D-MR scans was performed to quantify cerebellar atrophy. Cerebellar patients were slower in the sentence production and syllable repetition tasks, and cerebellar volume was decreased compared to controls. Despite cerebellar atrophy and dysarthria, the answers produced did not differ between patients and controls. In addition, both groups showed the same decrease in verbal reaction time over blocks (i.e., learning). The results suggest that the role of the cerebellum in verb generation is less pronounced than previously suggested.
Affiliation(s)
- S Richter
- Department of Neurology, University of Duisburg-Essen, Hufelandstrasse 55, 45122 Essen, Germany.
|
220
|
Kaiser J, Hertrich I, Ackermann H, Mathiak K, Lutzenberger W. Hearing lips: gamma-band activity during audiovisual speech perception. Cereb Cortex 2004; 15:646-53. [PMID: 15342432 DOI: 10.1093/cercor/bhh166] [Citation(s) in RCA: 67] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Auditory pattern changes have been shown to elicit increases in magnetoencephalographic gamma-band activity (GBA) over left inferior frontal cortex, forming part of the putative auditory ventral "what" processing stream. The present study employed a McGurk-type paradigm to assess whether GBA would be associated with subjectively perceived changes even when auditory stimuli remain unchanged. Magnetoencephalograms were recorded in 16 human subjects during audiovisual mismatch perception. Both infrequent visual (auditory /ta/ + visual /pa/) and acoustic deviants (auditory/pa/ + visual /ta/) were compared with frequent audiovisual standards (auditory /ta/ and visual /ta/). Statistical probability mapping revealed spectral amplitude increases at approximately 75 and approximately 78 Hz to visual deviants. GBA to visual deviants peaked 160 ms after auditory stimulus onset over posterior parietal cortex, at 270 ms over occipital areas and at 320 ms over left inferior frontal cortex. The latter GBA enhancement was consistent with the increase observed previously to pure acoustic mismatch, supporting a role of left inferior frontal cortex for the representation of perceived auditory pattern change. The preceding gamma-band changes over posterior areas may reflect processing of incongruent lip movements in visual motion areas and back-projections to earlier visual cortex.
Affiliation(s)
- Jochen Kaiser
- MEG-Center, Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Gartenstrasse 29, 72074 Tübingen, Germany.
|
221
|
Rowan A, Liégeois F, Vargha-Khadem F, Gadian D, Connelly A, Baldeweg T. Cortical lateralization during verb generation: a combined ERP and fMRI study. Neuroimage 2004; 22:665-75. [PMID: 15193595 DOI: 10.1016/j.neuroimage.2004.01.034] [Citation(s) in RCA: 34] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2003] [Revised: 01/20/2004] [Accepted: 01/21/2004] [Indexed: 10/26/2022] Open
Abstract
Lateralization of scalp-recorded event-related potentials (ERPs) and functional MRI (fMRI) activation was investigated using a verb generation task in 10 healthy right-handed adults. ERPs showed an early transient positivity in the left inferior temporal region (500-1250 ms) following auditory presentation of the stimulus noun. A sustained slow cortical negativity of later onset (1250-3000 ms) was then recorded, most pronounced over left inferior frontal regions. fMRI data were in agreement with both ERP effects, showing left lateralized activation in inferior and superior temporal as well as inferior frontal cortices. Lateralized ERP effects occurred during the verb generation task but not during passive word listening or during word- and nonword repetition. Thus, ERPs and fMRI provided convergent evidence regarding language lateralization, with ERPs revealing the temporal sequence of posterior to anterior cortical activation during semantic retrieval.
Affiliation(s)
- Alison Rowan
- Developmental Cognitive Neuroscience Unit, Institute of Child Health, University College London, London, WC1N 1EH, UK
|
222
|
Seki A, Okada T, Koeda T, Sadato N. Phonemic manipulation in Japanese: an fMRI study. Brain Res Cogn Brain Res 2004; 20:261-72. [PMID: 15183397 DOI: 10.1016/j.cogbrainres.2004.03.012] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/12/2004] [Indexed: 11/30/2022]
Abstract
Phonological awareness is the ability to manipulate abstract phonological representations of language and is crucial to the process of learning to read. The neural substrates underlying this ability appear to be modality-independent, at least in alphabetic languages. The Japanese language has a different orthographic "kana" system, in which each "kana" character strictly corresponds to a syllable. To investigate the neural substrates underlying phonological manipulation of Japanese, functional magnetic resonance imaging (fMRI) was used. Neuroimaging data were obtained from healthy adult volunteers during auditory and visual vowel-exchange tasks, which were identical except for the modality of stimulus presentation: a voice or Japanese "kana" characters. The cerebellar vermis was activated by vowel-exchange tasks in both modalities. The posterior parts of the superior temporal sulcus (STS) were active during the auditory tasks, suggesting that phonological representations of auditory stimuli are manipulated in this area. These findings are consistent with previous studies of alphabetic languages. In contrast, the intraparietal sulci, which have been implicated in visuospatial tasks, were active during the visual tasks. This modality-dependent activation may indicate that the simple orthographic rule of Japanese allows an alternate visual strategy for conducting the phonological awareness task, bypassing manipulation of phonological representations.
Affiliation(s)
- Ayumi Seki
- Institute of Neurological Sciences, Faculty of Medicine, Tottori University, Yonago, Japan
|
223
|
Rönnberg J, Rudner M, Ingvar M. Neural correlates of working memory for sign language. Brain Res Cogn Brain Res 2004; 20:165-82. [PMID: 15183389 DOI: 10.1016/j.cogbrainres.2004.03.002] [Citation(s) in RCA: 58] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/01/2004] [Indexed: 11/27/2022]
Abstract
Eight, early bilingual, sign language interpreters participated in a PET study, which compared working memory for Swedish Sign Language (SSL) with working memory for audiovisual Swedish speech. The interaction between language modality and memory task was manipulated in a within-subjects design. Overall, the results show a previously undocumented, language modality-specific working memory neural architecture for SSL, which relies on a network of bilateral temporal, bilateral parietal and left premotor activation. In addition, differential activation in the right cerebellum was found for the two language modalities. Similarities across language modality are found in Broca's area for all tasks and in the anterior left inferior frontal lobe for semantic retrieval. The bilateral parietal activation pattern for sign language bears similarity to neural activity during, e.g., nonverbal visuospatial tasks, and it is argued that this may reflect generation of a virtual spatial array. Aspects of the data suggesting an age of acquisition effect are also considered. Furthermore, it is discussed why the pattern of parietal activation cannot be explained by factors relating to perception, production or recoding of signs, or to task difficulty. The results are generally compatible with Wilson's [Psychon. Bull. Rev. 8 (2001) 44] account of working memory.
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences, Linköpings universitet, Swedish Institute for Disability Research, S-581 83 Linköping, Sweden.
|
224
|
Cohen L, Dehaene S. Specialization within the ventral stream: the case for the visual word form area. Neuroimage 2004; 22:466-76. [PMID: 15110040 DOI: 10.1016/j.neuroimage.2003.12.049] [Citation(s) in RCA: 478] [Impact Index Per Article: 23.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2003] [Revised: 12/11/2003] [Accepted: 12/23/2003] [Indexed: 11/18/2022] Open
Abstract
Is there specialization for visual word recognition within the visual ventral stream of literate human adults? We review the evidence for a specialized "visual word form area" and critically examine some of the arguments recently placed against this hypothesis. Three distinct forms of specialization must be distinguished: functional specialization, reproducible localization, and regional selectivity. Examination of the literature with this theoretical division in mind indicates that reading activates a precise subpart of the left ventral occipitotemporal sulcus, and that patients with pure alexia consistently exhibit lesions of this region (reproducible localization). Second, this region implements processes adequate for reading in a specific script, such as invariance across upper- and lower-case letters, and its lesion results in the selective loss of reading-specific processes (functional specialization). Third, the issue of regional selectivity, namely, the existence of putative cortical patches dedicated to letter and word recognition, cannot be resolved by positron emission tomography or lesion data, but requires high-resolution neuroimaging techniques. The available evidence from single-subject fMRI and intracranial recordings suggests that some cortical sites respond preferentially to letter strings over other categories of visual stimuli such as faces or objects, though the preference is often relative rather than absolute. We conclude that learning to read results in the progressive development of an inferotemporal region increasingly responsive to visual words, which is aptly named the visual word form area (VWFA).
|
225
|
Hickok G, Poeppel D. Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 2004; 92:67-99. [PMID: 15037127 DOI: 10.1016/j.cognition.2003.10.011] [Citation(s) in RCA: 1330] [Impact Index Per Article: 66.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2001] [Revised: 06/24/2002] [Accepted: 10/23/2003] [Indexed: 11/28/2022]
Abstract
Despite intensive work on language-brain relations, and a fairly impressive accumulation of knowledge over the last several decades, there has been little progress in developing large-scale models of the functional anatomy of language that integrate neuropsychological, neuroimaging, and psycholinguistic data. Drawing on relatively recent developments in the cortical organization of vision, and on data from a variety of sources, we propose a new framework for understanding aspects of the functional anatomy of language which moves towards remedying this situation. The framework posits that early cortical stages of speech perception involve auditory fields in the superior temporal gyrus bilaterally (although asymmetrically). This cortical processing system then diverges into two broad processing streams, a ventral stream, which is involved in mapping sound onto meaning, and a dorsal stream, which is involved in mapping sound onto articulatory-based representations. The ventral stream projects ventro-laterally toward inferior posterior temporal cortex (posterior middle temporal gyrus) which serves as an interface between sound-based representations of speech in the superior temporal gyrus (again bilaterally) and widely distributed conceptual representations. The dorsal stream projects dorso-posteriorly involving a region in the posterior Sylvian fissure at the parietal-temporal boundary (area Spt), and ultimately projecting to frontal regions. This network provides a mechanism for the development and maintenance of "parity" between auditory and motor representations of speech. Although the proposed dorsal stream represents a very tight connection between processes involved in speech perception and speech production, it does not appear to be a critical component of the speech perception process under normal (ecologically natural) listening conditions, that is, when speech input is mapped onto a conceptual representation. We also propose some degree of bi-directionality in both the dorsal and ventral pathways. We discuss some recent empirical tests of this framework that utilize a range of methods. We also show how damage to different components of this framework can account for the major symptom clusters of the fluent aphasias, and discuss some recent evidence concerning how sentence-level processing might be integrated into the framework.
|
226
|
Small SL, Nusbaum HC. On the neurobiological investigation of language understanding in context. Brain Lang 2004; 89:300-11. [PMID: 15068912 DOI: 10.1016/s0093-934x(03)00344-4] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/12/2003] [Indexed: 05/24/2023]
Abstract
There are two significant problems in using functional neuroimaging methods to study language. Improving the state of functional brain imaging will depend on understanding how the dependent measure of brain imaging differs from behavioral dependent measures (the "dependent measure problem") and how the activation of the motor system may be confounded with non-motor aspects of processing in certain experimental designs (the "motor output problem"). To address these problems, it may be necessary to shift the focus of language research from the study of linguistic competence to the understanding of language use. This will require investigations of language processing in full multi-modal and environmental context, monitoring of natural behaviors, novel experimental design, and network-based analysis. Such a combined naturalistic approach could lead to tremendous new insights into language and the brain.
Affiliation(s)
- Steven L Small
- Department of Neurology, and Committee on Computational Neuroscience, Brain Research Imaging Center, The University of Chicago, 5841 South Maryland Avenue, MC-2030, Chicago, IL 60637, USA.
|
227
|
Boatman D. Cortical bases of speech perception: evidence from functional lesion studies. Cognition 2004; 92:47-65. [PMID: 15037126 DOI: 10.1016/j.cognition.2003.09.010] [Citation(s) in RCA: 70] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2002] [Revised: 04/02/2002] [Accepted: 09/30/2002] [Indexed: 11/17/2022]
Abstract
Functional lesion studies have yielded new information about the cortical organization of speech perception in the human brain. We will review a number of recent findings, focusing on studies of speech perception that use the techniques of electrocortical mapping by cortical stimulation and hemispheric anesthetization by intracarotid amobarbital. Implications for recent developments in neuroimaging studies of speech perception will be discussed. This discussion will provide the framework for a developing model of the cortical circuitry critical for speech perception.
Affiliation(s)
- Dana Boatman
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA.
|
228
|
Hernandez AE, Kotz SA, Hofmann J, Valentin VV, Dapretto M, Bookheimer SY. The neural correlates of grammatical gender decisions in Spanish. Neuroreport 2004; 15:863-6. [PMID: 15073532 DOI: 10.1097/00001756-200404090-00026] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
In the current study, nine participants were asked to make gender decisions for a set of Spanish nouns while being scanned with functional MRI (fMRI). Words were chosen in which there is a direct mapping between ending and gender ("transparent" items such as carro(masc) or casa(fem)) and those in which there is no direct relationship ("opaque" items such as fuente(fem) or arroz(masc)). Direct comparisons between opaque and transparent words revealed increased activity in left BA44/45 and BA44/6, as well as bilateral activation near BA47/insula and the anterior cingulate gyrus. These results reveal activity in areas previously found to be devoted to articulation of the determiner and to morphological processing. Taken together, they support the notion that gender decisions for opaque items require deeper and more effortful processing during the retrieval of lexical and syntactic information.
Affiliation(s)
- Arturo E Hernandez
- Department of Psychology, University of Houston, 126 Heyne Building, Houston, TX 77204-5022, USA.
|
229
|
Mathiak K, Hertrich I, Grodd W, Ackermann H. Discrimination of temporal information at the cerebellum: functional magnetic resonance imaging of nonverbal auditory memory. Neuroimage 2004; 21:154-62. [PMID: 14741652 DOI: 10.1016/j.neuroimage.2003.09.036] [Citation(s) in RCA: 71] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
Abstract
Until recently, the cerebellum was held to play its chief role in motor control. By contrast, Keele and Ivry (1990) proposed that it may subserve time estimation within the perceptual domain as well. In accordance with this suggestion, speech perception requiring minute differentiation of time intervals was found to be compromised by cerebellar pathology, and a subsequent functional magnetic resonance imaging (fMRI) study found hemodynamic activation of the right neocerebellum under these conditions. In the current fMRI investigation, a non-speech task involving duration storage and comparison yielded significant hemodynamic responses within the lateral Crus I area of the right cerebellar hemisphere. Concomitantly, a left prefrontal cluster was observed. The present fMRI study employed single-shot double-echo echo-planar imaging (EPI) to reduce image distortion and acquisition time with whole-brain coverage (TE = 28 and 66 ms, TR = 5 s, 28 slices, TA = 2.8 s). Twelve healthy subjects performed two tasks: identifying pauses between tones as "short" or "long" (30-130 ms) and deciding which of two successive pauses was longer. The activation pattern in the discrimination task was analogous to that seen during speech perception and verbal working memory (WM) tasks. We suggest that the storage of precise temporal structures relies on a cerebellar-prefrontal loop. This network allows for temporal organization of verbal sequences and phoneme encoding based on durational operations in a linguistic context.
Affiliation(s)
- Klaus Mathiak
- Department of Neurology, University of Tübingen, D-72076, Tübingen, Germany.
|
230
|
Abstract
Recent neuroimaging studies provide evidence for a shared neural network for phonological processing in language production and comprehension. The temporal dynamics of this network during comprehension have been investigated by Thierry et al., who showed a primacy of Wernicke's over Broca's area. In the present study, we demonstrate the reversed pattern for language production. These results can be interpreted with respect to the functionality of the different regions within the shared network, with Wernicke's area being the sound form store and Broca's area a processor necessary to extract relevant phonological information from that store.
Affiliation(s)
- St Heim
- Max Planck Institute of Cognitive Neuroscience, 04303 Leipzig, Germany.
|
231
|
Abstract
Earlier formulations of the relation of language and the brain provided oversimplified accounts of the nature of language disorders, classifying patients into syndromes characterized by the disruption of sensory or motor word representations or by the disruption of syntax or semantics. More recent neuropsychological findings, drawn mainly from case studies, provide evidence regarding the various levels of representations and processes involved in single-word and sentence processing. Lesion data and neuroimaging findings are converging to some extent in providing localization of these components of language processing, particularly at the single-word level. Much work remains to be done in developing precise theoretical accounts of sentence processing that can accommodate the observed patterns of breakdown. Such theoretical developments may provide a means of accommodating the seemingly contradictory findings regarding the neural organization of sentence processing.
Affiliation(s)
- Randi C Martin
- Psychology Department, Rice University, Houston, Texas 77251-1892, USA.
|
232
|
Binder JR, Liebenthal E, Possing ET, Medler DA, Ward BD. Neural correlates of sensory and decision processes in auditory object identification. Nat Neurosci 2004; 7:295-301. [PMID: 14966525 DOI: 10.1038/nn1198] [Citation(s) in RCA: 353] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2003] [Accepted: 01/23/2004] [Indexed: 11/09/2022]
Abstract
Physiological studies of auditory perception have not yet clearly distinguished sensory from decision processes. In this experiment, human participants identified speech sounds masked by varying levels of noise while blood oxygenation signals in the brain were recorded with functional magnetic resonance imaging (fMRI). Accuracy and response time were used to characterize the behavior of sensory and decision components of this perceptual system. Oxygenation signals in a cortical subregion just anterior and lateral to primary auditory cortex predicted accuracy of sound identification, whereas signals in an inferior frontal region predicted response time. Our findings provide neurophysiological evidence for a functional distinction between sensory and decision mechanisms underlying auditory object identification. The present results also indicate a link between inferior frontal lobe activation and response-selection processes during auditory perception tasks.
Affiliation(s)
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, 9200 W. Wisconsin Avenue, Milwaukee, Wisconsin 53226, USA.
|
233
|
Scott SK, Rosen S, Wickham L, Wise RJS. A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception. J Acoust Soc Am 2004; 115:813-21. [PMID: 15000192 DOI: 10.1121/1.1639336] [Citation(s) in RCA: 102] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Positron emission tomography (PET) was used to investigate the neural basis of the comprehension of speech in unmodulated noise ("energetic" masking, dominated by effects at the auditory periphery), and when presented with another speaker ("informational" masking, dominated by more central effects). Each type of signal was presented at four different signal-to-noise ratios (SNRs) (+3, 0, -3, -6 dB for the speech-in-speech, +6, +3, 0, -3 dB for the speech-in-noise), with listeners instructed to listen for meaning to the target speaker. Consistent with behavioral studies, there was SNR-dependent activation associated with the comprehension of speech in noise, with no SNR-dependent activity for the comprehension of speech-in-speech (at low or negative SNRs). There was, in addition, activation in bilateral superior temporal gyri which was associated with the informational masking condition. The extent to which this activation of classical "speech" areas of the temporal lobes might delineate the neural basis of the informational masking is considered, as is the relationship of these findings to the interfering effects of unattended speech and sound on more explicit working memory tasks. This study is a novel demonstration of candidate neural systems involved in the perception of speech in noisy environments, and of the processing of multiple speakers in the dorso-lateral temporal lobes.
Affiliation(s)
- Sophie K Scott
- Department of Psychology, University College London, London WC1E 6BT, United Kingdom.
|
234
|
Golestani N, Zatorre RJ. Learning new sounds of speech: reallocation of neural substrates. Neuroimage 2004; 21:494-506. [PMID: 14980552 DOI: 10.1016/j.neuroimage.2003.09.071] [Citation(s) in RCA: 151] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2003] [Revised: 08/05/2003] [Accepted: 09/12/2003] [Indexed: 10/26/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) was used to investigate changes in brain activity related to phonetic learning. Ten monolingual English-speaking subjects were scanned while performing an identification task both before and after five sessions of training with a Hindi dental-retroflex nonnative contrast. Behaviorally, training resulted in an improvement in the ability to identify the nonnative contrast. Imaging results suggest that the successful learning of a nonnative phonetic contrast results in the recruitment of the same areas that are involved during the processing of native contrasts, including the left superior temporal gyrus, insula-frontal operculum, and inferior frontal gyrus. Additionally, results of correlational analyses between behavioral improvement and the blood-oxygenation-level-dependent (BOLD) signal obtained during the posttraining Hindi task suggest that the degree of success in learning is accompanied by more efficient neural processing in classical frontal speech regions, and by a reduction of deactivation relative to a noise baseline condition in left parietotemporal speech regions.
Affiliation(s)
- Narly Golestani
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, Canada.
|
235
|
Gandour J, Xu Y, Wong D, Dzemidzic M, Lowe M, Li X, Tong Y. Neural correlates of segmental and tonal information in speech perception. Hum Brain Mapp 2004; 20:185-200. [PMID: 14673803 PMCID: PMC6872106 DOI: 10.1002/hbm.10137] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
The Chinese language provides an optimal window for investigating both segmental and suprasegmental units. The aim of this cross-linguistic fMRI study is to elucidate neural mechanisms involved in extraction of Chinese consonants, rhymes, and tones from syllable pairs that are distinguished by only one phonetic feature (minimal) vs. those that are distinguished by two or more phonetic features (non-minimal). Triplets of Chinese monosyllables were constructed for three tasks comparing consonants, rhymes, and tones. Each triplet consisted of two target syllables with an intervening distracter. Ten Chinese and English subjects were asked to selectively attend to targeted sub-syllabic components and make same-different judgments. Direct between-group comparisons in both minimal and non-minimal pairs reveal increased activation for the Chinese group in predominantly left-sided frontal, parietal, and temporal regions. Within-group comparisons of non-minimal and minimal pairs show that frontal and parietal activity varies for each sub-syllabic component. In the frontal lobe, the Chinese group shows bilateral activation of the anterior middle frontal gyrus (MFG) for rhymes and tones only. Within-group comparisons of consonants, rhymes, and tones show that rhymes induce greater activation in the left posterior MFG for the Chinese group when compared to consonants and tones in non-minimal pairs. These findings collectively support the notion of a widely distributed cortical network underlying different aspects of phonological processing. This neural network is sensitive to the phonological structure of a listener's native language. Hum. Brain Mapping 20:185-200, 2003.
Affiliation(s)
- Jack Gandour
- Department of Audiology and Speech Sciences, Purdue University, West Lafayette, Indiana, USA.
236
Breier JI, Simos PG, Fletcher JM, Castillo EM, Zhang W, Papanicolaou AC. Abnormal activation of temporoparietal language areas during phonetic analysis in children with dyslexia. Neuropsychology 2004; 17:610-21. [PMID: 14599274 DOI: 10.1037/0894-4105.17.4.610] [Citation(s) in RCA: 51] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
Event-related magnetic fields were recorded using magnetoencephalography in children with (n=12) and without (n=11) dyslexia while they discriminated between pairs of syllables from a voice onset time series (/ga/-/ka/). Nonimpaired readers exhibited left-hemisphere predominance of activity after the resolution of the N1m, whereas children with dyslexia experienced a sharp peak of relative activation in right temporoparietal areas between 300 and 700 ms post-stimulus onset. Increased relative activation in right temporoparietal areas was correlated with reduced performance on phonological processing measures. Results are consistent with the notion that deficits in appreciating the sound structure of both written and spoken language are associated with abnormal neurophysiological activity in temporoparietal language areas in children with dyslexia.
Affiliation(s)
- Joshua I Breier
- Department of Neurosurgery, Division of Clinical Neurosciences, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA.
237
Cohen L, Henry C, Dehaene S, Martinaud O, Lehéricy S, Lemer C, Ferrieux S. The pathophysiology of letter-by-letter reading. Neuropsychologia 2004; 42:1768-80. [PMID: 15351626 DOI: 10.1016/j.neuropsychologia.2004.04.018] [Citation(s) in RCA: 104] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2004] [Revised: 04/28/2004] [Accepted: 04/30/2004] [Indexed: 11/20/2022]
Abstract
Pure alexia is a frequent and incapacitating consequence of left occipitotemporal lesions. It is thought to result from the disruption or the disconnection of the visual word form area (VWFA), a region reproducibly located within the left occipito-temporal sulcus, and encoding the abstract identity of strings of visual letters. Alexic patients often retain effective single letter recognition abilities, and develop an effortful letter-by-letter reading strategy which is the basis of most rehabilitation techniques. We study a patient who developed letter-by-letter reading following the surgical removal of left occipito-temporal regions. Using anatomical and functional MRI in the patient and in normal controls, we show that alexia resulted from the deafferentation of left fusiform cortex, and we analyze the network of brain regions subtending letter-by-letter reading. We propose that during letter-by-letter reading (1) letters are identified in the intact right-hemispheric visual system, with a central role for the region symmetrical to the VWFA; (2) letters are serially transferred to the left hemisphere through the intact segment of the corpus callosum; (3) word identity is eventually recovered in the left hemisphere through verbal working memory processes involving inferior frontal and supramarginal cortex.
Affiliation(s)
- Laurent Cohen
- Institut de Neurologie, Hôpital de la Salpêtrière, 47/83 Bd de l'Hôpital, 75651 Paris CEDEX 13, France.
238
Valaki CE, Maestu F, Simos PG, Zhang W, Fernandez A, Amo CM, Ortiz TM, Papanicolaou AC. Cortical organization for receptive language functions in Chinese, English, and Spanish: a cross-linguistic MEG study. Neuropsychologia 2004; 42:967-79. [PMID: 14998711 DOI: 10.1016/j.neuropsychologia.2003.11.019] [Citation(s) in RCA: 38] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2003] [Revised: 10/01/2003] [Accepted: 11/11/2003] [Indexed: 11/28/2022]
Abstract
Chinese differs from Indo-European languages in both its written and spoken forms. As a tonal language, Chinese uses tones to convey lexically meaningful information. The current study examines patterns of neurophysiological activity in temporal and temporoparietal brain areas as speakers of two Indo-European languages (Spanish and English) and speakers of Mandarin-Chinese were engaged in a spoken-word recognition task that is used clinically for the presurgical determination of hemispheric dominance for receptive language functions. Brain magnetic activation profiles were obtained from 92 healthy adult volunteers: 30 monolingual native speakers of Mandarin-Chinese, 20 native speakers of Spanish, and 42 native speakers of American English. Activation scans were acquired in two different whole-head MEG systems using identical testing methods. Results indicate that (a) the degree of hemispheric asymmetry in the duration of neurophysiological activity in temporal and temporoparietal regions was reduced in the Chinese group, (b) the proportion of individuals who showed bilaterally symmetric activation was significantly higher in this group, and (c) group differences in functional hemispheric asymmetry were first noted after the initial sensory processing of the word stimuli. Furthermore, group differences in the degree of hemispheric asymmetry were primarily due to a greater degree of activation in the right temporoparietal region in the Chinese group, suggesting increased participation of this region in spoken word recognition in Mandarin-Chinese.
Affiliation(s)
- C E Valaki
- Facultad de Medicina, Centro de Magnetoencefalografia Dr. Perez Modrego, Universidad Complutense de Madrid, Pabellon No. 8, Avenida Complutense, Madrid, Spain.
239
Abstract
Languages differ depending on the set of basic sounds they use (the inventory of consonants and vowels) and on the way in which these sounds can be combined to make up words and phrases (phonological grammar). Previous research has shown that our inventory of consonants and vowels affects the way in which our brains decode foreign sounds (Goto, 1971; Näätänen et al., 1997; Kuhl, 2000). Here, we show that phonological grammar has an equally potent effect. We build on previous research, which shows that stimuli that are phonologically ungrammatical are assimilated to the closest grammatical form in the language (Dupoux et al., 1999). In a cross-linguistic design using French and Japanese participants and a fast event-related functional magnetic resonance imaging (fMRI) paradigm, we show that phonological grammar involves the left superior temporal and the left anterior supramarginal gyri, two regions previously associated with the processing of human vocal sounds.
240
Maestú F, Simos PG, Campo P, Fernández A, Amo C, Paul N, González-Marqués J, Ortiz T. Modulation of brain magnetic activity by different verbal learning strategies. Neuroimage 2003; 20:1110-21. [PMID: 14568480 DOI: 10.1016/s1053-8119(03)00309-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2002] [Revised: 04/16/2003] [Accepted: 05/14/2003] [Indexed: 10/27/2022] Open
Abstract
In this study we examined spatiotemporal profiles of brain activity in the context of tasks designed to engage different verbal learning strategies (serial order, phonological, and semantic). The profile of activation associated with the serial-order strategy, which resulted in poor recall performance, featured early activation of the inferior frontal, sensorimotor, and insular region in the left hemisphere, between 200 and 400 ms after stimulus onset. Subsequently, activation was more prominent in dorsolateral prefrontal cortices bilaterally. In contrast, activation profiles associated with the phonological strategy featured predominantly activation of the superior temporal gyrus in the left hemisphere between 500 and 600 ms. Predominant activation of the left middle temporal gyrus, between 500 and 700 ms, was the key feature of the activation profile observed when the semantic elaboration strategy was utilized. These results suggest that different brain circuits are engaged to support learning of new verbal information as a function of the level and type of initial processing applied to the stimuli.
Affiliation(s)
- Fernando Maestú
- Centro de Magnetoencefalografia Dr Pérez Modrego, Universidad Complutense de Madrid, Madrid, Spain.
241
Kotz SA, Meyer M, Alter K, Besson M, von Cramon DY, Friederici AD. On the lateralization of emotional prosody: an event-related functional MR investigation. Brain Lang 2003; 86:366-376. [PMID: 12972367 DOI: 10.1016/s0093-934x(02)00532-1] [Citation(s) in RCA: 205] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
In order to investigate the lateralization of emotional speech we recorded the brain responses to three emotional intonations in two conditions, i.e., "normal" speech and "prosodic" speech (i.e., speech with no linguistic meaning, but retaining the 'slow prosodic modulations' of speech). Participants listened to semantically neutral sentences spoken with a positive, neutral, or negative intonation in both conditions and judged how positive, negative, or neutral the intonation was on a five-point scale. Core peri-sylvian language areas, as well as some frontal and subcortical areas were activated bilaterally in the normal speech condition. In contrast, a bilateral fronto-opercular region was active when participants listened to prosodic speech. Positive and negative intonations elicited a bilateral fronto-temporal and subcortical pattern in the normal speech condition, and more frontal activation in the prosodic speech condition. The current results call into question an exclusive right hemisphere lateralization of emotional prosody and expand patient data on the functional role of the basal ganglia during the perception of emotional prosody.
Affiliation(s)
- Sonja A Kotz
- Max-Planck-Institute of Cognitive Neuroscience, Stephanstrasse 1a, P.O. Box 500355, Leipzig D-04317, Germany.
242
Abstract
Using fMRI, we sought to determine whether the posterior, superior portion of Broca's area performs operations on phoneme segments specifically or implements processes general to sequencing discrete units. Twelve healthy volunteers performed two sequence manipulation tasks and one matching task, using strings of syllables and hummed notes. The posterior portion of Broca's area responded specifically to the sequence manipulation tasks, independent of whether the stimuli were composed of phonemes or hummed notes. In contrast, the left supramarginal gyrus was somewhat more specific to sequencing phoneme segments. These results suggest a functional dissociation of the canonical left hemisphere language regions encompassing the "phonological loop," with the left posterior inferior frontal gyrus responding not to the sound structure of language but rather to sequential operations that may underlie the ability to form words out of dissociable elements.
Affiliation(s)
- Jenna R Gelfand
- Brain Mapping Center, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, USA
243
Crinion JT, Lambon-Ralph MA, Warburton EA, Howard D, Wise RJS. Temporal lobe regions engaged during normal speech comprehension. Brain 2003; 126:1193-201. [PMID: 12690058 DOI: 10.1093/brain/awg104] [Citation(s) in RCA: 199] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Processing of speech is obligatory. Thus, during normal speech comprehension, the listener is aware of the overall meaning of the speaker's utterance without the need to direct attention to individual linguistic and paralinguistic (intonational, prosodic, etc.) features contained within the speech signal. However, most functional neuroimaging studies of speech perception have used metalinguistic tasks that required the subjects to attend to specific features of the stimuli. Such tasks have demanded a forced-choice decision and a motor response from the subjects, which will engage frontal systems and may include unpredictable top-down modulation of the signals observed in one or more of the temporal lobe neural systems engaged during speech perception. This study contrasted the implicit comprehension of simple narrative speech with listening to reversed versions of the narratives: the latter are as acoustically complex as speech but are unintelligible in terms of both linguistic and paralinguistic information. The result demonstrated that normal comprehension, free of task demands that do not form part of everyday discourse, engages regions distributed between the two temporal lobes, more widely on the left. In particular, comprehension is dependent on anterolateral and ventral left temporal regions, as suggested by observations on patients with semantic dementia, as well as posterior regions described in studies on aphasic stroke patients. The only frontal contribution was confined to the ventrolateral left prefrontal cortex, compatible with observations that comprehension of simple speech is preserved in patients with left posterior frontal infarction.
Affiliation(s)
- Jennifer T Crinion
- MRC Clinical Sciences Centre, Cyclotron Unit, Hammersmith Hospital, London, UK.
244
Joanisse MF, Gati JS. Overlapping neural regions for processing rapid temporal cues in speech and nonspeech signals. Neuroimage 2003; 19:64-79. [PMID: 12781727 DOI: 10.1016/s1053-8119(03)00046-6] [Citation(s) in RCA: 82] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
Speech perception involves recovering the phonetic form of speech from a dynamic auditory signal containing both time-varying and steady-state cues. We examined the roles of inferior frontal and superior temporal cortex in processing these aspects of auditory speech and nonspeech signals. Event-related functional magnetic resonance imaging was used to record activation in superior temporal gyrus (STG) and inferior frontal gyrus (IFG) while participants discriminated pairs of either speech syllables or nonspeech tones. Speech stimuli differed in either the consonant or the vowel portion of the syllable, whereas the nonspeech signals consisted of sinewave tones differing along either a dynamic or a spectral dimension. Analyses failed to identify regions of activation that clearly contrasted the speech and nonspeech conditions. However, we did identify regions in the posterior portion of left and right STG and left IFG yielding greater activation for both speech and nonspeech conditions that involved rapid temporal discrimination, compared to speech and nonspeech conditions involving spectral discrimination. The results suggest that, when semantic and lexical factors are adequately ruled out, there is significant overlap in the brain regions involved in processing the rapid temporal characteristics of both speech and nonspeech signals.
Affiliation(s)
- Marc F Joanisse
- Department of Psychology, University of Western Ontario, London, Canada.
245
Heim S, Opitz B, Müller K, Friederici AD. Phonological processing during language production: fMRI evidence for a shared production-comprehension network. Brain Res Cogn Brain Res 2003; 16:285-96. [PMID: 12668238 DOI: 10.1016/s0926-6410(02)00284-7] [Citation(s) in RCA: 94] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Studies of phonological processes during language comprehension consistently report activation of the superior portion of Broca's area. In the domain of language production, however, there is no unequivocal evidence for the contribution of Broca's area to phonological processing. The present event-related fMRI study investigated the existence of a common neural network for phonological decisions in comprehension and production by using production tasks most comparable to those previously used in comprehension. Subjects performed two decision tasks on the initial phoneme of German picture names (/b/ or not? Vowel or not?). A semantic decision task served as a baseline for both phonological tasks. The contrasts between each phonological task and the semantic task were calculated, and a conjunction analysis was performed. There was significant activation in the superior portion of Broca's area (Brodmann's area (BA) 44) in the conjunction analysis, also present in each single contrast. In addition, further left frontal (BA 45/46) and temporal (posterior superior temporal gyrus) areas known to support phonological processing in both production and comprehension were activated. The results suggest the existence of a shared fronto-temporal neural network engaged in the processing of phonological information in both perception and production.
Affiliation(s)
- St Heim
- Max Planck Institute of Cognitive Neuroscience, PO Box 500 355, 04303 Leipzig, Germany.
246
Gandour J, Dzemidzic M, Wong D, Lowe M, Tong Y, Hsieh L, Satthamnuwong N, Lurito J. Temporal integration of speech prosody is shaped by language experience: an fMRI study. Brain Lang 2003; 84:318-336. [PMID: 12662974 DOI: 10.1016/s0093-934x(02)00505-9] [Citation(s) in RCA: 84] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Differences in hemispheric functions underlying speech perception may be related to the size of temporal integration windows over which prosodic features (e.g., pitch) span in the speech signal. Chinese tone and intonation, both signaled by variations in pitch contours, span over shorter (local) and longer (global) temporal domains, respectively. This cross-linguistic (Chinese and English) study uses functional magnetic resonance imaging to show that pitch contours associated with tones are processed in the left hemisphere by Chinese listeners only, whereas pitch contours associated with intonation are processed predominantly in the right hemisphere. These findings argue against the view that all aspects of speech prosody are lateralized to the right hemisphere, and promote the idea that varying-sized temporal integration windows reflect a neurobiological adaptation to meet the 'prosodic needs' of a particular language.
Affiliation(s)
- Jack Gandour
- Department of Audiology and Speech Sciences, Purdue University, Heavilon Hall, West Lafayette, IN 47907-1353, USA.
247
Abstract
A striking property of speech perception is its resilience in the face of acoustic variability (among speech sounds produced by different speakers at different times, for example). The robustness of speech perception might, in part, result from multiple, complementary representations of the input, which operate in both acoustic-phonetic feature-based and articulatory-gestural domains. Recent studies of the anatomical and functional organization of the non-human primate auditory cortical system point to multiple, parallel, hierarchically organized processing pathways that involve the temporal, parietal and frontal cortices. Functional neuroimaging evidence indicates that a similar organization might underlie speech perception in humans. These parallel, hierarchical processing 'streams', both within and across hemispheres, might operate on distinguishable, complementary types of representations and subserve complementary types of processing. Two long-opposing views of speech perception have posited a basis either in acoustic feature processing or in gestural motor processing; the view put forward here might help reconcile these positions.
Affiliation(s)
- Sophie K Scott
- Department of Psychology, University College London, Gower Street, UK.
248
Papanicolaou AC, Castillo E, Breier JI, Davis RN, Simos PG, Diehl RL. Differential brain activation patterns during perception of voice and tone onset time series: a MEG study. Neuroimage 2003; 18:448-59. [PMID: 12595198 DOI: 10.1016/s1053-8119(02)00020-4] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to approximately 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.
Affiliation(s)
- Andrew C Papanicolaou
- Vivian L. Smith Center for Neurologic Research, Department of Neurosurgery, University of Texas-Houston Medical School, Houston, TX 77030, USA.
249
Abstract
Functional neuroimaging of language builds on almost 150 years of study in neurology, psychology, linguistics, anatomy, and physiology. In recent years, there has been an explosion of research using functional imaging technology, especially positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), to understand the relationship between brain mechanisms and language processing. These methods combine high-resolution anatomic images with measures of language-specific brain activity to reveal neural correlates of language processing. This article reviews some of what has been learned about the neuroanatomy of language from these imaging techniques. We first discuss the normal case, organizing the presentation according to the levels of language, encompassing words (lexicon), sound structure (phonemes), and sentences (syntax and semantics). Next, we delve into some unusual language processing circumstances, including second languages and sign languages. Finally, we discuss abnormal language processing, including developmental and acquired dyslexia and aphasia.
Affiliation(s)
- Steven L Small
- Department of Neurology, Brain Research Imaging Center, University of Chicago, 5841 South Maryland Avenue, MC-2030, Chicago, IL 60637, USA.
250
Bookheimer S. Functional MRI of language: new approaches to understanding the cortical organization of semantic processing. Annu Rev Neurosci 2002; 25:151-88. [PMID: 12052907 DOI: 10.1146/annurev.neuro.25.112701.142946] [Citation(s) in RCA: 904] [Impact Index Per Article: 41.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Until recently, our understanding of how language is organized in the brain depended on analysis of behavioral deficits in patients with fortuitously placed lesions. The availability of functional magnetic resonance imaging (fMRI) for in vivo analysis of the normal brain has revolutionized the study of language. This review discusses three lines of fMRI research into how the semantic system is organized in the adult brain. These are (a) the role of the left inferior frontal lobe in semantic processing and dissociations from other frontal lobe language functions, (b) the organization of categories of objects and concepts in the temporal lobe, and (c) the role of the right hemisphere in comprehending contextual and figurative meaning. Together, these lines of research broaden our understanding of how the brain stores, retrieves, and makes sense of semantic information, and they challenge some commonly held notions of functional modularity in the language system.