1
In Spoken Word Recognition, the Future Predicts the Past. J Neurosci 2018; 38:7585-7599. [PMID: 30012695] [DOI: 10.1523/jneurosci.0065-18.2018]
Abstract
Speech is an inherently noisy and ambiguous signal. To fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. Although many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using MEG to record from human auditory cortex, we analyzed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing, at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ∼450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.

SIGNIFICANCE STATEMENT: The perception of a speech sound is determined by its surrounding context in the form of words, sentences, and other speech sounds. Often, such contextual information becomes available later than the sensory input. The present study is the first to unveil how the brain uses this subsequent information to aid speech comprehension. Concretely, we found that the auditory system actively maintains the acoustic signal in auditory cortex while concurrently making guesses about the identity of the words being said. Such a processing strategy allows the content of the message to be accessed quickly while also permitting reanalysis of the acoustic signal to minimize parsing mistakes.
2
Altmann CF, Uesaki M, Ono K, Matsuhashi M, Mima T, Fukuyama H. Categorical speech perception during active discrimination of consonants and vowels. Neuropsychologia 2014; 64:13-23. [DOI: 10.1016/j.neuropsychologia.2014.09.006]
3
Shuai L, Gong T. Temporal relation between top-down and bottom-up processing in lexical tone perception. Front Behav Neurosci 2014; 8:97. [PMID: 24723863] [PMCID: PMC3971173] [DOI: 10.3389/fnbeh.2014.00097]
Abstract
Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we found that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right-hemisphere activation, whereas linguistic processing of lexical tones, as a top-down effect, elicited greater left-hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supports a parallel model of lexical tone perception. Contrary to the previous view that language processing is special and performed by dedicated neural circuitry, our study shows that language processing can be decomposed into general cognitive functions (e.g., sensory and memory processes) and shares neural resources with these functions.
Affiliation(s)
- Lan Shuai
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Tao Gong
- Department of Linguistics, University of Hong Kong, Hong Kong, China

4
Spatiotemporal and frequency signatures of word recognition in the developing brain: a magnetoencephalographic study. Brain Res 2013; 1498:20-32. [PMID: 23313876] [DOI: 10.1016/j.brainres.2013.01.001]
Abstract
High-frequency oscillations in the brain open a new window for studies of language development in humans. The objective of this study was to determine the spatiotemporal and frequency signatures of word processing in healthy children. Sixty healthy children aged 6-17 years were studied with a whole-cortex magnetoencephalography (MEG) system using a word recognition paradigm optimized for children. The temporal signature of neuromagnetic activation was measured using averaged waveforms. The spatial and frequency signatures of neuromagnetic activation were assessed with wavelet-based beamformer analyses. The waveform analyses showed that the latencies of the first and third neuromagnetic responses changed with age (p<0.01). The source imaging data revealed a clear lateralization of source activation in the 70-120 Hz range in children aged 6 to 13 years (p<0.01). Males and females demonstrated different developmental trajectories between 9 and 13 years of age (p<0.01). These findings suggest that left-hemisphere language processing emerges from early bilateral brain areas with gender-optimal neural networks. The neuromagnetic signatures of language development in healthy children may be used as references for future identification of aberrant language function in children with various disorders.
5
Hertrich I, Dietrich S, Ackermann H. Tracking the speech signal: time-locked MEG signals during perception of ultra-fast and moderately fast speech in blind and in sighted listeners. Brain Lang 2013; 124:9-21. [PMID: 23332808] [DOI: 10.1016/j.bandl.2012.10.006]
Abstract
Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated with time courses derived from the speech signal (envelope, syllable onsets, and pitch periodicity) to capture phase-locked MEG components (14 blind, 12 sighted subjects; speech rate = 8 or 16 syllables/s; pre-defined source regions: auditory and visual cortex, inferior frontal gyrus). Blind individuals showed stronger phase locking in auditory cortex than sighted controls, and right-hemisphere visual cortex activity correlated with syllable onsets in the case of ultra-fast speech. Furthermore, inferior-frontal MEG components time-locked to pitch periodicity displayed opposite lateralization effects in sighted (towards the right hemisphere) and blind subjects (towards the left). Thus, ultra-fast speech comprehension in blind individuals appears to be associated with changes in early signal-related processing mechanisms both within and outside the central-auditory terrain.
Collapse
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
6
Tomaschek F, Truckenbrodt H, Hertrich I. Neural processing of acoustic duration and phonological German vowel length: time courses of evoked fields in response to speech and nonspeech signals. Brain Lang 2013; 124:117-131. [PMID: 23314420] [DOI: 10.1016/j.bandl.2012.11.011]
Abstract
Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to identify the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG. Using disyllabic words with varying /a/-durations and temporally matched nonspeech stimuli, we found that each syllable elicited an M50/M100 complex. The M50 amplitude to the second syllable varied along the durational continuum, possibly reflecting the mapping of duration onto a rhythm representation. Categorical length was reflected by an additional response elicited when vowel duration exceeded the short-long boundary, interpreted as reflecting the integration of an additional timing unit for long in contrast to short vowels. Unlike for speech, responses to short nonspeech durations lacked an M100 to the first and an M50 to the second syllable, indicating different integration windows for speech and nonspeech signals.
Affiliation(s)
- Fabian Tomaschek
- Hertie Institute for Clinical Brain Research, Department of General Neurology, University of Tübingen, Hoppe-Seyler-Straße 3, 72076 Tübingen, Germany.

7
Kanwal JS. Right-left asymmetry in the cortical processing of sounds for social communication vs. navigation in mustached bats. Eur J Neurosci 2011; 35:257-70. [DOI: 10.1111/j.1460-9568.2011.07951.x]
8
Hertrich I, Dietrich S, Trouvain J, Moos A, Ackermann H. Magnetic brain activity phase-locked to the envelope, the syllable onsets, and the fundamental frequency of a perceived speech signal. Psychophysiology 2011; 49:322-34. [PMID: 22175821] [DOI: 10.1111/j.1469-8986.2011.01314.x]
Abstract
During speech perception, acoustic correlates of syllable structure and pitch periodicity are directly reflected in electrophysiological brain activity. Magnetoencephalography (MEG) recordings were made while 10 participants listened to natural or formant-synthesized speech at a moderately fast or ultrafast rate. Cross-correlation analysis was applied to show brain activity time-locked to the speech envelope, to an acoustic marker of syllable onsets, and to pitch periodicity. The envelope yielded a right-lateralized M100-like response, syllable onsets gave rise to M50/M100-like fields with an additional anterior M50 component, and pitch (ca. 100 Hz) elicited a neural resonance bound to a central auditory source at a latency of 30 ms. The strength of these MEG components showed differential effects of syllable rate and natural versus synthetic speech. Presumably, such phase-locking mechanisms serve as neuronal triggers for the extraction of information-bearing elements.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.

9
Fujioka T, Zendel BR, Ross B. Endogenous neuromagnetic activity for mental hierarchy of timing. J Neurosci 2010; 30:3458-66. [PMID: 20203205] [PMCID: PMC6634108] [DOI: 10.1523/jneurosci.3086-09.2010]
Abstract
The frontal-striatal circuits, the cerebellum, and motor cortices play crucial roles in processing timing information on second to millisecond scales. However, little is known about the physiological mechanism underlying humans' preference to robustly encode a sequence of time intervals into a mental hierarchy of temporal units called meter. This is especially salient in music: temporal patterns are typically interpreted as integer multiples of a basic unit (i.e., the beat) and accommodated into a global context such as march or waltz. With magnetoencephalography and spatial-filtering source analysis, we demonstrated that the time courses of neural activities index a subjectively induced meter context. Auditory evoked responses from hippocampus, basal ganglia, and auditory and association cortices showed a significant contrast between march and waltz metric conditions during listening to identical click stimuli. Specifically, the right hippocampus was activated differentially at 80 ms to the march downbeat (the count one) and approximately 250 ms to the waltz downbeat. In contrast, basal ganglia showed a larger 80 ms peak for the march downbeat than the waltz. The metric contrast was also expressed in long-latency responses in the right temporal lobe. These findings suggest that anticipatory processes in the hippocampal memory system and temporal computation mechanism in the basal ganglia circuits facilitate endogenous activities in auditory and association cortices through feedback loops. The close interaction of auditory, motor, and limbic systems suggests a distributed network for metric organization in temporal processing and its relevance for musical behavior.
10
Objective phonological and subjective perceptual characteristics of syllables modulate spatiotemporal patterns of superior temporal gyrus activity. Neuroimage 2008; 40:1888-901. [PMID: 18356082] [DOI: 10.1016/j.neuroimage.2008.01.048]
Abstract
Natural consonant-vowel syllables are reliably classified by most listeners as voiced or voiceless. However, our previous research [Liederman, J., Frye, R., Fisher, J.M., Greenwood, K., Alexander, R., 2005. A temporally dynamic context effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychon Bull Rev. 12, 380-386] suggests that among synthetic stimuli varying systematically in voice onset time (VOT), syllables that are classified reliably as voiceless are nonetheless perceived differently within and between listeners. This perceptual ambiguity was measured by variation in the accuracy of matching two identical stimuli presented in rapid succession. In the current experiment, we used magnetoencephalography (MEG) to examine the differential contributions of objective (i.e., VOT) and subjective (i.e., perceptual ambiguity) acoustic features to speech processing. Distributed source models estimated cortical activation within two regions of interest in the superior temporal gyrus (STG) and one in the inferior frontal gyrus. These regions were differentially modulated by VOT and perceptual ambiguity. Ambiguity strongly influenced lateralization of activation; however, its influence on lateralization differed between the anterior and middle/posterior portions of the STG. The influence of ambiguity on the relative amplitude of activity in the right and left anterior STG depended on VOT, whereas that in the middle/posterior portions of the STG did not. These data support the idea that early cortical responses are bilaterally distributed whereas late processes are lateralized to the dominant hemisphere, and support a "how/what" dual-stream auditory model. This study helps to clarify the role of the anterior STG, especially in the right hemisphere, in syllable perception. Moreover, our results demonstrate that both objective phonological and subjective perceptual characteristics of syllables independently modulate spatiotemporal patterns of cortical activation.
11
Abstract
Voice onset time (VOT) provides an important auditory cue for recognizing spoken consonant-vowel syllables. Although changes in the neuromagnetic response to consonant-vowel syllables with different VOT have been examined, such experiments have only manipulated VOT with respect to voicing. We utilized the characteristics of a previously developed asymmetric VOT continuum [Liederman, J., Frye, R. E., McGraw Fisher, J., Greenwood, K., & Alexander, R. A temporally dynamic contextual effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychonomic Bulletin and Review, 12, 380-386, 2005] to determine if changes in the prominent M100 neuromagnetic response were linearly modulated by VOT. Eight right-handed, English-speaking, normally developing participants performed a VOT discrimination task during a whole-head neuromagnetic recording. The M100 was identified in the gradiometers overlying the right and left temporal cortices and single dipoles were fit to each M100 waveform. A repeated measures analysis of variance with post hoc contrast test for linear trend was used to determine whether characteristics of the M100 were linearly modulated by VOT. The morphology of the M100 gradiometer waveform and the peak latency of the dipole waveform were linearly modulated by VOT. This modulation was much greater in the left, as compared to the right, hemisphere. The M100 dipole moved in a linear fashion as VOT increased in both hemispheres, but along different axes in each hemisphere. This study suggests that VOT may linearly modulate characteristics of the M100, predominantly in the left hemisphere, and suggests that the VOT of consonant-vowel syllables, instead of, or in addition to, voicing, should be examined in future experiments.
Affiliation(s)
- Richard E Frye
- University of Texas Health Science Center at Houston, TX 77030, USA.

12
Opitz B, Friederici AD. Neural basis of processing sequential and hierarchical syntactic structures. Hum Brain Mapp 2007; 28:585-92. [PMID: 17455365] [PMCID: PMC6871462] [DOI: 10.1002/hbm.20287]
Abstract
The psychological processes through which humans learn a language have gained considerable interest in recent years. It has previously been suggested that language acquisition partly relies on a rule-based mechanism that is mediated by the frontal cortex. Interestingly, the actual structure involved within the frontal cortex varies with the kind of rules being processed. By means of functional MRI we investigated the neural underpinnings of rule-based language processing using an artificial language that allows direct comparisons between local phrase structure dependencies and hierarchically structured long-distance dependencies. Activation in the left ventral premotor cortex (PMC) was related to the local character of rule change, whereas long-distance dependencies activated the opercular part of the inferior frontal gyrus (Broca's area, BA 44). These results suggest that the brain's involvement in syntactic processing is determined by the type of rule used, with BA 44/45 playing an important role during language processing when long-distance dependencies are processed. In contrast, the ventral PMC seems to subserve the processing of local dependencies. In addition, hippocampal activity was observed for local dependencies, indicating that the processing of such dependencies may be mediated by a second mechanism.
Affiliation(s)
- Bertram Opitz
- Department of Psychology, Experimental Neuropsychology Unit, Saarland University, Saarbrücken, Germany.

13
Abstract
RATIONALE: Our goal was to determine the frequency of repeated intracarotid amobarbital tests (IAT) at our center and to estimate the retest reliability of the IAT for both language and memory lateralization. METHODS: A total of 1,249 consecutive IATs on 1,190 patients were retrospectively reviewed for repeat tests. RESULTS: In 4% of patients the IAT was repeated in order to deliver satisfactory information on either language or memory lateralization. Reasons for repetition included obtundation and inability to test for memory lateralization, inability to test for language lateralization, no hemiparesis during the first test, no aphasia during the first test, atypical vessel filling, and bleeding complications from the catheter insertion site. Language lateralization was reproduced in all but one patient. Repeated memory test results were less consistent across tests, and memory lateralization was unreliable in 63% of the patients. DISCUSSION: In spite of test limitations arising from a varying dose of amobarbital, crossover of amobarbital from one side to the other, testing of both hemispheres on the same day, practice effects, unblinded observers, fluctuating cooperation of the patients, and a biased sample of patients, language lateralization was reproduced in all but one patient. In contrast, repeated memory test results were frequently contradictory. Memory results on the IAT therefore seem much less robust than the results of language testing. The gain of reliable information versus the risks of complications and failed tests has to be considered when a patient is subjected to an IAT.
Affiliation(s)
- Tobias Loddenkemper
- Department of Neurology, The Cleveland Clinic Foundation, Cleveland, OH 44195, USA.

14
Hertrich I, Mathiak K, Lutzenberger W, Menning H, Ackermann H. Sequential audiovisual interactions during speech perception: a whole-head MEG study. Neuropsychologia 2007; 45:1342-54. [PMID: 17067640] [DOI: 10.1016/j.neuropsychologia.2006.09.019]
Abstract
Using whole-head magnetoencephalography (MEG), audiovisual (AV) interactions during speech perception (/ta/- and /pa/-syllables) were investigated in 20 subjects. Congruent AV events served as the 'standards' of an oddball design. The deviants encompassed incongruent /ta/-/pa/ configurations differing from the standards either in the acoustic or the visual domain. As an auditory non-speech control condition, the same video signals were synchronized with either one of two complex tones. As in natural speech, visual movement onset preceded the acoustic signal by about 150 ms. First, the impact of visual information on auditorily evoked fields to non-speech sounds was determined. Larger facial movements (/pa/ versus /ta/) yielded enhanced early responses such as the M100 component, presumably indicating anticipatory pre-activation of auditory cortex by visual motion cues. As a second step of analysis, mismatch fields (MMF) were calculated. Acoustic deviants elicited a typical MMF, peaking ca. 180 ms after stimulus onset, whereas visual deviants gave rise to later responses (220 ms) with a more posterior-medial source location. Finally, a late (275 ms), left-lateralized, visually induced MMF component, resembling the acoustic mismatch response, emerged during the speech condition, presumably reflecting phonetic/linguistic operations. There is mounting functional imaging evidence for an early impact of visual information on auditory cortical regions during speech perception. The present study suggests at least two successive AV interactions in association with syllable recognition tasks: early activation of auditory areas depending upon visual motion cues, and a later speech-specific left-lateralized response mediated, conceivably, by backward projections from multisensory areas.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.

15
Trébuchon-Da Fonseca A, Giraud K, Badier JM, Chauvel P, Liégeois-Chauvel C. Hemispheric lateralization of voice onset time (VOT): comparison between depth and scalp EEG recordings. Neuroimage 2005; 27:1-14. [PMID: 15896982] [DOI: 10.1016/j.neuroimage.2004.12.064]
Abstract
Auditory evoked potentials (AEPs) elicited by the French-language voiced stop consonant (/ba/) and voiceless stop consonant (/pa/) were studied in non-language-impaired epileptic patients and non-epileptic volunteers. First, depth AEPs recorded from the primary auditory cortex during pre-surgical exploration and scalp AEP recordings using high-resolution EEG (64-channel scalp EEG) were compared in the same patients. Both methods indicated that the processing of voiced and voiceless consonants was based on a temporal auditory coding. /Ba/ elicited a first complex (N1) at the onset of voicing and a second component [release component (RC)] time-locked to the release. This processing took place specifically in the left primary auditory cortex. Source modeling of the RC showed a left-greater-than-right amplitude of source probes (SP) both in epileptic patients with left-hemispheric language dominance [established by means of invasive tests (Wada test) and/or clinical data] and in right-handed non-epileptic subjects. Our data suggest that the processing of VOT is related to hemispheric dominance for language and that scalp-recorded AEPs may represent an effective, non-invasive method to establish hemispheric dominance for language in clinical settings. This procedure could complement existing methods and could help to detect the dissociation between receptive and expressive language sometimes observed in patients with epilepsy.
Affiliation(s)
- Agnès Trébuchon-Da Fonseca
- Laboratoire de Neurophysiologie et Neuropsychologie, INSERM EMI 9926, Faculté de Médecine, Université de la Méditerranée, 13385 Marseille Cedex 5, France.

16
Giraud K, Démonet JF, Habib M, Marquis P, Chauvel P, Liégeois-Chauvel C. Auditory evoked potential patterns to voiced and voiceless speech sounds in adult developmental dyslexics with persistent deficits. Cereb Cortex 2005; 15:1524-34. [PMID: 15689520] [DOI: 10.1093/cercor/bhi031]
Abstract
Auditory evoked potentials (AEPs) to voiced (/ba/) and voiceless (/pa/) consonant-vowel syllables were recorded from eight adult developmental dyslexics with persistent reading, spelling, and phonological deficits and from 10 non-dyslexic controls. Consistent with previous data, non-dyslexics coded these stimuli differentially according to the temporal cues that form the basis of the voiced/voiceless contrast: AEPs had time-locked components with latencies that were determined by the temporal structure of the stimuli. Dyslexics were characterized by one of two electrophysiological patterns: AEP pattern I dyslexics demonstrated a differential coding of stimuli on the basis of some temporal cues, but with an atypically large number of components and a considerable delay in AEP termination time; AEP pattern II dyslexics demonstrated no clear differential coding of stimuli on the basis of temporal cues. These data reveal the presence of anomalies in cortical auditory processing which could underlie persistent perceptual and linguistic impairments in some developmental dyslexics. Furthermore, scalp AEP distribution maps showing the difference between /ba/ and /pa/ activity over time suggest that the regions implicated in the processing of crucial time-related acoustic cues were not systematically lateralized to the left hemisphere, as they were for non-dyslexics. These findings may be conducive to a better understanding and treatment of perceptual dysfunctions in developmental language disorders.
Affiliation(s)
- K Giraud
- INSERM EMI-U 9926, Faculté de Médecine, Marseilles, France

17
Abstract
Time is a fundamental dimension of behavior and as such underlies the perception and production of speech. This paper reviews patient and neuroimaging studies that investigated brain structures that support temporal aspects of speech. The left-frontal cortex, the basal ganglia, and the cerebellum represent structures that have been implicated repeatedly. A comparison with the structures involved in the timing of non-speech events (e.g., tones, lights, finger movements) suggests both commonalities and differences: while the basal ganglia and the cerebellum contribute to the timing of speech and non-speech events, the contribution of left-frontal cortex seems to be specific to speech or rapidly changing acoustic information. Motivated by these commonalities and differences, this paper presents assumptions about the function of basal ganglia, cerebellum, and cortex in the timing of speech.
Affiliation(s)
- Annett Schirmer
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany.

18
Roman S, Canévet G, Lorenzi C, Triglia JM, Liégeois-Chauvel C. Voice onset time encoding in patients with left and right cochlear implants. Neuroreport 2004; 15:601-5. [PMID: 15094460] [DOI: 10.1097/00001756-200403220-00006]
Abstract
Stop-consonant discrimination was investigated in normal-hearing listeners and cochlear-implanted patients (CIP) by recording auditory evoked potentials (AEPs) to /b epsilon/ and /p epsilon/ syllables. This study demonstrates that: (i) AEPs show time-locked components that mimic the temporal structure of the stimuli, indicating that both patients and control subjects encode those syllables according to the temporal cue (voice onset time) characterizing the voiced/voiceless contrast; (ii) the side of implantation does not affect the general structure of AEPs or the /b epsilon/-/p epsilon/ discrimination thresholds (measured separately with a psychophysical procedure); (iii) poor time-locking to the syllables' temporal structure is associated with poor discrimination. This suggests that EEG investigation of temporal processing provides an objective index of speech perception in CIP and could be used in implanted children.
Affiliation(s)
- Stéphane Roman
- Laboratoire d'Audio-Phonologie Clinique, Centre Hospitalier Universitaire de La Timone, F-13385 Marseille Cedex 5, France
19
Cremades J, Barreto A, Sanchez D, Adjouadi M. Human–computer interfaces with regional lower and upper alpha frequencies as on-line indexes of mental activity. Computers in Human Behavior 2004. [DOI: 10.1016/j.chb.2003.09.001]
20
Mathiak K, Hertrich I, Grodd W, Ackermann H. Discrimination of temporal information at the cerebellum: functional magnetic resonance imaging of nonverbal auditory memory. Neuroimage 2004; 21:154-62. [PMID: 14741652 DOI: 10.1016/j.neuroimage.2003.09.036]
Abstract
Until recently, the cerebellum was held to play its chief role in motor control. By contrast, Keele and Ivry (1990) proposed that it may subserve time estimation within the perceptual domain as well. In accordance with this suggestion, speech perception requiring minute differentiation of time intervals was found to be compromised by cerebellar pathology; a subsequent functional magnetic resonance imaging (fMRI) study found hemodynamic activation of the right neocerebellum under these conditions. In the current fMRI investigation, a non-speech task involving duration storage and comparison yielded significant hemodynamic responses within the lateral Crus I area of the right cerebellar hemisphere. Concomitantly, a left prefrontal cluster was observed. The present fMRI study employed single-shot double-echo echo-planar imaging (EPI) to reduce image distortion and acquisition time with whole-brain coverage (TE = 28 and 66 ms, TR = 5 s, 28 slices, TA = 2.8 s). Twelve healthy subjects performed two tasks: identifying pauses between tones as "short" or "long" (30-130 ms) and deciding which of two successive pauses was longer. The activation pattern in the discrimination task was analogous to that seen during speech perception and verbal working memory (WM) tasks. We suggest that the storage of precise temporal structures relies on a cerebellar-prefrontal loop. This network allows for the temporal organization of verbal sequences and phoneme encoding based on durational operations in a linguistic context.
Affiliation(s)
- Klaus Mathiak
- Department of Neurology, University of Tübingen, D-72076, Tübingen, Germany.
21
Kaiser J, Ripper B, Birbaumer N, Lutzenberger W. Dynamics of gamma-band activity in human magnetoencephalogram during auditory pattern working memory. Neuroimage 2003; 20:816-27. [PMID: 14568454 DOI: 10.1016/s1053-8119(03)00350-1]
Abstract
Both electrophysiological research in animals and human brain imaging studies have suggested that, similar to the visual system, separate cortical ventral "what" and dorsal "where" processing streams may also exist in the auditory domain. Recently we have shown enhanced gamma-band activity (GBA) over posterior parietal cortex belonging to the putative auditory dorsal pathway during a sound location working memory task. Using a similar methodological approach, the present study assessed whether GBA would be increased over auditory ventral stream areas during an auditory pattern memory task. Whole-head magnetoencephalogram was recorded from N = 12 subjects while they performed a working memory task requiring same-different judgments about pairs of syllables S1 and S2 presented with 0.8-s delays. S1 and S2 could differ either in voice onset time or in formant structure. This was compared with a control task involving the detection of possible spatial displacements in the background sound presented instead of S2. Under the memory condition, induced GBA was enhanced over left inferior frontal/anterior temporal regions during the delay phase and in response to S2, and over prefrontal cortex at the end of the delay period. Gamma-band coherence between left frontotemporal and prefrontal sensors was increased throughout the delay period of the memory task. In summary, the memorization of syllables was associated with synchronously oscillating networks both in frontotemporal cortex, supporting a role of these areas as parts of the putative auditory ventral stream, and in prefrontal, possibly executive, regions. Moreover, corticocortical connectivity was increased between these structures.
Affiliation(s)
- Jochen Kaiser
- MEG Center, Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, 72076 Tübingen, Germany.
22
Sinai A, Pratt H. High-resolution time course of hemispheric dominance revealed by low-resolution electromagnetic tomography. Clin Neurophysiol 2003; 114:1181-8. [PMID: 12842713 DOI: 10.1016/s1388-2457(03)00087-7]
Abstract
OBJECTIVE Auditory event-related brain potentials (ERPs) were recorded during a lexical decision task in response to linguistic and non-linguistic stimuli, to assess the detailed time course of language processing in general, and hemispheric dominance in particular. METHODS Young adults (n=17) were presented with pairs of auditory stimuli consisting of words, pseudowords and words played backwards in a lexical decision task. ERPs were recorded from 21 scalp electrodes. Current densities were calculated using low-resolution electromagnetic tomography (LORETA). Statistic non-parametric maps of activity were derived from the calculated current densities and the number of active brain voxels in the left and right hemispheres was compared throughout the processing of each stimulus. RESULTS Our results show that hemispheric dominance is highly time dependent, alternating between the right and left hemispheres at different times, and that the right hemisphere's role in language processing follows a different time course for first and second language. The time course of hemispheric dominance for non-linguistic stimuli was highly variable. CONCLUSIONS The time course of hemispheric dominance is dynamic, alternating between left and right homologous regions, with different time courses for different stimulus classes.
Affiliation(s)
- Alon Sinai
- Evoked Potential Laboratory, Faculty of Medicine, Technion--Israel Institute of Technology, Haifa 32000, Israel.
23
Koyama S, Akahane-Yamada R, Gunji A, Kubo R, Roberts TPL, Yabe H, Kakigi R. Cortical evidence of the perceptual backward masking effect on /l/ and /r/ sounds from a following vowel in Japanese speakers. Neuroimage 2003; 18:962-74. [PMID: 12725771 DOI: 10.1016/s1053-8119(03)00037-5]
Abstract
We examined the influence of stimulus duration of foreign consonant-vowel stimuli on the MMNm (the magnetic counterpart of mismatch negativity). In Experiment 1, /ra/ and /la/ stimuli were synthesized, and subjects were native Japanese speakers, who are known to have difficulty discriminating these stimuli. "Short"-duration stimuli were terminated in the middle of the consonant-to-vowel transition (110 ms). They were nevertheless clearly identifiable by English speakers. A clear MMNm was observed only for short-duration stimuli but not for untruncated long-duration (150-ms) stimuli. We suggest that the diminished MMNm for longer-duration stimuli results from more effective masking by the longer vowel part. In Experiment 2 we examined this hypothesis by presenting only the third formant (F3) component of the original stimuli, since the acoustic difference between /la/ and /ra/ is most evident in the third formant, whereas F1 and F2 play a major role in vowel perception. If the MMNm effect depends on the acoustic property of F3, a stimulus duration effect comparable to that found with the original /la/ and /ra/ stimuli might be expected. However, if the effect is attributable to masking by the vowel, no influence of stimulus duration would be expected, since neither stimulus contains F1 and F2 components. In fact, the "F3 only" stimuli did not show a duration effect; MMNm was always elicited, independent of stimulus duration. The MMN stimulus duration effect is thus suggested to come from the backward masking of foreign consonants by subsequent vowels.
Affiliation(s)
- Sachiko Koyama
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan.
24
Mathiak K, Hertrich I, Grodd W, Ackermann H. Cerebellum and speech perception: a functional magnetic resonance imaging study. J Cogn Neurosci 2002; 14:902-12. [PMID: 12191457 DOI: 10.1162/089892902760191126]
Abstract
A variety of data indicate that the cerebellum participates in perceptual tasks requiring the precise representation of temporal information. Access to the word form of a lexical item requires, among other functions, the processing of durational parameters of verbal utterances. Therefore, cerebellar dysfunctions must be expected to impair word recognition. In order to specify the topography of the assumed cerebellar speech perception mechanism, a functional magnetic resonance imaging study was performed using the German lexical items "Boden" ([bodn], Engl. "floor") and "Boten" ([botn], "messengers") as test materials. The contrast in sound structure of these two lexical items can be signaled either by the length of the word-medial pause (closure time, CLT; an exclusively temporal measure) or by the aspiration noise of word-medial "d" or "t" (voice onset time, VOT; an intrasegmental cue). A previous study found bilateral cerebellar disorders to compromise word recognition based on CLT, whereas the encoding of VOT remained unimpaired. In the present study, two series of "Boden - Boten" utterances were resynthesized, systematically varying either in CLT or VOT. Subjects had to identify the words "Boden" and "Boten" by analysis of either the durational parameter CLT or the VOT aspiration segment. In a subtraction design, CLT categorization as compared to VOT identification (CLT - VOT) yielded a significant hemodynamic response of the right cerebellar hemisphere (neocerebellum Crus I) and the frontal lobe (anterior to Broca's area). The reversed contrast (VOT - CLT) resulted in a single activation cluster located at the level of the supratemporal plane of the dominant hemisphere. These findings provide the first evidence for a distinct contribution of the right cerebellar hemisphere to speech perception in terms of encoding of durational parameters of verbal utterances. Verbal working memory tasks, lexical response selection, and auditory imagery of word strings have been reported to elicit activation clusters of a similar location. Conceivably, representation of the temporal structure of speech sound sequences represents the common denominator of cerebellar participation in cognitive tasks acting on a phonetic code.
Affiliation(s)
- Klaus Mathiak
- MEG-Zentrum, University of Tübingen, Otfried-Müller-Strasse 47, 72076 Tübingen, Germany.
25
Abstract
Recent functional neuroimaging studies have emphasized the role of the different areas within the left superior temporal sulcus (STS) for the perception of various speech stimuli. We report here the results of three independent studies additionally demonstrating hemodynamic responses in the vicinity of the planum temporale (PT). In these studies we used consonant-vowel (CV) syllables, tones, white noise, and vowels as acoustic stimuli in the context of whole-head functional magnetic resonance imaging, applying a long TR to attenuate possible masking effects by the scanner noise. To summarize, we obtained the following results for the contrasts comparing hemodynamic responses obtained during the perception of CV syllables compared to tones or white noise: (i) stronger activation in the vicinity of the left PT with two distinct foci of activation, one in a lateral position and the other more medial in the vicinity of Heschl's sulcus; (ii) stronger activation in the vicinity of the right PT; and (iii) stronger bilateral activation within the mid-STS. Further contrasts revealed the following findings: (iv) stronger bilateral activation to CV syllables than to vowels in the medial PT, (v) stronger left-sided activation to CV syllables than to vowels in the mid-STS, and (vi) stronger activation to CV syllables with voiceless initial consonants than to CV syllables with voiced initial consonants in the left medial PT. The results are compatible with the hypothesis that the STS contains neurons specialized for speech perception. However, these results also emphasize the role of the PT in the analysis of phonetic features, namely voice onset time. Yet this does not mean that the PT is solely specialized for phonetic analysis. We hypothesize rather that the PT contains neurons specialized for the analysis of rapidly changing cues, as was suggested by P. Tallal et al. (1993, Ann. N. Y. Acad. Sci. 682: 27-47).
Affiliation(s)
- L Jäncke
- Institute of Experimental and General Psychology, Otto-von-Guericke University Magdeburg, D-39106 Magdeburg, Germany.
26
Kaiser J, Lutzenberger W, Ackermann H, Birbaumer N. Dynamics of gamma-band activity induced by auditory pattern changes in humans. Cereb Cortex 2002; 12:212-21. [PMID: 11739268 DOI: 10.1093/cercor/12.2.212]
Abstract
Increasing evidence suggests separate auditory pattern and space processing streams. The present paper describes two magnetoencephalogram studies examining gamma-band activity to changes in auditory patterns using consonant-vowel syllables (experiment 1), animal vocalizations and artificial noises (experiment 2). Two samples of each sound type were presented to passively listening subjects in separate oddball paradigms with 80% standards and 20% deviants differing in their spectral composition. Evoked magnetic mismatch fields peaking approximately 190 ms poststimulus showed a trend for a left-hemisphere advantage for syllables, but no hemispheric differences for the other sounds. Frequency analysis and statistical probability mapping of the differences between deviants and standards revealed increased gamma-band activity above 60 Hz over left anterior temporal/ventrolateral prefrontal cortex for all three types of stimuli. This activity peaked simultaneously with the mismatch responses for animal sounds (180 ms) but was delayed for noises (260 ms) and syllables (320 ms). Our results support the hypothesized role of anterior temporal/ventral prefrontal regions in the processing of auditory pattern change. They extend earlier findings of gamma-band activity over posterior parieto-temporal cortex during auditory spatial processing that supported the putative auditory dorsal stream. Furthermore, earlier gamma-band responses to animal vocalizations may suggest faster processing of fear-relevant information.
Affiliation(s)
- Jochen Kaiser
- MEG Center, Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany.
27
Kaiser J, Lutzenberger W. Location changes enhance hemispheric asymmetry of magnetic fields evoked by lateralized sounds in humans. Neurosci Lett 2001; 314:17-20. [PMID: 11698136 DOI: 10.1016/s0304-3940(01)02248-0]
Abstract
Auditory mismatch negativity, the brain's change-detection response, has been shown to be more sensitive than other early auditory cortex responses to the hemispheric specialization of speech processing. The present study used magnetoencephalography to assess hemispheric differences in cortical evoked responses during auditory spatial processing. We compared N1m to lateralized vowels presented with equal probabilities with mismatch fields (MMNm) to rare lateralized noises interspersed in a sequence of frequent midline sounds. Both N1m and MMNm dipole amplitudes were higher in the hemisphere contralateral to the side of sound lateralization, but this effect was about four times larger in the mismatch paradigm. Moreover, only MMNm dipoles showed shorter latencies in the hemisphere contralateral to stimulation. Apparently, stimulus changes activate specialized auditory networks more strongly than non-deviant events.
Affiliation(s)
- J Kaiser
- MEG-Center, Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Gartenstrasse 29, 72074 Tübingen, Germany.
28
Abstract
Magnetoencephalography is a technique that detects the magnetic fields associated with the intracellular current flow within neurons, unlike electroencephalography, which measures extracellular volume currents. Superconducting quantum interference devices are used to amplify these very small magnetic field signals. Magnetic source imaging is the combination of functional data derived from magnetoencephalographic recordings coregistered with structural magnetic resonance imaging (MRI). The utility of magnetic source imaging lies in the combination of the submillisecond temporal resolution of magnetoencephalography with the precise anatomic images provided by magnetic resonance imaging. As such, magnetic source imaging is a useful tool for noninvasive localization of the epileptogenic zone in children who are candidates for epilepsy surgery. Similarly, using magnetoencephalographic recordings with evoked and event-related potentials, magnetic source imaging holds great promise as a noninvasive method for precise localization of somatosensory, motor, language, visual, and auditory cortex. Finally, magnetic source imaging is proving a valuable research tool in the investigation of epilepsy, head trauma, brain plasticity, and disorders of language, memory, cognition, and executive function in children.
Affiliation(s)
- H Otsubo
- Hospital for Sick Children, Department of Pediatrics, Faculty of Medicine, University of Toronto, ON, Canada
29
Mathiak K, Hertrich I, Lutzenberger W, Ackermann H. Neural correlates of duplex perception: a whole-head magnetencephalography study. Neuroreport 2001; 12:501-6. [PMID: 11234753 DOI: 10.1097/00001756-200103050-00015]
Abstract
Simultaneous experience of the same acoustic stimulus in two distinct phenomenological modes, e.g. as a speech-like and as a non-speech event, is referred to as duplex perception (DP). The most widely investigated DP paradigm splits each of the stop consonant-vowel (CV) syllables /ga/ and /da/ into an isolated formant transient (chirp) and the remaining sound structure (base). The present study recorded mismatch fields in response to a series of dichotically applied base and chirp components using whole-head magnetencephalography (MEG). Preattentive mismatch fields showed larger amplitudes in response to contralateral deviants. During attention to the fused percept /da/, left-ear deviant chirps elicited an enhanced and posteriorly shifted dipole field over the ipsilateral hemisphere. These data provide the first neurophysiological evidence that the integration of acoustic stimulus elements into a coherent syllable representation constitutes a distinct stage of left-hemisphere speech sound encoding.
Affiliation(s)
- K Mathiak
- Department of Neurology, University of Tübingen, Germany
30
Hertrich I, Mathiak K, Lutzenberger W, Ackermann H. Differential impact of periodic and aperiodic speech-like acoustic signals on magnetic M50/M100 fields. Neuroreport 2000; 11:4017-20. [PMID: 11192621 DOI: 10.1097/00001756-200012180-00023]
Abstract
Voiced and unvoiced sounds, characterized by a periodic or aperiodic acoustic structure, respectively, represent two basic information-bearing elements of the speech signal. Using whole-head magnetencephalography (MEG), magnetic fields (M50/M100) in response to synthetic vowel-like as well as noise-like signals matched in spectral envelope were recorded in 20 subjects. Aperiodic events gave rise to increased M50 activity concomitant with reduced M100 activity as compared to their periodic cognates. Attention toward the auditory channel enhanced the effects of signal periodicity. These data provide the first evidence that speech-relevant acoustic features differentially affect evoked magnetic fields as early as the M50 component. Conceivably, the M50 field reflects an ongoing monitoring process, whereas the M100 component is bound to more specific operations such as detection of signal periodicity.
Affiliation(s)
- I Hertrich
- Department of Neurology, University of Tübingen, Germany
31
Abstract
Cortical processing of change in direction of a perceived sound source was investigated in 12 human subjects using whole-head magnetoencephalography. The German word "da" was presented either with or without 0.7 msec interaural time delays to create the impression of right- or left-lateralized or midline sources, respectively. Midline stimuli served as standards, and lateralized stimuli served as deviants in a mismatch paradigm. Two symmetrically linked dipoles fitted to the mismatch fields showed stronger moments in the hemisphere contralateral to the side of the deviant. The right dipole displayed equal latencies to both left and right deviants, whereas left dipole latencies were longer for ipsilateral than contralateral deviants. Frequency analysis between 20 and 70 Hz and statistical probability mapping revealed increased induced gamma-band activity at 53 ± 2.5 Hz to both types of deviants. Right deviants elicited spectral amplitude enhancements in this frequency range, peaking at latencies of 160 and 240 msec. These effects were localized bilaterally over the angular gyri and posterior temporal regions. Coherence analysis suggested the existence of two separate interhemispheric networks. For left-lateralized deviants, both spectral amplitude enhancements at 110 and 220 msec and coherence increases were restricted to the right hemisphere. In conclusion, both mismatch dipole latencies at the supratemporal plane and gamma-band activity in posterior parietotemporal areas suggested a right-hemisphere engagement in the processing of bidirectional sound-source shifts. In contrast, left-hemisphere regions responded predominantly to contralateral events. These findings may help to elucidate phenomena such as unilateral auditory neglect.
32
Kaiser J, Lutzenberger W, Birbaumer N. Simultaneous bilateral mismatch response to right- but not leftward sound lateralization. Neuroreport 2000; 11:2889-92. [PMID: 11006960 DOI: 10.1097/00001756-200009110-00012]
Abstract
Magnetoencephalography (MEG) was used to compare mismatch responses between hemispheres to changes in sound-source direction. Sixteen adults listened passively to two types of complex non-language sounds presented in separate blocks with midline standards and right- and left-lateralized deviants. Mismatch dipole amplitudes were larger contra- than ipsilaterally to the deviants. Both hemispheres processed right deviants simultaneously, whereas for left deviants the left dipole peaked 20 ms later than the right dipole. A second experiment using the same standards but midline spectral deviants showed no interhemispheric differences. Here mismatch latencies were about 60 ms longer than in the location mismatch experiment. This suggests both fast, contralaterally dominant location mismatch responses and facilitated detection of auditory spatial deviance in the right hemifield.
Affiliation(s)
- J Kaiser
- MEG Center, Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Germany
33
Stefan H, Hummel C, Hopfengärtner R, Pauli E, Tilz C, Ganslandt O, Kober H, Möler A, Buchfelder M. Magnetoencephalography in extratemporal epilepsy. J Clin Neurophysiol 2000; 17:190-200. [PMID: 10831110 DOI: 10.1097/00004691-200003000-00008]
Abstract
Epilepsy surgery candidates with extratemporal foci represent a particular diagnostic and therapeutic challenge because of anatomic and functional features of the pertaining areas. In the last decade, novel developments in the field of electrophysiological techniques have offered new approaches to detailed localization of specific epileptic discharges as well as eloquent regions. Magnetoencephalography, in combination with neuroimaging data and simultaneously recorded EEG, yields promising results in clarifying centers of epileptic activity and their relationship to structural abnormalities and functionally significant areas. Examples are given to illustrate the range of applications of this method as a contribution to routine presurgical evaluation.
Affiliation(s)
- H Stefan
- Department of Neurology, University of Erlangen-Nürnberg, Erlangen, Germany
34
Mathiak K, Hertrich I, Lutzenberger W, Ackermann H. Preattentive processing of consonant vowel syllables at the level of the supratemporal plane: a whole-head magnetencephalography study. Brain Res Cogn Brain Res 1999; 8:251-7. [PMID: 10556603 DOI: 10.1016/s0926-6410(99)00027-0]
Abstract
A variety of clinical and experimental data indicate superiority of the left hemisphere with respect to the encoding of dynamic aspects of the acoustic speech signal such as formant transients, i.e., fast changes of spectral energy distribution across a few tens of milliseconds, which cue the perception of stop consonant-vowel syllables. Using an oddball design, the present study recorded auditory evoked magnetic fields by means of a whole-head device in response to vowels as well as syllable-like structures. Both the N1m component (the magnetic equivalent of the N1 response of the electroencephalogram, EEG) and various difference waves between the magnetic fields to standard and respective rare events (MMNm, magnetic mismatch negativity) were calculated. (a) Vowel mismatch (/a/ against /e/) resulted in an enlarged N1m amplitude, most probably reflecting peripheral adaptation processes. (b) As concerns lateralized responses to syllable-like structures, only the shortest transient duration (10 ms) elicited a significantly enhanced MMNm on the left side. Conceivably, the observed hemispheric difference contributes to prelexical parsing of the auditory signal rather than the encoding of linguistic categories.
Affiliation(s)
- K Mathiak
- Department of Neurology, University of Tübingen, Tübingen, Germany.