1. Dong C, Noppeney U, Wang S. Perceptual uncertainty explains activation differences between audiovisual congruent speech and McGurk stimuli. Hum Brain Mapp 2024; 45:e26653. PMID: 38488460. DOI: 10.1002/hbm.26653.
Abstract
Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with a facial articulation of a /ga/ (i.e., viseme) is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk and the veridical audiovisual congruent speech percepts result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may only increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk compared to congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, typically involved in cognitive control processes. Crucially, in line with Bayesian theories, these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
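The study's behavioral index of perceptual uncertainty is response entropy, i.e., the Shannon entropy of the distribution of syllable responses given to repeated presentations of one stimulus type. A minimal sketch of that computation on hypothetical response counts (the function name and the toy data are illustrative, not taken from the paper):

```python
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of a response distribution.

    counts: response counts per syllable category (e.g., /ba/, /da/, /ga/)
    across repeated presentations of one stimulus type.
    """
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                       # treat 0 * log(0) as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical response counts over 30 trials per condition
congruent_av = [28, 1, 1]              # mostly the veridical response
mcgurk_av = [9, 16, 5]                 # mixture of auditory, fused, and visual responses

print(response_entropy(congruent_av))  # low uncertainty, roughly 0.4 bits
print(response_entropy(mcgurk_av))     # higher uncertainty, roughly 1.4 bits
```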
Affiliation(s)
- Chenjie Dong: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China; Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Uta Noppeney: Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Suiping Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
2. Krason A, Vigliocco G, Mailend ML, Stoll H, Varley R, Buxbaum LJ. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex 2023; 165:86-100. PMID: 37271014. PMCID: PMC10850036. DOI: 10.1016/j.cortex.2023.04.011.
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 neurotypical matched control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette) with audio being clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into our understanding of the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
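The degraded condition uses 6-band noise-vocoded speech. As a rough illustration of how such stimuli are commonly generated (band-pass filtering, envelope extraction, and envelope-modulated noise), here is a hedged sketch; the band edges, filter settings, and synthetic input are assumptions, not the authors' exact stimulus pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_bands=6, lo=100.0, hi=7000.0):
    """Noise-vocode a 1-D speech waveform with n_bands log-spaced frequency bands."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    carrier = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)            # speech restricted to this band
        envelope = np.abs(hilbert(band))           # amplitude envelope of the band
        noise_band = sosfiltfilt(sos, carrier)     # noise restricted to the same band
        out += envelope * noise_band               # envelope-modulated noise carrier
    return out / (np.max(np.abs(out)) + 1e-12)     # normalize to avoid clipping

# Example on a synthetic amplitude-modulated tone standing in for recorded speech
fs = 16000
t = np.arange(fs) / fs
fake_speech = np.sin(2 * np.pi * 220 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(fake_speech, fs)
```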
Affiliation(s)
- Anna Krason: Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Gabriella Vigliocco: Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Marja-Liisa Mailend: Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Special Education, University of Tartu, Tartu Linn, Estonia
- Harrison Stoll: Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Applied Cognitive and Brain Science, Drexel University, Philadelphia, PA, USA
- Laurel J Buxbaum: Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA, USA
3. Rennig J, Beauchamp MS. Intelligibility of audiovisual sentences drives multivoxel response patterns in human superior temporal cortex. Neuroimage 2022; 247:118796. PMID: 34906712. PMCID: PMC8819942. DOI: 10.1016/j.neuroimage.2021.118796.
Abstract
Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data was collected from 22 participants presented with speech consisting of English sentences presented in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press and trials were sorted post-hoc into those that were more or less intelligible. Response patterns were measured in regions of the pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility. When a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences. In contrast, an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.
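The analysis rests on comparing multivoxel response patterns across conditions, for example asking whether the pattern evoked by an intelligible noisy audiovisual sentence resembles the clear audiovisual pattern more than the visual-only pattern. A minimal sketch of such a pattern-similarity comparison on synthetic voxel data (ROI size, noise levels, and variable names are illustrative only):

```python
import numpy as np

def pattern_similarity(pattern_a, pattern_b):
    """Pearson correlation between two multivoxel response patterns."""
    return float(np.corrcoef(pattern_a, pattern_b)[0, 1])

# Synthetic 200-voxel ROI patterns (illustrative only)
rng = np.random.default_rng(1)
clear_av = rng.standard_normal(200)
visual_only = rng.standard_normal(200)
noisy_av_intelligible = clear_av + 0.3 * rng.standard_normal(200)       # resembles clear AV
noisy_av_unintelligible = visual_only + 0.3 * rng.standard_normal(200)  # resembles visual-only

print(pattern_similarity(noisy_av_intelligible, clear_av))       # high correlation
print(pattern_similarity(noisy_av_unintelligible, clear_av))     # near zero
print(pattern_similarity(noisy_av_unintelligible, visual_only))  # high correlation
```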
Affiliation(s)
- Johannes Rennig: Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Michael S Beauchamp: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Richards Medical Research Building, A607, 3700 Hamilton Walk, Philadelphia, PA 19104-6016, United States
4. Yue Q, Martin RC. Components of language processing and their long-term and working memory storage in the brain. Handb Clin Neurol 2022; 187:109-126. PMID: 35964966. DOI: 10.1016/b978-0-12-823493-8.00002-x.
Abstract
There is a consensus that the temporal lobes are involved in representing various types of information critical for language processing, including phonological (i.e., speech sound), semantic (meaning), and orthographic (spelling) representations. An important question is whether the same regions that represent our long-term knowledge of phonology, semantics, and orthography are used to support the maintenance of these types of information in working memory (WM) (for instance, maintaining semantic information during sentence comprehension), or whether regions outside the temporal lobes provide the neural basis for WM maintenance in these domains. This review focuses on the issue of whether temporal lobe regions support WM for phonological information, with a brief discussion of related findings in the semantic and orthographic domains. Across all three domains, evidence from lesion-symptom mapping and functional neuroimaging indicates that parietal or frontal regions are critical for supporting WM, with different regions supporting WM in the three domains. The distinct regions in different domains argue against these regions as playing a general attentional role. The findings imply an interaction between the temporal lobe regions housing the long-term memory representations in these domains and the frontal and parietal regions needed to maintain these representations over time.
Affiliation(s)
- Qiuhai Yue: Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Randi C Martin: Department of Psychological Sciences, Rice University, Houston, TX, United States
5. Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. Brain Lang 2020; 206:104812. PMID: 32447050. PMCID: PMC7379161. DOI: 10.1016/j.bandl.2020.104812.
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
Affiliation(s)
- Kelly Michaelis: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Laura C Erickson: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Mackenzie E Fama: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
- Laura M Skipper-Kallal: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Shihui Xing: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Elizabeth H Lacey: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
- Zainab Anbari: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Gina Norato: Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Josef P Rauschecker: Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Peter E Turkeltaub: Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
6. Randazzo M, Priefer R, Smith PJ, Nagler A, Avery T, Froud K. Neural correlates of modality-sensitive deviance detection in the audiovisual oddball paradigm. Brain Sci 2020; 10:328. PMID: 32481538. PMCID: PMC7348766. DOI: 10.3390/brainsci10060328.
Abstract
The McGurk effect, an incongruent pairing of visual /ga/ and acoustic /ba/, creates a fusion illusion /da/ and is the cornerstone of research in audiovisual speech perception. Combination illusions occur when the input modalities are reversed (auditory /ga/ and visual /ba/), yielding a /bga/ percept. A robust literature shows that fusion illusions in an oddball paradigm evoke a mismatch negativity (MMN) in the auditory cortex, in the absence of changes to the acoustic stimuli. We compared fusion and combination illusions in a passive oddball paradigm to further examine the influence of visual and auditory aspects of incongruent speech stimuli on the audiovisual MMN. Participants viewed videos under two audiovisual illusion conditions, fusion (with the visual aspect of the stimulus changing) and combination (with the auditory aspect of the stimulus changing), as well as two unimodal auditory-only and visual-only conditions. Fusion and combination deviants exerted similar influence in generating congruency predictions, with significant differences between standards and deviants in the N100 time window. Presence of the MMN in early and late time windows differentiated fusion from combination deviants. When the visual signal changes, a new percept is created, but when the visual signal is held constant and the auditory signal changes, the response is suppressed, evoking a later MMN. In alignment with models of predictive processing in audiovisual speech perception, we interpreted our results to indicate that visual information can both predict and suppress auditory speech perception.
Affiliation(s)
- Melissa Randazzo: Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA (correspondence: Tel. +1-516-877-4769)
- Ryan Priefer: Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA
- Paul J. Smith: Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
- Amanda Nagler: Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA
- Trey Avery: Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
- Karen Froud: Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
7. Basirat A, Allart É, Brunellière A, Martin Y. Audiovisual speech segmentation in post-stroke aphasia: a pilot study. Top Stroke Rehabil 2019; 26:588-594. PMID: 31369358. DOI: 10.1080/10749357.2019.1643566.
Abstract
Background: Stroke may cause sentence comprehension disorders. Speech segmentation, i.e. the ability to detect word boundaries while listening to continuous speech, is an initial step allowing the successful identification of words and the accurate understanding of meaning within sentences. It has received little attention in people with post-stroke aphasia (PWA). Objectives: Our goal was to study speech segmentation in PWA and examine the potential benefit of seeing the speakers' articulatory gestures while segmenting sentences. Methods: Fourteen PWA and twelve healthy controls participated in this pilot study. Performance was measured with a word-monitoring task. In the auditory-only modality, participants were presented with auditory-only stimuli while in the audiovisual modality, visual speech cues (i.e. speaker's articulatory gestures) accompanied the auditory input. The proportion of correct responses was calculated for each participant and each modality. Visual enhancement was then calculated in order to estimate the potential benefit of seeing the speaker's articulatory gestures. Results: Both in auditory-only and audiovisual modalities, PWA performed significantly less well than controls, who had 100% correct performance in both modalities. The performance of PWA was correlated with their phonological ability. Six PWA used the visual cues. Group level analysis performed on PWA did not show any reliable difference between the auditory-only and audiovisual modalities (median of visual enhancement = 7% [Q1 - Q3: -5 - 39]). Conclusion: Our findings show that speech segmentation disorder may exist in PWA. This points to the importance of assessing and training speech segmentation after stroke. Further studies should investigate the characteristics of PWA who use visual speech cues during sentence processing.
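The per-participant benefit of seeing the speaker is summarized as a visual enhancement score. The abstract does not give the formula, so the sketch below uses one common definition (audiovisual gain normalized by the room for improvement over auditory-only performance) purely as an illustrative assumption:

```python
def visual_enhancement(av_correct, a_correct):
    """One common visual-enhancement index: (AV - A) / (1 - A).

    Inputs are proportions correct; whether the study used exactly this
    normalization is an assumption made for illustration.
    """
    if a_correct >= 1.0:
        return 0.0                    # already at ceiling, no room for improvement
    return (av_correct - a_correct) / (1.0 - a_correct)

# Hypothetical participant: 50% correct auditory-only, 60% correct audiovisual
print(round(visual_enhancement(0.60, 0.50), 2))   # 0.2, i.e., 20% of the possible gain
```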
Affiliation(s)
- Anahita Basirat: UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
- Étienne Allart: Neurorehabilitation Unit, Lille University Medical Center, Lille, France; Inserm U1171, University Lille, Degenerative and Vascular Cognitive Disorders, Lille, France
- Angèle Brunellière: UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
8. Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in multisensory integration after stroke. J Cogn Neurosci 2019; 31:885-899. PMID: 30883294. DOI: 10.1162/jocn_a_01389.
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
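Deciding whether faster audiovisual responses reflect integration rather than statistical facilitation is commonly done by testing the observed RT distribution against the race-model bound, i.e., checking whether F_AV(t) exceeds F_A(t) + F_V(t) at any time t (Miller's race model inequality). A minimal sketch on synthetic reaction times; whether this is the exact procedure used in the study is an assumption:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated at each time in t_grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """F_AV(t) minus the race-model bound min(F_A(t) + F_V(t), 1).

    Positive values are taken as evidence for integration beyond statistical
    facilitation (Miller's race model inequality).
    """
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return ecdf(rt_av, t_grid) - bound

# Synthetic reaction times (ms), illustrative only
rng = np.random.default_rng(2)
rt_a = rng.normal(420, 60, 200)
rt_v = rng.normal(440, 60, 200)
rt_av = rng.normal(360, 50, 200)      # clearly faster than either unisensory condition

t_grid = np.arange(250, 601, 10)
violation = race_model_violation(rt_av, rt_a, rt_v, t_grid)
print(bool(violation.max() > 0))      # True if the bound is exceeded at some time point
```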
Affiliation(s)
- Tanja C W Nijboer: Helmholtz Institute, Utrecht University; Brain Center Rudolph Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
9. Altvater-Mackensen N, Grossmann T. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018; 34:130-138. PMID: 30391756. PMCID: PMC6969291. DOI: 10.1016/j.dcn.2018.10.002.
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
Affiliation(s)
- Nicole Altvater-Mackensen: Department of Psychology, Johannes-Gutenberg-University Mainz, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tobias Grossmann: Department of Psychology, University of Virginia, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
10. Proverbio AM, Raso G, Zani A. Electrophysiological indexes of incongruent audiovisual phonemic processing: unraveling the McGurk effect. Neuroscience 2018; 385:215-226. PMID: 29932985. DOI: 10.1016/j.neuroscience.2018.06.021.
Abstract
In this study the timing of electromagnetic signals recorded during incongruent and congruent audiovisual (AV) stimulation in 14 Italian healthy volunteers was examined. In a previous study (Proverbio et al., 2016) we investigated the McGurk effect in the Italian language and determined which visual and auditory inputs provided the most compelling illusory effects (e.g., bilabial phonemes presented acoustically and paired with non-labials, especially alveolar-nasal and velar-occlusive phonemes). In this study EEG was recorded from 128 scalp sites while participants observed a female and a male actor uttering 288 syllables (each lasting approximately 600 ms) selected on the basis of the previous investigation, and responded to rare targets (/re/, /ri/, /ro/, /ru/). In half of the cases the AV information was incongruent, except for targets, which were always congruent. A pMMN (phonological Mismatch Negativity) to incongruent AV stimuli was identified 500 ms after voice onset time. This automatic response indexed the detection of an incongruity between the labial and phonetic information. SwLORETA (Low-Resolution Electromagnetic Tomography) analysis applied to the difference voltage (incongruent minus congruent) in the same time window revealed that the strongest sources of this activity were the right superior temporal (STG) and superior frontal gyri, which supports their involvement in AV integration.
Affiliation(s)
- Alice Mado Proverbio: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Italy
- Giulia Raso: Neuro-Mi Center for Neuroscience, Dept. of Psychology, University of Milano-Bicocca, Italy
11. Alsius A, Paré M, Munhall KG. Forty years after hearing lips and seeing voices: the McGurk effect revisited. Multisens Res 2018; 31:111-144. PMID: 31264597. DOI: 10.1163/22134808-00002565.
Abstract
Since its discovery 40 years ago, the McGurk illusion has usually been cited as a prototypical case of multisensory binding in humans, and has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both phenomenological and neural levels. This calls into question the suitability of this illusion as a tool to quantify the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in the processing of the McGurk effect, experimenters should be cautious when generalizing data generated by McGurk stimuli to matching audiovisual speech events.
Affiliation(s)
- Agnès Alsius: Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
- Martin Paré: Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
- Kevin G Munhall: Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
12. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia. Neuroscience 2017; 356:1-10. DOI: 10.1016/j.neuroscience.2017.05.017.
13. Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual distractors disrupt audiovisual integration regardless of stimulus complexity. Front Integr Neurosci 2017; 11:1. PMID: 28163675. PMCID: PMC5247431. DOI: 10.3389/fnint.2017.00001.
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
Affiliation(s)
- Kyla D Gibney: Department of Neuroscience, Oberlin College, Oberlin OH, USA
- Sarah R Nunes: Department of Neuroscience, Oberlin College, Oberlin OH, USA
- Leslie D Kwakye: Department of Neuroscience, Oberlin College, Oberlin OH, USA
14. Kawase T, Yahata I, Kanno A, Sakamoto S, Takanashi Y, Takata S, Nakasato N, Kawashima R, Katori Y. Impact of audio-visual asynchrony on lip-reading effects - neuromagnetic and psychophysical study. PLoS One 2016; 11:e0168740. PMID: 28030631. PMCID: PMC5193434. DOI: 10.1371/journal.pone.0168740.
Abstract
The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage.
Affiliation(s)
- Tetsuaki Kawase: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan; Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, Sendai, Miyagi, Japan; Department of Audiology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Izumi Yahata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Akitake Kanno: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Shuichi Sakamoto: Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi, Japan
- Yoshitaka Takanashi: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Shiho Takata: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Nobukazu Nakasato: Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
- Ryuta Kawashima: Department of Functional Brain Imaging, Institute of Development, Aging and Cancer, Tohoku University, Sendai, Miyagi, Japan
- Yukio Katori: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, Sendai, Miyagi, Japan
15. Meaux E, Vuilleumier P. Facing mixed emotions: analytic and holistic perception of facial emotion expressions engages separate brain networks. Neuroimage 2016; 141:154-173. DOI: 10.1016/j.neuroimage.2016.07.004.
16. Ille S, Kulchytska N, Sollmann N, Wittig R, Beurskens E, Butenschoen VM, Ringel F, Vajkoczy P, Meyer B, Picht T, Krieg SM. Hemispheric language dominance measured by repetitive navigated transcranial magnetic stimulation and postoperative course of language function in brain tumor patients. Neuropsychologia 2016; 91:50-60. DOI: 10.1016/j.neuropsychologia.2016.07.025.
17. Slevc LR, Martin RC. Syntactic agreement attraction reflects working memory processes. J Cogn Psychol 2016. DOI: 10.1080/20445911.2016.1202252.
18. Fercho K, Baugh LA, Hanson EK. Effects of alphabet-supplemented speech on brain activity of listeners: an fMRI study. J Speech Lang Hear Res 2015; 58:1452-1463. PMID: 26254449. DOI: 10.1044/2015_jslhr-s-14-0038.
Abstract
PURPOSE: The purpose of this article was to examine the neural mechanisms associated with increases in speech intelligibility brought about through alphabet supplementation. METHOD: Neurotypical participants listened to dysarthric speech while watching an accompanying video of a hand pointing to the 1st letter spoken of each word on an alphabet display (treatment condition) or a scrambled display (control condition). Their hemodynamic response was measured with functional magnetic resonance imaging, using a sparse sampling event-related paradigm. Speech intelligibility was assessed via a forced-choice auditory identification task throughout the scanning session. RESULTS: Alphabet supplementation was associated with significant increases in speech intelligibility. Further, alphabet supplementation increased activation in brain regions known to be involved in both auditory speech and visual letter perception above that seen with the scrambled display. Significant increases in functional activity were observed within the posterior to mid superior temporal sulcus/superior temporal gyrus during alphabet supplementation, regions known to be involved in speech processing and audiovisual integration. CONCLUSION: Alphabet supplementation is an effective tool for increasing the intelligibility of degraded speech and is associated with changes in activity within audiovisual integration sites. Changes in activity within the superior temporal sulcus/superior temporal gyrus may be related to the behavioral increases in intelligibility brought about by this augmented communication method.
19. Lüttke CS, Ekman M, van Gerven MAJ, de Lange FP. Preference for audiovisual speech congruency in superior temporal cortex. J Cogn Neurosci 2015; 28:1-7. PMID: 26351991. DOI: 10.1162/jocn_a_00874.
Abstract
Auditory speech perception can be altered by concurrent visual information. The superior temporal cortex is an important combining site for this integration process. This area was previously found to be sensitive to audiovisual congruency. However, the direction of this congruency effect (i.e., stronger or weaker activity for congruent compared to incongruent stimulation) has been more equivocal. Here, we used fMRI to examine the neural responses of human participants during the McGurk illusion, in which auditory /aba/ and visual /aga/ inputs are fused into a perceived /ada/, in a large homogeneous sample of participants who consistently experienced this illusion. This enabled us to compare the neuronal responses during congruent audiovisual stimulation with incongruent audiovisual stimulation leading to the McGurk illusion, while avoiding the possible confounding factor of sensory surprise that can occur when McGurk stimuli are only occasionally perceived. We found larger activity for congruent audiovisual stimuli than for incongruent (McGurk) stimuli in bilateral superior temporal cortex, extending into the primary auditory cortex. This finding suggests that superior temporal cortex responds more strongly when auditory and visual input support the same representation.
20. Riedel P, Ragert P, Schelinski S, Kiebel SJ, von Kriegstein K. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex 2015; 68:86-99. DOI: 10.1016/j.cortex.2014.11.016.
21. Ille S, Sollmann N, Hauck T, Maurer S, Tanigawa N, Obermueller T, Negwer C, Droese D, Zimmer C, Meyer B, Ringel F, Krieg SM. Combined noninvasive language mapping by navigated transcranial magnetic stimulation and functional MRI and its comparison with direct cortical stimulation. J Neurosurg 2015; 123:212-225. DOI: 10.3171/2014.9.jns14929.
Abstract
OBJECT
Repetitive navigated transcranial magnetic stimulation (rTMS) is now increasingly used for preoperative language mapping in patients with lesions in language-related areas of the brain. Yet its correlation with intraoperative direct cortical stimulation (DCS) has to be improved. To increase rTMS's specificity and positive predictive value, the authors aim to provide thresholds for rTMS's positive language areas. Moreover, they propose a protocol for combining rTMS with functional MRI (fMRI) to combine the strength of both methods.
METHODS
The authors performed multimodal language mapping in 35 patients with left-sided perisylvian lesions by using rTMS, fMRI, and DCS. The rTMS mappings were conducted with a picture-to-trigger interval (PTI, time between stimulus presentation and stimulation onset) of either 0 or 300 msec. The error rates (ERs; that is, the number of errors per number of stimulations) were calculated for each region of the cortical parcellation system (CPS). Subsequently, the rTMS mappings were analyzed through different error rate thresholds (ERT; that is, the ER at which a CPS region was defined as language positive in terms of rTMS), and the 2-out-of-3 rule (a stimulation site was defined as language positive in terms of rTMS if at least 2 out of 3 stimulations caused an error). As a second step, the authors combined the results of fMRI and rTMS in a predefined protocol of combined noninvasive mapping. To validate this noninvasive protocol, they correlated its results to DCS during awake surgery.
RESULTS
The analysis by different rTMS ERTs obtained the highest correlation regarding sensitivity and a low rate of false positives for the ERTs of 15%, 20%, 25%, and the 2-out-of-3 rule. However, when comparing the combined fMRI and rTMS results with DCS, the authors observed an overall specificity of 83%, a positive predictive value of 51%, a sensitivity of 98%, and a negative predictive value of 95%.
CONCLUSIONS
In comparison with fMRI, rTMS is a more sensitive but less specific tool for preoperative language mapping than DCS. Moreover, rTMS is most reliable when using ERTs of 15%, 20%, 25%, or the 2-out-of-3 rule and a PTI of 0 msec. Furthermore, the combination of fMRI and rTMS leads to a higher correlation to DCS than both techniques alone, and the presented protocols for combined noninvasive language mapping might play a supportive role in the language-mapping assessment prior to the gold-standard intraoperative DCS.
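The thresholding logic in this protocol is simple: a cortical region counts as rTMS language-positive when its error rate (errors per stimulation) reaches a chosen threshold, or when at least 2 of the 3 stimulations of a site cause an error, and the resulting labels are then scored against DCS with sensitivity, specificity, PPV, and NPV. A minimal sketch with toy numbers (the data and helper names are illustrative, not the study's):

```python
def positive_by_error_rate(errors, stimulations, ert=0.20):
    """Region is rTMS language-positive if its error rate meets the threshold (e.g., 0.15-0.25)."""
    return stimulations > 0 and errors / stimulations >= ert

def positive_by_two_of_three(site_errors):
    """2-out-of-3 rule for one site: at least 2 of its 3 stimulations caused an error."""
    return sum(bool(e) for e in site_errors) >= 2

def agreement_with_dcs(rtms_labels, dcs_labels):
    """Sensitivity, specificity, PPV, and NPV of rTMS labels against DCS as ground truth."""
    tp = sum(r and d for r, d in zip(rtms_labels, dcs_labels))
    tn = sum((not r) and (not d) for r, d in zip(rtms_labels, dcs_labels))
    fp = sum(r and (not d) for r, d in zip(rtms_labels, dcs_labels))
    fn = sum((not r) and d for r, d in zip(rtms_labels, dcs_labels))
    safe = lambda num, den: num / den if den else float("nan")
    return {
        "sensitivity": safe(tp, tp + fn),
        "specificity": safe(tn, tn + fp),
        "ppv": safe(tp, tp + fp),
        "npv": safe(tn, tn + fn),
    }

# Toy data: per-region (errors, stimulations) and the DCS verdict (True = language site)
regions = [(3, 10), (0, 12), (5, 15), (1, 10), (6, 9)]
dcs = [True, False, False, False, True]
rtms = [positive_by_error_rate(e, n, ert=0.20) for e, n in regions]
print(agreement_with_dcs(rtms, dcs))  # sensitive (no misses) but one false positive
```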
Affiliation(s)
- Noriko Tanigawa: Faculty of Linguistics, Philology, & Phonetics, University of Oxford, United Kingdom
- Doris Droese: Department of Anesthesiology, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Claus Zimmer: TUM-Neuroimaging Center; Section of Neuroradiology, Department of Radiology
22. Cortical distribution of speech and language errors investigated by visual object naming and navigated transcranial magnetic stimulation. Brain Struct Funct 2015; 221:2259-2286. DOI: 10.1007/s00429-015-1042-7.
23. Language and its right-hemispheric distribution in healthy brains: an investigation by repetitive transcranial magnetic stimulation. Neuroimage 2014; 102 Pt 2:776-788. PMID: 25219508. DOI: 10.1016/j.neuroimage.2014.09.002.
24. Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-123. PMID: 25128432. PMCID: PMC4326640. DOI: 10.1016/j.neuropsychologia.2014.08.005.
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
Affiliation(s)
- Mark T Wallace: Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson: Department of Psychology, University of Toronto, Toronto, ON, Canada
25. Chen T, Michels L, Supekar K, Kochalka J, Ryali S, Menon V. Role of the anterior insular cortex in integrative causal signaling during multisensory auditory-visual attention. Eur J Neurosci 2014; 41:264-274. PMID: 25352218. DOI: 10.1111/ejn.12764.
Abstract
Coordinated attention to information from multiple senses is fundamental to our ability to respond to salient environmental events, yet little is known about brain network mechanisms that guide integration of information from multiple senses. Here we investigate dynamic causal mechanisms underlying multisensory auditory-visual attention, focusing on a network of right-hemisphere frontal-cingulate-parietal regions implicated in a wide range of tasks involving attention and cognitive control. Participants performed three 'oddball' attention tasks involving auditory, visual and multisensory auditory-visual stimuli during fMRI scanning. We found that the right anterior insula (rAI) demonstrated the most significant causal influences on all other frontal-cingulate-parietal regions, serving as a major causal control hub during multisensory attention. Crucially, we then tested two competing models of the role of the rAI in multisensory attention: an 'integrated' signaling model in which the rAI generates a common multisensory control signal associated with simultaneous attention to auditory and visual oddball stimuli versus a 'segregated' signaling model in which the rAI generates two segregated and independent signals in each sensory modality. We found strong support for the integrated, rather than the segregated, signaling model. Furthermore, the strength of the integrated control signal from the rAI was most pronounced on the dorsal anterior cingulate and posterior parietal cortices, two key nodes of saliency and central executive networks respectively. These results were preserved with the addition of a superior temporal sulcus region involved in multisensory processing. Our study provides new insights into the dynamic causal mechanisms by which the AI facilitates multisensory attention.
Affiliation(s)
- Tianwen Chen: Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 Quarry Road, Stanford, CA, 94305, USA
26. Optimal timing of pulse onset for language mapping with navigated repetitive transcranial magnetic stimulation. Neuroimage 2014; 100:219-236. DOI: 10.1016/j.neuroimage.2014.06.016.
27. Erickson LC, Heeg E, Rauschecker JP, Turkeltaub PE. An ALE meta-analysis on the audiovisual integration of speech signals. Hum Brain Mapp 2014; 35:5587-5605. PMID: 24996043. DOI: 10.1002/hbm.22572.
Abstract
The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation (ALE) meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals.
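ALE works by smoothing each experiment's reported activation foci into a modeled activation map and then combining those maps across experiments as a probabilistic union, so that voxels where many independent experiments report nearby foci receive high ALE values. The sketch below is a deliberately simplified 1-D illustration of that idea; real ALE operates on 3-D MNI coordinates with sample-size-dependent kernels and permutation-based significance thresholding, none of which is reproduced here:

```python
import numpy as np

def modeled_activation(foci, grid, fwhm=10.0, voxel=1.0):
    """Per-experiment modeled activation map (1-D sketch): per-voxel maximum of
    Gaussian focus probabilities, so several nearby foci from the same
    experiment do not inflate the map."""
    sigma = fwhm / 2.3548
    ma = np.zeros_like(grid, dtype=float)
    for x0 in foci:
        p = voxel * np.exp(-0.5 * ((grid - x0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        ma = np.maximum(ma, p)
    return ma

def ale(experiments, grid, fwhm=10.0):
    """ALE map: probabilistic union of the per-experiment modeled activation maps."""
    prob_not_active = np.ones_like(grid, dtype=float)
    for foci in experiments:
        prob_not_active *= 1.0 - modeled_activation(foci, grid, fwhm)
    return 1.0 - prob_not_active

# Toy 1-D "brain" (mm) and foci reported by three hypothetical experiments
grid = np.arange(0.0, 101.0, 1.0)
experiments = [[48.0, 72.0], [52.0], [50.0, 20.0]]
ale_map = ale(experiments, grid)
print(grid[np.argmax(ale_map)])   # 50.0: location of greatest cross-experiment convergence
```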
Affiliation(s)
- Laura C Erickson: Department of Neurology, Georgetown University Medical Center, Washington, District of Columbia; Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
28. Krieg SM, Sollmann N, Hauck T, Ille S, Meyer B, Ringel F. Repeated mapping of cortical language sites by preoperative navigated transcranial magnetic stimulation compared to repeated intraoperative DCS mapping in awake craniotomy. BMC Neurosci 2014; 15:20. PMID: 24479694. PMCID: PMC3909378. DOI: 10.1186/1471-2202-15-20.
Abstract
Background: Repetitive navigated transcranial magnetic stimulation (rTMS) was recently described for mapping of human language areas. However, its capability of detecting language plasticity in brain tumor patients had not yet been proven. Thus, this study was designed to evaluate such data in order to compare rTMS language mapping to language mapping during repeated awake surgery at follow-up in patients suffering from language-eloquent gliomas. Methods: Three right-handed patients with left-sided gliomas (2 opercular glioblastomas, 1 astrocytoma WHO grade III of the angular gyrus) underwent preoperative language mapping by rTMS as well as intraoperative language mapping provided via direct cortical stimulation (DCS) for the initial as well as for the repeated resection 7, 10, and 15 months later. Results: Overall, preoperative rTMS was able to elicit clear language errors in all mappings. A good correlation between initial rTMS and DCS results was observed. As a consequence of brain plasticity, initial DCS and rTMS findings only corresponded with the results obtained during the second examination in one out of three patients, thus suggesting changes of language organization in two of our three patients. Conclusions: This report points out the usefulness but also the limitations of preoperative rTMS language mapping to detect plastic changes in language function or for long-term follow-up prior to DCS, even in recurrent gliomas. However, DCS still has to be regarded as the gold standard.
Affiliation(s)
- Florian Ringel: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Ismaninger Straße 22, Munich, 81675, Germany
29. Krieg SM, Sollmann N, Hauck T, Ille S, Foerschler A, Meyer B, Ringel F. Functional language shift to the right hemisphere in patients with language-eloquent brain tumors. PLoS One 2013; 8:e75403. PMID: 24069410. PMCID: PMC3775731. DOI: 10.1371/journal.pone.0075403.
Abstract
Objectives: Language function is mainly located within the left hemisphere of the brain, especially in right-handed subjects. However, functional MRI (fMRI) has demonstrated changes of language organization in patients with left-sided perisylvian lesions to the right hemisphere. Because intracerebral lesions can impair fMRI, this study was designed to investigate human language plasticity with a virtual lesion model using repetitive navigated transcranial magnetic stimulation (rTMS). Experimental design: Fifteen patients with lesions of left-sided language-eloquent brain areas and 50 healthy and purely right-handed participants underwent bilateral rTMS language mapping via an object-naming task. All patients were proven to have left-sided language function during awake surgery. The rTMS-induced language errors were categorized into 6 different error types. The error ratio (induced errors/number of stimulations) was determined for each brain region on both hemispheres. A hemispheric dominance ratio was then defined for each region as the quotient of the error ratio (left/right) of the corresponding area of both hemispheres (ratio >1 = left dominant; ratio <1 = right dominant). Results: Patients with language-eloquent lesions showed a statistically significantly lower ratio than healthy participants concerning “all errors” and “all errors without hesitations”, which indicates a higher participation of the right hemisphere in language function. Yet, there was no cortical region with a pronounced difference in language dominance compared to the whole hemisphere. Conclusions: This is the first study that shows by means of an anatomically accurate virtual lesion model that a shift of language function to the non-dominant hemisphere can occur.
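The hemispheric dominance ratio defined here is directly computable: the rTMS error ratio of a left-hemisphere region divided by the error ratio of its right-hemisphere homologue. A small sketch with hypothetical counts (the values are illustrative only):

```python
def error_ratio(errors, stimulations):
    """rTMS error ratio for one cortical region: induced errors / number of stimulations."""
    return errors / stimulations if stimulations else 0.0

def dominance_ratio(left_errors, left_stims, right_errors, right_stims):
    """Hemispheric dominance ratio for a pair of homologous regions:
    left error ratio / right error ratio (>1 = left dominant, <1 = right dominant)."""
    right = error_ratio(right_errors, right_stims)
    if right == 0:
        return float("inf")           # no right-hemisphere errors: maximally left dominant
    return error_ratio(left_errors, left_stims) / right

# Hypothetical homologous region: 12 errors in 60 stimulations left, 4 in 60 right
print(dominance_ratio(12, 60, 4, 60))   # 3.0 -> left dominant
```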
Affiliation(s)
- Sandro M. Krieg: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
- Nico Sollmann: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
- Theresa Hauck: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
- Sebastian Ille: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
- Annette Foerschler: Section of Neuroradiology, Klinikum rechts der Isar, Technische Universität München, Germany
- Bernhard Meyer: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
- Florian Ringel: Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Germany
30. Sinke C, Neufeld J, Zedler M, Emrich HM, Bleich S, Münte TF, Szycik GR. Reduced audiovisual integration in synesthesia - evidence from bimodal speech perception. J Neuropsychol 2012; 8:94-106. PMID: 23279836. DOI: 10.1111/jnp.12006.
Abstract
Recent research suggests that synesthesia results from a hypersensitive multimodal binding mechanism. To address the question whether multimodal integration is altered in synesthetes in general, grapheme-colour and auditory-visual synesthetes were investigated using speech-related stimulation in two behavioural experiments. First, we used the McGurk illusion to test the strength and number of illusory perceptions in synesthesia. In a second step, we analysed the gain in speech perception coming from seen articulatory movements under acoustically noisy conditions. We used disyllabic nouns as stimulation and varied the signal-to-noise ratio of the auditory stream presented concurrently with a matching video of the speaker. We hypothesized that if synesthesia is due to a general hyperbinding mechanism, this group of subjects should be more susceptible to McGurk illusions and profit more from the visual information during audiovisual speech perception. The results indicate that there are differences between synesthetes and controls concerning multisensory integration, but in the opposite direction to that hypothesized. Synesthetes showed a reduced number of illusions and had a reduced gain in comprehension from viewing matching articulatory movements in comparison to control subjects. Our results indicate that rather than having a hypersensitive binding mechanism, synesthetes show weaker integration of vision and audition.
Affiliation(s)
- Christopher Sinke: Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany; Center of Systems Neuroscience, Hanover, Germany