101
Foxe JJ, Molholm S, Del Bene VA, Frey HP, Russo NN, Blanco D, Saint-Amour D, Ross LA. Severe multisensory speech integration deficits in high-functioning school-aged children with Autism Spectrum Disorder (ASD) and their resolution during early adolescence. Cereb Cortex 2013; 25:298-312. [PMID: 23985136] [DOI: 10.1093/cercor/bht213]
Abstract
Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5-12 year olds), but were fully ameliorated in ASD children entering adolescence (13-15 year olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children.
102
Stekelenburg JJ, Maes JP, Van Gool AR, Sitskoorn M, Vroomen J. Deficient multisensory integration in schizophrenia: an event-related potential study. Schizophr Res 2013; 147:253-61. [PMID: 23707640] [DOI: 10.1016/j.schres.2013.04.038]
Abstract
BACKGROUND: In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later-occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. METHODS: Electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound, which was either congruent or incongruent with the video. RESULTS: For the healthy control group, visual information reduced the auditory-evoked N1 compared with a sound-only condition, and stimulus congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. CONCLUSIONS: The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia.
103
Brandwein AB, Foxe JJ, Butler JS, Russo NN, Altschuler TS, Gomes H, Molholm S. The development of multisensory integration in high-functioning autism: high-density electrical mapping and psychophysical measures reveal impairments in the processing of audiovisual inputs. Cereb Cortex 2013; 23:1329-41. [PMID: 22628458] [PMCID: PMC3643715] [DOI: 10.1093/cercor/bhs109]
Abstract
Successful integration of auditory and visual inputs is crucial for both basic perceptual functions and for higher-order processes related to social cognition. Autism spectrum disorders (ASD) are characterized by impairments in social cognition and are associated with abnormalities in sensory and perceptual processes. Several groups have reported that individuals with ASD are impaired in their ability to integrate socially relevant audiovisual (AV) information, and it has been suggested that this contributes to the higher-order social and cognitive deficits observed in ASD. However, successful integration of auditory and visual inputs also influences detection and perception of nonsocial stimuli, and integration deficits may impair earlier stages of information processing, with cascading downstream effects. To assess the integrity of basic AV integration, we recorded high-density electrophysiology from a cohort of high-functioning children with ASD (7-16 years) while they performed a simple AV reaction time task. Children with ASD showed considerably less behavioral facilitation to multisensory inputs, deficits that were paralleled by less effective neural integration. Evidence for processing differences relative to typically developing children was seen as early as 100 ms poststimulation, and topographic analysis suggested that children with ASD relied on different cortical networks during this early multisensory processing stage.
104
Germine L, Benson TL, Cohen F, Hooker CI. Psychosis-proneness and the rubber hand illusion of body ownership. Psychiatry Res 2013; 207:45-52. [PMID: 23273611] [DOI: 10.1016/j.psychres.2012.11.022]
Abstract
Psychosis and psychosis-proneness are associated with abnormalities in subjective experience of the self, including distortions in bodily experience that are difficult to study experimentally due to a lack of structured methods. In 55 healthy adults, we assessed the relationship between self-reported psychosis-like characteristics and susceptibility to the rubber hand illusion of body ownership. In this illusion, a participant sees a rubber hand being stroked by a brush at the same time that they feel a brush stroking their own hand. In some individuals, this creates the bodily sense that the rubber hand is their own hand. Individual differences in positive (but not negative) psychosis-like characteristics predicted differences in susceptibility to experiencing the rubber hand illusion. This relationship was specific to the subjective experience of rubber hand ownership, not to other unusual experiences or sensations, and was absent when a small delay was introduced between seeing and feeling the brush stroke. This indicates that individual differences in susceptibility are related to visual-tactile integration and cannot be explained by differences in the tendency to endorse unusual experiences. Our findings suggest that susceptibility to body representation distortion by sensory information may be related to, or contribute to, the development of psychosis and positive psychosis-like characteristics.
105
Liu B, Lin Y, Gao X, Dang J. Correlation between audio-visual enhancement of speech in different noise environments and SNR: a combined behavioral and electrophysiological study. Neuroscience 2013; 247:145-51. [PMID: 23673276] [DOI: 10.1016/j.neuroscience.2013.05.007]
Abstract
In the present study, we investigated multisensory gain in different noise environments, defined behaviorally as the difference in speech recognition accuracy between the audio-visual (AV) and auditory-only (A) conditions, and electrophysiologically as the difference between the event-related potentials (ERPs) evoked under the AV condition and the sum of the ERPs evoked under the A and visual-only (V) conditions. Videos of a female speaker articulating Chinese monosyllabic words, accompanied by different levels of pink noise, were used as the stimulus materials. The selected signal-to-noise ratios (SNRs) were -16, -12, -8, -4 and 0 dB. Speech recognition accuracy was measured under the A, V and AV conditions, and the ERPs evoked under each condition were analyzed. The behavioral results showed that the gain in recognition accuracy (AV minus A) was maximal at the -12 dB SNR. The ERP results showed that the multisensory gain [AV minus (A + V)] at the -12 dB SNR was significantly larger than at the other SNRs in the 130-200 ms time window over fronto-central regions. The multisensory gains in audio-visual speech recognition at different SNRs were not fully consistent with the principle of inverse effectiveness, but conformed to cross-modal stochastic resonance.
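Both gain measures described in this abstract reduce to simple condition contrasts. The following minimal sketch, using entirely hypothetical accuracy values and ERP arrays (the numbers and variable names are illustrative, not the authors' data or analysis code), shows how such behavioral and additive-model ERP contrasts are typically computed.

```python
import numpy as np

# Hypothetical per-SNR recognition accuracies (proportion correct).
acc_av = np.array([0.35, 0.62, 0.78, 0.88, 0.93])  # audiovisual (AV)
acc_a  = np.array([0.10, 0.28, 0.55, 0.80, 0.90])  # auditory-only (A)
behavioral_gain = acc_av - acc_a                    # AV minus A, per SNR

# Hypothetical condition-averaged ERPs (channels x time samples).
erp_av, erp_a, erp_v = (np.random.randn(64, 500) for _ in range(3))
# Additive-model contrast: AV minus (A + V); deviations from zero index
# integration beyond the sum of the unisensory responses.
superadditivity = erp_av - (erp_a + erp_v)
```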
106
Temporal event structure and timing in schizophrenia: Preserved binding in a longer “now”. Neuropsychologia 2013; 51:358-71. [DOI: 10.1016/j.neuropsychologia.2012.07.002]
107
Schepers IM, Schneider TR, Hipp JF, Engel AK, Senkowski D. Noise alters beta-band activity in superior temporal cortex during audiovisual speech processing. Neuroimage 2012; 70:101-12. [PMID: 23274182] [DOI: 10.1016/j.neuroimage.2012.11.066]
Abstract
Speech recognition is improved when complementary visual information is available, especially under noisy acoustic conditions. Functional neuroimaging studies have suggested that the superior temporal sulcus (STS) plays an important role in this improvement. The spectrotemporal dynamics underlying audiovisual speech processing in the STS, and how these dynamics are affected by auditory noise, are not well understood. Using electroencephalography, we investigated how auditory noise affects audiovisual speech processing in event-related potentials (ERPs) and oscillatory activity. Spoken syllables were presented in audiovisual (AV) and auditory-only (A) trials at three different auditory noise levels (no, low, and high). Responses to A stimuli were subtracted from responses to AV stimuli, separately for each noise level, and these difference responses were subjected to statistical analysis. Central ERPs differed between the no-noise and the two noise conditions from 130 to 150 ms and 170 to 210 ms after auditory stimulus onset. Source localization using the local autoregressive average procedure revealed an involvement of the lateral temporal lobe, encompassing the superior and middle temporal gyrus. Neuronal activity in the beta-band (16 to 32 Hz) was suppressed at central channels around 100 to 400 ms after auditory stimulus onset in the AV minus A signal averaged over the three noise levels. This suppression was smaller in the high-noise condition than in the no-noise and low-noise conditions, possibly reflecting disturbed recognition or altered processing of multisensory speech stimuli. Source analysis of the beta-band effect using linear beamforming demonstrated an involvement of the STS. Our study shows that auditory noise alters audiovisual speech processing in ERPs localized to the lateral temporal lobe and provides evidence that beta-band activity in the STS plays a role in audiovisual speech processing under regular and noisy acoustic conditions.
108
Müller VI, Kellermann TS, Seligman SC, Turetsky BI, Eickhoff SB. Modulation of affective face processing deficits in schizophrenia by congruent emotional sounds. Soc Cogn Affect Neurosci 2012; 9:436-44. [PMID: 22977201] [PMCID: PMC3989119] [DOI: 10.1093/scan/nss107]
Abstract
Schizophrenia is a psychiatric disorder resulting in prominent impairments in social functioning. Thus, clinical research has focused on underlying deficits of emotion processing and their linkage to specific symptoms and neurobiological dysfunctions. Although there is substantial research investigating impairments in unimodal affect recognition, studies in schizophrenia exploring crossmodal emotion processing are rare. Therefore, event-related potentials were measured in 15 patients with schizophrenia and 15 healthy controls while rating the expression of happy, fearful and neutral faces and concurrently being distracted by emotional or neutral sounds. Compared with controls, patients with schizophrenia revealed significantly decreased P1 and increased P2 amplitudes in response to all faces, independent of emotion or concurrent sound. Analyzing these effects with regard to audiovisual (in)congruence revealed that P1 amplitudes in patients were only reduced in response to emotionally incongruent stimulus pairs, whereas similar amplitudes between groups could be observed for congruent conditions. Correlation analyses revealed a significant negative correlation between general symptom severity (Brief Psychiatric Rating Scale-V4) and P1 amplitudes in response to congruent audiovisual stimulus pairs. These results indicate that early visual processing deficits in schizophrenia are apparent during emotion processing but, depending on symptom severity, these deficits can be restored by presenting concurrent emotionally congruent sounds.
109
Krause H, Schneider TR, Engel AK, Senkowski D. Capture of visual attention interferes with multisensory speech processing. Front Integr Neurosci 2012; 6:67. [PMID: 22973204] [PMCID: PMC3434358] [DOI: 10.3389/fnint.2012.00067]
Abstract
Attending to a conversation in a crowded scene requires selection of relevant information, while ignoring other distracting sensory input, such as speech signals from surrounding people. The neural mechanisms by which distracting stimuli influence the processing of attended speech are not well understood. In this high-density electroencephalography (EEG) study, we investigated how different types of speech and non-speech stimuli influence the processing of attended audiovisual speech. Participants were presented with three horizontally aligned speakers who produced syllables. The faces of the three speakers flickered at specific frequencies (19 Hz for the flanking speakers and 25 Hz for the center speaker), which induced steady-state visual evoked potentials (SSVEP) in the EEG that served as a measure of visual attention. The participants' task was to detect an occasional audiovisual target syllable produced by the center speaker, while ignoring distracting signals originating from the two flanking speakers. In all experimental conditions the center speaker produced a bimodal audiovisual syllable. In three distraction conditions, which were contrasted with a no-distraction control condition, the flanking speakers either produced audiovisual speech, moved their lips and produced acoustic noise, or moved their lips without producing an auditory signal. We observed behavioral interference in reaction times (RTs), in particular when the flanking speakers produced naturalistic audiovisual speech. These effects were paralleled by enhanced 19 Hz SSVEP, indicative of a stimulus-driven capture of attention toward the interfering speakers. Our study provides evidence that non-relevant audiovisual speech signals serve as highly salient distractors, which capture attention in a stimulus-driven fashion.
110
Jacklin DL, Goel A, Clementino KJ, Hall AWM, Talpos JC, Winters BD. Severe cross-modal object recognition deficits in rats treated sub-chronically with NMDA receptor antagonists are reversed by systemic nicotine: implications for abnormal multisensory integration in schizophrenia. Neuropsychopharmacology 2012; 37:2322-31. [PMID: 22669170] [PMCID: PMC3422496] [DOI: 10.1038/npp.2012.84]
Abstract
Schizophrenia is a complex and debilitating disorder, characterized by positive, negative, and cognitive symptoms. Among the cognitive deficits observed in patients with schizophrenia, recent work has indicated abnormalities in multisensory integration, a process that is important for the formation of comprehensive environmental percepts and for the appropriate guidance of behavior. Very little is known about the neural bases of such multisensory integration deficits, partly because of the lack of viable behavioral tasks to assess this process in animal models. In this study, we used our recently developed rodent cross-modal object recognition (CMOR) task to investigate multisensory integration functions in rats treated sub-chronically with one of two N-methyl-D-aspartate receptor (NMDAR) antagonists, MK-801 or ketamine; such treatment is known to produce schizophrenia-like symptoms. Rats treated with the NMDAR antagonists were impaired on the standard spontaneous object recognition (SOR) task, unimodal (tactile or visual only) versions of SOR, and the CMOR task with intermediate to long retention delays between acquisition and testing phases, but they displayed a selective CMOR task deficit when mnemonic demand was minimized. This selective impairment in multisensory information processing was dose-dependently reversed by acute systemic administration of nicotine. These findings suggest that persistent NMDAR hypofunction may contribute to the multisensory integration deficits observed in patients with schizophrenia and highlight the valuable potential of the CMOR task to facilitate further systematic investigation of the neural bases of, and potential treatments for, this hitherto overlooked aspect of cognitive dysfunction in schizophrenia.
111
Yang L, Chen S, Chen CM, Khan F, Forchelli G, Javitt DC. Schizophrenia, culture and neuropsychology: sensory deficits, language impairments and social functioning in Chinese-speaking schizophrenia patients. Psychol Med 2012; 42:1485-1494. [PMID: 22099474] [DOI: 10.1017/s0033291711002224]
Abstract
BACKGROUND: While 20% of schizophrenia patients worldwide speak tonal languages (e.g. Mandarin), studies are limited to Western-language patients. Western-language patients show tonal deficits that are related to impaired emotional processing of speech. However, language processing is minimally affected. In contrast, in Mandarin, syllables are voiced in one of four tones, with word meaning varying accordingly. We hypothesized that Mandarin-speaking schizophrenia patients would show impairments in underlying basic auditory processing that, unlike in Western groups, would relate to deficits in word recognition and social outcomes. METHOD: Altogether, 22 Mandarin-speaking schizophrenia patients and 44 matched healthy participants were recruited from New York City. The auditory tasks were: (1) tone matching; (2) distorted tunes; (3) Chinese word discrimination; (4) Chinese word identification. Social outcomes were measured by marital status, employment and most recent employment status. RESULTS: Patients showed deficits in tone-matching, distorted tunes, word discrimination and word identification versus controls (all p<0.0001). Impairments in tone-matching across groups correlated with both word identification (p<0.0001) and discrimination (p<0.0001). On social outcomes, tonally impaired patients had 'lower-status' jobs overall when compared with tonally intact patients (p<0.005) and controls (p<0.0001). CONCLUSIONS: Our study is the first to investigate an interaction between neuropsychology and language among Mandarin-speaking schizophrenia patients. As predicted, patients were highly impaired in both tone and auditory word processing, with these two measures significantly correlated. Tonally impaired patients showed significantly worse employment-status function than tonally intact patients, suggesting a link between sensory impairment and employment status outcome. While neuropsychological deficits appear similar cross-culturally, their consequences may be language- and culture-dependent.
112
Abstract
The brain's ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least 1 week after cessation of training. In the current study, we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding likely representing the substrate for a multisensory temporal binding window.
113
The neural network sustaining crossmodal integration is impaired in alcohol-dependence: an fMRI study. Cortex 2012; 49:1610-26. [PMID: 22658706] [DOI: 10.1016/j.cortex.2012.04.012]
Abstract
INTRODUCTION: Crossmodality (i.e., the integration of stimulations coming from different sensory modalities) is a crucial ability in everyday life and has been extensively explored in healthy adults. Still, it has not yet received much attention in psychiatry, particularly in alcohol-dependence. The present study investigates the cerebral correlates of crossmodal integration deficits in alcohol-dependence to assess whether these deficits are due to the mere accumulation of unimodal impairments or rather to specific alterations in crossmodal areas. METHODS: Twenty-eight subjects [14 alcohol-dependent subjects (ADS), 14 paired controls] were scanned using fMRI while performing a categorization task on faces (F), voices (V) and face-voice pairs (FV). A subtraction contrast [FV-(F+V)] and a conjunction analysis [(FV-F) ∩ (FV-V)] isolated the brain areas specifically involved in crossmodal face-voice integration. The functional connectivity between unimodal and crossmodal areas was explored using psycho-physiological interactions (PPI). RESULTS: ADS showed only moderate alterations during unimodal processing. More importantly, in the subtraction contrast and conjunction analysis, they did not show any specific crossmodal brain activation, whereas controls showed activations in specific crossmodal areas (inferior occipital gyrus, middle frontal gyrus, superior parietal lobule). Moreover, PPI analyses showed reduced connectivity between unimodal and crossmodal areas in alcohol-dependence. CONCLUSIONS: This first fMRI exploration of crossmodal processing in alcohol-dependence showed a specific face-voice integration deficit, indexed by reduced activation of crossmodal areas and reduced connectivity in the crossmodal integration network. Crossmodal paradigms are thus crucial for correctly evaluating the deficits that ADS present in real-life situations.
114
Abstract
The brain's ability to bind incoming auditory and visual stimuli depends critically on the temporal structure of this information. Specifically, there exists a temporal window of audiovisual integration within which stimuli are highly likely to be perceived as part of the same environmental event. Several studies have described the temporal bounds of this window, but few have investigated its malleability. Recently, our laboratory has demonstrated that a perceptual training paradigm is capable of eliciting a 40% narrowing in the width of this window that is stable for at least 1 week after cessation of training. In the current study, we sought to reveal the neural substrates of these changes. Eleven human subjects completed an audiovisual simultaneity judgment training paradigm, immediately before and after which they performed the same task during an event-related 3T fMRI session. The posterior superior temporal sulcus (pSTS) and areas of auditory and visual cortex exhibited robust BOLD decreases following training, and resting state and effective connectivity analyses revealed significant increases in coupling among these cortices after training. These results provide the first evidence of the neural correlates underlying changes in multisensory temporal binding likely representing the substrate for a multisensory temporal binding window.
115
Stevenson RA, Fister JK, Barnett ZP, Nidiffer AR, Wallace MT. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Exp Brain Res 2012; 219:121-37. [PMID: 22447249] [DOI: 10.1007/s00221-012-3072-1]
Abstract
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
116
Landry S, Bacon BA, Leybaert J, Gagné JP, Champoux F. Audiovisual segregation in cochlear implant users. PLoS One 2012; 7:e33113. [PMID: 22427963] [PMCID: PMC3299746] [DOI: 10.1371/journal.pone.0033113]
Abstract
It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e. speechreading) was administered either in silence or in combination with three types of auditory distractors: i) noise ii) reverse speech sound and iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.
117
Masking of speech in people with first-episode schizophrenia and people with chronic schizophrenia. Schizophr Res 2012; 134:33-41. [PMID: 22019075] [DOI: 10.1016/j.schres.2011.09.019]
Abstract
In “cocktail-party” environments, although listeners find it difficult to recognize attended speech because of both energetic masking and informational masking, they can use various perceptual/cognitive cues, such as content and voice primes, to facilitate their attention to target speech. In patients with schizophrenia, both speech-perception deficits and increased vulnerability to masking stimuli generally occur. This study investigated whether speech recognition in first-episode patients (FEPs) and chronic patients (CPs) with schizophrenia is more vulnerable to noise masking and/or speech masking than that in demographically matched healthy controls, and whether patients with schizophrenia can use primes to unmask speech. In a trial under the priming condition, before the target sentence containing three keywords was co-presented with a noise or speech masker, the prime (the early part of the sentence, including the first two keywords) was recited in quiet in the target speaker's voice. The results show that in patients, target-speech recognition was more impaired under speech-masking conditions than under noise-masking conditions, and the impairment in CPs (n=22) was larger than that in FEPs (n=12). Although working memory for holding prime-content information in patients, especially CPs, was more vulnerable to masking, especially speech masking, than that in healthy controls, patients were still able to use the prime to unmask the last keyword. Thus, in “cocktail-party” environments, speech recognition in people with schizophrenia is more vulnerable to masking, particularly informational masking, and the speech-recognition impairment worsens as the illness progresses. However, people with schizophrenia can use the content/voice prime to reduce energetic masking and informational masking of target speech.
118
Van den Stock J, de Jong SJ, Hodiamont PPG, de Gelder B. Perceiving emotions from bodily expressions and multisensory integration of emotion cues in schizophrenia. Soc Neurosci 2011; 6:537-47. [DOI: 10.1080/17470919.2011.568790]
119
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/9781439812174-10]
120
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. [DOI: 10.1201/b11092-10]
121
Foxe JJ, Yeap S, Snyder AC, Kelly SP, Thakore JH, Molholm S. The N1 auditory evoked potential component as an endophenotype for schizophrenia: high-density electrical mapping in clinically unaffected first-degree relatives, first-episode, and chronic schizophrenia patients. Eur Arch Psychiatry Clin Neurosci 2011; 261:331-9. [PMID: 21153832] [PMCID: PMC3119740] [DOI: 10.1007/s00406-010-0176-0]
Abstract
The N1 component of the auditory evoked potential (AEP) is a robust and easily recorded metric of auditory sensory-perceptual processing. In patients with schizophrenia, a diminution in the amplitude of this component is a near-ubiquitous finding. A pair of recent studies has also shown this N1 deficit in first-degree relatives of schizophrenia probands, suggesting that the deficit may be linked to the underlying genetic risk of the disease rather than to the disease state itself. However, in both these studies, a significant proportion of the relatives had other psychiatric conditions. As such, although the N1 deficit represents an intriguing candidate endophenotype for schizophrenia, it remains to be shown whether it is present in a group of clinically unaffected first-degree relatives. In addition to testing first-degree relatives, we also sought to replicate the N1 deficit in a group of first-episode patients and in a group of chronic schizophrenia probands. Subject groups consisted of 35 patients with schizophrenia, 30 unaffected first-degree relatives, 13 first-episode patients, and 22 healthy controls. Subjects sat in a dimly lit room and listened to a series of simple 1,000-Hz tones, indicating with a button press whenever they heard a deviant tone (1,500 Hz; 17% probability), while the AEP was recorded from 72 scalp electrodes. Both chronic and first-episode patients showed clear N1 amplitude decrements relative to healthy control subjects. Crucially, unaffected first-degree relatives also showed a clear N1 deficit. This study provides further support for the proposal that the auditory N1 deficit in schizophrenia is linked to the underlying genetic risk of developing this disorder. In light of recent studies, these results point to the N1 deficit as an endophenotypic marker for schizophrenia. The potential future utility of this metric as one element of a multivariate endophenotype is discussed.
122
Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study. Neuropsychologia 2011; 49:3178-87. [PMID: 21807011] [DOI: 10.1016/j.neuropsychologia.2011.07.017]
Abstract
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
123
Krishnan RR, Kraus MS, Keefe RSE. Comprehensive model of how reality distortion and symptoms occur in schizophrenia: could impairment in learning-dependent predictive perception account for the manifestations of schizophrenia? Psychiatry Clin Neurosci 2011; 65:305-17. [PMID: 21447049] [DOI: 10.1111/j.1440-1819.2011.02203.x]
Abstract
Conventional wisdom has not laid out a clear and uniform profile of schizophrenia as a unitary entity. One of the key first steps in elucidating the neurobiology of this entity would be to characterize the essential and common elements in the group of entities called schizophrenia. Kraepelin in his introduction notes 'the conviction seems to be more and more gaining ground that dementia praecox on the whole represents, a well characterized form of disease, and that we are justified in regarding the majority of the clinical pictures which are brought together here as the expression of a single morbid process, though outwardly they often diverge very far from one another'. But what is that single morbid process? We suggest that just as the uniform defect in all types of cancer is impaired regulation of cell proliferation, the primary defect in the group of entities called schizophrenia is persistent defective hierarchical temporal processing. This manifests in the form of chronic memory-prediction errors or deficits in learning-dependent predictive perception. These deficits account for the symptoms that present as reality distortion (delusions, thought disorder and hallucinations). This constellation of symptoms corresponds with the profile of most patients currently diagnosed as suffering from schizophrenia. In this paper we describe how these deficits can lead to the various symptoms of schizophrenia.
124
Ross LA, Molholm S, Blanco D, Gomez-Ramirez M, Saint-Amour D, Foxe JJ. The development of multisensory speech perception continues into the late childhood years. Eur J Neurosci 2011. [PMID: 21615556] [DOI: 10.1111/j.1460-9568.2011.07685.x]
Abstract
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
125
Ross LA, Molholm S, Blanco D, Gomez-Ramirez M, Saint-Amour D, Foxe JJ. The development of multisensory speech perception continues into the late childhood years. Eur J Neurosci 2011; 33:2329-37. [PMID: 21615556] [DOI: 10.1111/j.1460-9568.2011.07685.x]
Abstract
Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted.
126
Russo N, Foxe JJ, Brandwein AB, Altschuler T, Gomes H, Molholm S. Multisensory processing in children with autism: high-density electrical mapping of auditory-somatosensory integration. Autism Res 2011; 3:253-67. [PMID: 20730775] [DOI: 10.1002/aur.152]
Abstract
Successful integration of signals from the various sensory systems is crucial for normal sensory-perceptual functioning, allowing for the perception of coherent objects rather than a disconnected cluster of fragmented features. Several prominent theories of autism suggest that automatic integration is impaired in this population, but there have been few empirical tests of this thesis. A standard electrophysiological metric of multisensory integration (MSI) was used to test the integrity of auditory-somatosensory integration in children with autism (N=17, aged 6-16 years), compared to age- and IQ-matched typically developing (TD) children. High-density electrophysiology was recorded while participants were presented with either auditory or somatosensory stimuli alone (unisensory conditions), or as a combined auditory-somatosensory stimulus (multisensory condition), in randomized order. Participants watched a silent movie during testing, ignoring concurrent stimulation. Significant differences between neural responses to the multisensory auditory-somatosensory stimulus and the unisensory stimuli (the sum of the responses to the auditory and somatosensory stimuli when presented alone) served as the dependent measure. The data revealed group differences in the integration of auditory and somatosensory information that appeared at around 175 ms, and were characterized by the presence of MSI for the TD but not the autism spectrum disorder (ASD) children. Overall, MSI was less extensive in the ASD group. These findings are discussed within the framework of current knowledge of MSI in typical development as well as in relation to theories of ASD.
127
Stevenson RA, VanDerKlok RM, Pisoni DB, James TW. Discrete neural substrates underlie complementary audiovisual speech integration processes. Neuroimage 2010; 55:1339-45. [PMID: 21195198] [DOI: 10.1016/j.neuroimage.2010.12.063]
Abstract
The ability to combine information from multiple sensory modalities into a single, unified percept is a key element in an organism's ability to interact with the external world. This process of perceptual fusion, the binding of multiple sensory inputs into a perceptual gestalt, is highly dependent on the temporal synchrony of the sensory inputs. Using fMRI, we identified two anatomically distinct brain regions in the superior temporal cortex, one involved in processing temporal synchrony and one in processing perceptual fusion of audiovisual speech. This dissociation suggests that the superior temporal cortex should be considered a "neuronal hub" composed of multiple discrete subregions that underlie an array of complementary low- and high-level multisensory integration processes. In this role, abnormalities in the structure and function of superior temporal cortex provide a possible common etiology for temporal-processing and perceptual-fusion deficits seen in a number of clinical populations, including individuals with autism spectrum disorder, dyslexia, and schizophrenia.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, USA.
|
128
|
Cappe C, Murray MM, Barone P, Rouiller EM. Multisensory facilitation of behavior in monkeys: effects of stimulus intensity. J Cogn Neurosci 2010; 22:2850-63. [PMID: 20044892 DOI: 10.1162/jocn.2010.21423] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Multisensory stimuli can improve performance, facilitating RTs on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions on the basis of probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols. Existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions on the basis of probability summation were observed and thus necessarily follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation or even behavioral costs were obtained with auditory intensities 30-40 dB above threshold. The present study demonstrates the feasibility and the suitability of behaving monkeys for investigating links between psychophysical and neurophysiologic instantiations of multisensory interactions.
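The benchmark invoked here, facilitation exceeding probability summation, is usually tested with Miller's race model inequality, which bounds the multisensory RT distribution by the sum of the unisensory distributions. The sketch below runs that test on simulated reaction times; the distributions and time grid are invented, and real analyses typically add bias corrections and per-quantile statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated detection reaction times in seconds for the three trial types.
rt_a = rng.lognormal(np.log(0.32), 0.18, 300)   # auditory alone
rt_v = rng.lognormal(np.log(0.35), 0.18, 300)   # visual alone
rt_av = rng.lognormal(np.log(0.26), 0.16, 300)  # synchronous audiovisual

def ecdf(samples, grid):
    """Empirical cumulative distribution function evaluated on a time grid."""
    return np.searchsorted(np.sort(samples), grid, side="right") / samples.size

grid = np.linspace(0.15, 0.60, 200)
f_a, f_v, f_av = ecdf(rt_a, grid), ecdf(rt_v, grid), ecdf(rt_av, grid)

# Miller's race model inequality: F_av(t) <= F_a(t) + F_v(t) for all t.
# Positive differences are violations, i.e. facilitation beyond what a
# race between independent unisensory channels (probability summation) allows.
violation = f_av - np.minimum(f_a + f_v, 1.0)
print(f"Maximum violation of the race model bound: {violation.max():.3f}")
print("Bound violated anywhere on the grid:", bool((violation > 0).any()))
```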
Affiliation(s)
- Céline Cappe
- Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland.
|
129
|
Smith PH, Manning KA, Uhlrich DJ. Evaluation of inputs to rat primary auditory cortex from the suprageniculate nucleus and extrastriate visual cortex. J Comp Neurol 2010; 518:3679-700. [PMID: 20653029 DOI: 10.1002/cne.22411] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Evidence indicates that visual stimuli influence cells in the primary auditory cortex. To evaluate potential sources of this visual input and how they enter into the circuitry of the auditory cortex, we examined axonal terminations in the primary auditory cortex from nonprimary extrastriate visual cortex (V2M, V2L) and from the multimodal thalamic suprageniculate nucleus (SG). Gross biocytin/biotinylated dextran amine (BDA) injections into the SG or extrastriate cortex labeled inputs terminating primarily in superficial and deep layers. SG projects primarily to layers I, V, and VI while V2M and V2L project primarily to layers I and VI, with V2L also targeting layers II/III. Layer I inputs differ in that SG terminals are concentrated superficially, V2L are deeper, and V2M are equally distributed throughout. Individual axonal reconstructions document that single axons can 1) innervate multiple layers; 2) run considerable distances in layer I; and 3) run preferentially in the dorsoventral direction similar to isofrequency axes. At the electron microscopic level, SG and V2M terminals 1) are the same size regardless of layer; 2) are non-gamma-aminobutyric acid (GABA)ergic; 3) are smaller than ventral medial geniculate terminals synapsing in layer IV; 4) make asymmetric synapses onto dendrites/spines that 5) are non-GABAergic and 6) are slightly larger in layer I. Thus, both areas provide a substantial feedback-like input with differences that may indicate potentially different roles.
Affiliation(s)
- Philip H Smith
- Department of Anatomy, University of Wisconsin Medical School, Madison, Wisconsin 53705, USA.
|
130
|
Brandwein AB, Foxe JJ, Russo NN, Altschuler TS, Gomes H, Molholm S. The development of audiovisual multisensory integration across childhood and early adolescence: a high-density electrical mapping study. Cereb Cortex 2011; 21:1042-55. [PMID: 20847153 DOI: 10.1093/cercor/bhq170] [Citation(s) in RCA: 110] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
The integration of multisensory information is essential to forming meaningful representations of the environment. Adults benefit from related multisensory stimuli but the extent to which the ability to optimally integrate multisensory inputs for functional purposes is present in children has not been extensively examined. Using a cross-sectional approach, high-density electrical mapping of event-related potentials (ERPs) was combined with behavioral measures to characterize neurodevelopmental changes in basic audiovisual (AV) integration from middle childhood through early adulthood. The data indicated a gradual fine-tuning of multisensory facilitation of performance on an AV simple reaction time task (as indexed by race model violation), which reaches mature levels by about 14 years of age. They also revealed a systematic relationship between age and the brain processes underlying multisensory integration (MSI) in the time frame of the auditory N1 ERP component (∼ 120 ms). A significant positive correlation between behavioral and neurophysiological measures of MSI suggested that the underlying brain processes contributed to the fine-tuning of multisensory facilitation of behavior that was observed over middle childhood. These findings are consistent with protracted plasticity in a dynamic system and provide a starting point from which future studies can begin to examine the developmental course of multisensory processing in clinical populations.
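The two summary measures described, a behavioral index of race-model violation and an ERP-based MSI amplitude in the N1 time frame, are typically related by a simple across-participant correlation. The sketch below shows only the shape of that analysis; the participant numbers, age range, and simulated values are assumptions and do not reproduce the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60  # hypothetical participants spanning childhood to early adulthood

age = rng.uniform(7, 29, n)

# Schematic per-participant summaries (both invented): the area of race-model
# violation from the simple reaction time task, and an ERP-derived MSI
# amplitude in the ~120 ms auditory N1 time frame.
violation_area = 0.004 * (age - 7) + rng.normal(0.0, 0.010, n)
n1_msi_amplitude = 0.08 * (age - 7) + rng.normal(0.0, 0.30, n)

# Across-participant correlation between the behavioral and neural indices.
r_behav_neural = np.corrcoef(violation_area, n1_msi_amplitude)[0, 1]
print(f"r(behavioral MSI, neural MSI) = {r_behav_neural:.2f}")

# Companion check: does the behavioral index itself increase with age?
r_age = np.corrcoef(age, violation_area)[0, 1]
print(f"r(age, race-model violation) = {r_age:.2f}")
```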
Affiliation(s)
- Alice B Brandwein
- The Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, USA
|
131
|
Talsma D, Senkowski D, Soto-Faraco S, Woldorff MG. The multifaceted interplay between attention and multisensory integration. Trends Cogn Sci 2010; 14:400-10. [PMID: 20675182 DOI: 10.1016/j.tics.2010.06.008] [Citation(s) in RCA: 484] [Impact Index Per Article: 34.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2009] [Revised: 06/24/2010] [Accepted: 06/25/2010] [Indexed: 11/18/2022]
Abstract
Multisensory integration has often been characterized as an automatic process. Recent findings indicate that multisensory integration can occur across various stages of stimulus processing that are linked to, and can be modulated by, attention. Stimulus-driven, bottom-up mechanisms induced by crossmodal interactions can automatically capture attention towards multisensory events, particularly when competition to focus elsewhere is relatively low. Conversely, top-down attention can facilitate the integration of multisensory inputs and lead to a spread of attention across sensory modalities. These findings point to a more intimate and multifaceted interplay between attention and multisensory integration than was previously thought. We review developments in the current understanding of the interactions between attention and multisensory processing, and propose a framework that unifies previous, apparently discordant, findings.
Affiliation(s)
- Durk Talsma
- Department of Cognitive Psychology and Ergonomics, University of Twente, P.O. Box 215, 7500 AE Enschede, The Netherlands.
|
132
|
Williams LE, Ramachandran VS, Hubbard EM, Braff DL, Light GA. Superior size-weight illusion performance in patients with schizophrenia: evidence for deficits in forward models. Schizophr Res 2010; 121:101-6. [PMID: 19931421 PMCID: PMC2910228 DOI: 10.1016/j.schres.2009.10.021] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/09/2009] [Revised: 10/13/2009] [Accepted: 10/19/2009] [Indexed: 10/20/2022]
Abstract
When non-psychiatric individuals compare the weights of two similar objects of identical mass, but of different sizes, the smaller object is often perceived as substantially heavier. This size-weight illusion (SWI) is thought to be generated by a violation of the common expectation that the large object will be heavier, possibly via a mismatch between an efference copy of the movement and the actual sensory feedback received. As previous research suggests that patients with schizophrenia have deficits in forward model/efference copy mechanisms, we hypothesized that schizophrenic patients would show a reduced SWI. The current study compared the strength of the SWI in schizophrenic patients to matched non-psychiatric participants; weight discrimination for same-sized objects was also assessed. We found a reduced SWI for schizophrenic patients, which resulted in better (more veridical) weight discrimination performance on illusion trials compared to non-psychiatric individuals. This difference in the strength of the SWI persisted when groups were matched for weight discrimination performance. The current findings are consistent with a dysfunctional forward model mechanism in this population. Future studies to elucidate the locus of this impairment using variations on the current study are also proposed.
Affiliation(s)
- Lisa E. Williams
- Department of Psychology, University of California San Diego, 9500 Gilman Drive #0109, La Jolla, CA 92093-0109, USA; Center for Brain and Cognition, University of California San Diego, 9500 Gilman Drive #0109, La Jolla, CA 92093-0109, USA
- Vilayanur S. Ramachandran
- Department of Psychology, University of California San Diego, 9500 Gilman Drive #0109, La Jolla, CA 92093-0109, USA; Center for Brain and Cognition, University of California San Diego, 9500 Gilman Drive #0109, La Jolla, CA 92093-0109, USA
- Edward M. Hubbard
- Department of Psychology and Human Development, Vanderbilt University, Peabody College #552, 230 Appleton Place, Nashville, TN 37203-5721, USA
- David L. Braff
- Department of Psychiatry, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0804C, USA
- Gregory A. Light
- Department of Psychiatry, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0804C, USA
|
133
|
Williams LE, Light GA, Braff DL, Ramachandran VS. Reduced multisensory integration in patients with schizophrenia on a target detection task. Neuropsychologia 2010; 48:3128-36. [PMID: 20600181 DOI: 10.1016/j.neuropsychologia.2010.06.028] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2009] [Revised: 06/15/2010] [Accepted: 06/18/2010] [Indexed: 11/25/2022]
Abstract
A growing body of literature demonstrates impaired multisensory integration (MSI) in patients with schizophrenia compared to non-psychiatric individuals. One of the most basic measures of MSI is intersensory facilitation of reaction times (RTs), in which bimodal targets, with cues from two sensory modalities, are detected faster than unimodal targets. This RT speeding is generally attributed to super-additive processing of multisensory targets. In order to test whether patients with schizophrenia are impaired on this basic measure of MSI, we assessed the degree of intersensory facilitation for a sample of 20 patients compared to 20 non-psychiatric individuals using a very simple target detection task. RTs were recorded for participants to detect targets that were either unimodal (auditory alone, A; visual alone, V) or bimodal (auditory+visual, AV). RT distributions to detect bimodal targets were compared with predicted RT distributions based on the summed probability distribution of each participant's RTs to visual alone and auditory alone targets. Patients with schizophrenia showed less RT facilitation when detecting bimodal targets relative to non-psychiatric individuals, even when groups were matched for unimodal RTs. Within the schizophrenia group, RT benefit was correlated with negative symptoms, such that patients with greater negative symptoms showed the least RT facilitation (r² = 0.20, p < 0.05). Additionally, schizophrenia patients who experienced both auditory and visual hallucinations showed less multisensory benefit compared to patients who experienced only auditory hallucinations, indicating that the presence of hallucinations in two modalities may more strongly impair MSI compared to hallucinations in only one modality.
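The individual-difference analyses mentioned, correlating each patient's multisensory RT benefit with negative-symptom severity and contrasting patients by hallucination profile, reduce to simple per-participant summaries. The sketch below is schematic: the group sizes, RT benefits, and symptom scores are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-patient multisensory benefit (ms): how much faster bimodal
# targets were detected than the faster of the two unimodal conditions.
benefit_aud_halluc = rng.normal(35.0, 12.0, 11)   # auditory hallucinations only
benefit_dual_halluc = rng.normal(18.0, 12.0, 9)   # auditory + visual hallucinations
benefit = np.concatenate([benefit_aud_halluc, benefit_dual_halluc])

# Hypothetical negative-symptom severity scores for the same 20 patients.
negative_symptoms = rng.uniform(5.0, 25.0, benefit.size)

# Correlation between facilitation and symptom severity, summarized as r squared.
r = np.corrcoef(benefit, negative_symptoms)[0, 1]
print(f"r^2 between RT benefit and negative symptoms: {r ** 2:.2f}")

# Simple contrast between hallucination profiles.
diff = benefit_aud_halluc.mean() - benefit_dual_halluc.mean()
print(f"Mean benefit, auditory-only minus dual-modality hallucinators: {diff:.1f} ms")
```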
Affiliation(s)
- Lisa E Williams
- Department of Psychology, University of California, San Diego, CA, United States.
|
134
|
John AE, Mervis CB. Sensory modulation impairments in children with Williams syndrome. Am J Med Genet C Semin Med Genet 2010; 154C:266-76. [PMID: 20425786 PMCID: PMC2997471 DOI: 10.1002/ajmg.c.30260] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The ability to organize information detected by our senses ("sensory modulation") allows us to act or respond effectively to situations encountered, facilitating learning, social behavior, and day-to-day functioning. We hypothesized that children with Williams syndrome (WS) would demonstrate symptoms of poor sensory modulation and that these sensory modulation abnormalities contribute to the phenotype. Participants were 78 children with WS aged 4.00-10.95 years. Based on parent ratings on the Short Sensory Profile [SSP; Dunn, 1999], most children were classified as having definite sensory modulation issues. Cluster analysis identified the presence of two clusters varying in level of sensory modulation impairment. Children in the high impairment group demonstrated poorer adaptive functioning, executive functioning, more problem behaviors, and more difficult temperaments than children in the low impairment group.
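The cluster analysis mentioned here groups children by the overall severity of their sensory modulation scores. The sketch below uses a small hand-rolled two-cluster k-means on synthetic SSP-like subscale scores; the subscale layout, score ranges, and the choice of k-means (rather than whatever clustering method the study actually used) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical Short Sensory Profile subscale scores for 78 children
# (7 subscales; lower scores = more impairment). Two synthetic subgroups.
low_impairment = rng.normal(32.0, 3.0, size=(40, 7))
high_impairment = rng.normal(22.0, 3.0, size=(38, 7))
scores = np.vstack([low_impairment, high_impairment])

def kmeans(x, k=2, n_iter=50, seed=0):
    """Minimal k-means clustering; returns cluster labels and centroids."""
    r = np.random.default_rng(seed)
    centroids = x[r.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = x[labels == j]
            if len(members):              # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return labels, centroids

labels, _ = kmeans(scores, k=2)
for j in range(2):
    totals = scores[labels == j].sum(axis=1)
    print(f"Cluster {j}: n = {(labels == j).sum()}, mean total score = {totals.mean():.1f}")
```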
Affiliation(s)
- Angela E John
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA.
|
135
|
Sperdin HF, Cappe C, Murray MM. The behavioral relevance of multisensory neural response interactions. Front Neurosci 2010; 4:9. [PMID: 20582260 PMCID: PMC2891631 DOI: 10.3389/neuro.01.009.2010] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2009] [Accepted: 12/04/2009] [Indexed: 11/24/2022] Open
Abstract
Sensory information can interact to impact perception and behavior. Foods are appreciated according to their appearance, smell, taste and texture. Athletes and dancers combine visual, auditory, and somatosensory information to coordinate their movements. Under laboratory settings, detection and discrimination are likewise facilitated by multisensory signals. Research over the past several decades has shown that the requisite anatomy exists to support interactions between sensory systems in regions canonically designated as exclusively unisensory in their function and, more recently, that neural response interactions occur within these same regions, including even primary cortices and thalamic nuclei, at early post-stimulus latencies. Here, we review evidence concerning direct links between early, low-level neural response interactions and behavioral measures of multisensory integration.
Affiliation(s)
- Holger F. Sperdin
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Céline Cappe
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Micah M. Murray
- The Functional Electrical Neuroimaging Laboratory, Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- The Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
|
136
|
Cross-modal facilitation in speech prosody. Cognition 2009; 115:71-8. [PMID: 20015487 DOI: 10.1016/j.cognition.2009.11.009] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2009] [Revised: 10/16/2009] [Accepted: 11/17/2009] [Indexed: 11/24/2022]
Abstract
Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker pronouncing two words, with visual features of emphasis on one of these words. For each trial, participants saw one video where the two words were identical in both pitch and amplitude, and another video where there was a difference in either pitch or amplitude that was congruent or incongruent with the visual changes. Participants were asked to decide which video contained the sound difference. Thresholds were obtained for the congruent and incongruent videos, and for an auditory-alone condition. It was found that the congruent thresholds were better than the incongruent thresholds for both pitch and amplitude changes. Interestingly, the congruent thresholds for amplitude were better than for the auditory-alone condition, which implies that the visual features improve sensitivity to loudness changes. These results demonstrate that visual stimuli can affect auditory thresholds for changes in pitch and amplitude, and furthermore support the view that visual prosodic features enhance speech processing.
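The congruent, incongruent, and auditory-alone thresholds described here can be summarized as the stimulus difference needed to reach a criterion level of correct responses. The sketch below interpolates a 75%-correct point from hypothetical proportion-correct data; the values and the use of linear interpolation (rather than an adaptive staircase or a psychometric-function fit) are illustrative assumptions.

```python
import numpy as np

# Hypothetical proportion-correct data for detecting an amplitude difference,
# as a function of the size of the difference, in three stimulus conditions.
delta_db = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
p_congruent = np.array([0.55, 0.66, 0.78, 0.86, 0.94, 0.98])
p_incongruent = np.array([0.50, 0.56, 0.64, 0.74, 0.86, 0.94])
p_auditory_only = np.array([0.52, 0.60, 0.70, 0.80, 0.90, 0.96])

def threshold_75(deltas, proportion_correct):
    """Linearly interpolate the difference yielding 75% correct responses."""
    return float(np.interp(0.75, proportion_correct, deltas))

for label, p in [("congruent", p_congruent),
                 ("incongruent", p_incongruent),
                 ("auditory alone", p_auditory_only)]:
    print(f"{label:>15}: 75%-correct threshold = {threshold_75(delta_db, p):.2f} dB")
```

In this made-up dataset the congruent threshold comes out lowest, mirroring the pattern of results the abstract describes.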
|
137
|
Murray MM, Spierer L. Auditory spatio-temporal brain dynamics and their consequences for multisensory interactions in humans. Hear Res 2009; 258:121-33. [DOI: 10.1016/j.heares.2009.04.022] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/02/2009] [Revised: 04/28/2009] [Accepted: 04/28/2009] [Indexed: 11/24/2022]
|
138
|
|
139
|
Musacchia G, Schroeder CE. Neuronal mechanisms, response dynamics and perceptual functions of multisensory interactions in auditory cortex. Hear Res 2009; 258:72-9. [PMID: 19595755 DOI: 10.1016/j.heares.2009.06.018] [Citation(s) in RCA: 86] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/05/2009] [Revised: 06/24/2009] [Accepted: 06/25/2009] [Indexed: 11/16/2022]
Abstract
Most auditory events in nature are accompanied by non-auditory signals, such as a view of the speaker's face during face-to-face communication or the vibration of a string during a musical performance. While it is known that accompanying visual and somatosensory signals can benefit auditory perception, often by making the sound seem louder, the specific neural bases for sensory amplification are still debated. In this review, we want to deal with what we regard as confusion on two topics that are crucial to our understanding of multisensory integration mechanisms in auditory cortex: (1) Anatomical Underpinnings (e.g., what circuits underlie multisensory convergence), and (2) Temporal Dynamics (e.g., what time windows of integration are physiologically feasible). The combined evidence on multisensory structure and function in auditory cortex advances the emerging view of the relationship between perception and low level multisensory integration. In fact, it seems that the question is no longer whether low level, putatively unisensory cortex is accessible to multisensory influences, but how.
Affiliation(s)
- Gabriella Musacchia
- Cognitive Neuroscience & Neuroimaging Laboratory, Cognitive Neuroscience and Schizophrenia Program, Nathan S. Kline Institute for Psychiatric Research, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA.
|
140
|
Talsma D, Senkowski D, Woldorff MG. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli. Exp Brain Res 2009; 198:313-28. [PMID: 19495733 PMCID: PMC2733193 DOI: 10.1007/s00221-009-1858-6] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2008] [Accepted: 05/12/2009] [Indexed: 11/29/2022]
Abstract
The temporal asynchrony between inputs to different sensory modalities has been shown to be a critical factor influencing the interaction between such inputs. We used scalp-recorded event-related potentials (ERPs) to investigate the effects of attention on the processing of audiovisual multisensory stimuli as the temporal asynchrony between the auditory and visual inputs varied across the audiovisual integration window (i.e., up to 125 ms). Randomized streams of unisensory auditory stimuli, unisensory visual stimuli, and audiovisual stimuli (consisting of the temporally proximal presentation of the visual and auditory stimulus components) were presented centrally while participants attended to either the auditory or the visual modality to detect occasional target stimuli in that modality. ERPs elicited by each of the contributing sensory modalities were extracted by signal processing techniques from the combined ERP waveforms elicited by the multisensory stimuli. This was done for each of the five different 50-ms subranges of stimulus onset asynchrony (SOA: e.g., V precedes A by 125–75 ms, by 75–25 ms, etc.). The extracted ERPs for the visual inputs of the multisensory stimuli were compared among each other and with the ERPs to the unisensory visual control stimuli, separately when attention was directed to the visual or to the auditory modality. The results showed that the attention effect on the right-hemisphere visual P1 was largest when auditory and visual stimuli were temporally aligned. In contrast, the N1 attention effect was smallest at this latency, suggesting that attention may play a role in the processing of the relative temporal alignment of the constituent parts of multisensory stimuli. At longer latencies an occipital selection negativity for the attended versus unattended visual stimuli was also observed, but this effect did not vary as a function of SOA, suggesting that by that latency a stable representation of the auditory and visual stimulus components had been established.
Affiliation(s)
- Durk Talsma
- Cognitive Psychology Department, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands.
|
141
|
Magnée MJCM, Oranje B, van Engeland H, Kahn RS, Kemner C. Cross-sensory gating in schizophrenia and autism spectrum disorder: EEG evidence for impaired brain connectivity? Neuropsychologia 2009; 47:1728-32. [PMID: 19397868 DOI: 10.1016/j.neuropsychologia.2009.02.012] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2008] [Revised: 12/23/2008] [Accepted: 02/06/2009] [Indexed: 11/19/2022]
Abstract
Autism spectrum disorders (ASD) and schizophrenia are both neurodevelopmental disorders that have extensively been associated with impairments in functional brain connectivity. Using a cross-sensory P50 suppression paradigm, this study investigated low-level audiovisual interactions on cortical EEG activation, which provides crucial information about functional integrity of connections between brain areas involved in cross-sensory processing in both disorders. Thirteen high functioning adult males with ASD, 13 high functioning adult males with schizophrenia, and 16 healthy adult males participated in the study. No differences in either auditory or cross-sensory P50 suppression were found between healthy controls and individuals with ASD. In schizophrenia, attenuated P50 responses to the first auditory stimulus indicated early auditory processing deficits. These results are in accordance with the notion that filtering deficits may be secondary to earlier sensory dysfunction. Also, atypical cross-sensory suppression was found, which implies that the cognitive impairments seen in schizophrenia may be due to deficits in the integrity of connections between brain areas involved in low-level cross-sensory processing.
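P50 suppression in paradigms like this is conventionally scored as the ratio of the response to the second (test) stimulus over the response to the first (conditioning) stimulus, with smaller ratios indicating stronger gating; the cross-sensory variant described here pairs stimuli across modalities, with the gating ratio typically computed in the same way. The sketch below scores synthetic waveforms; the 40-80 ms window, peak-only amplitude measure, and waveforms themselves are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000                          # sampling rate in Hz
t = np.arange(-0.1, 0.4, 1 / fs)   # epoch around each stimulus (s)

def fake_erp(amp):
    """Synthetic evoked response with a peak near 55 ms plus noise."""
    wave = amp * np.exp(-((t - 0.055) ** 2) / (2 * 0.008 ** 2))
    return wave + rng.normal(0.0, 0.2, t.size)

def p50_amplitude(erp):
    """Peak amplitude in a 40-80 ms window (a simplified P50 score)."""
    window = (t >= 0.04) & (t <= 0.08)
    return erp[window].max()

erp_s1 = fake_erp(2.0)  # response to the first (conditioning) stimulus
erp_s2 = fake_erp(0.6)  # response to the second (test) stimulus

ratio = p50_amplitude(erp_s2) / p50_amplitude(erp_s1)
print(f"P50 suppression ratio (S2/S1): {ratio:.2f}  (lower = stronger gating)")
```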
Affiliation(s)
- Maurice J C M Magnée
- Rudolf Magnus Institute of Neuroscience, Department of Child and Adolescent Psychiatry, University Medical Center, Utrecht, The Netherlands.
|
142
|
de Jong JJ, Hodiamont PPG, Van den Stock J, de Gelder B. Audiovisual emotion recognition in schizophrenia: reduced integration of facial and vocal affect. Schizophr Res 2009; 107:286-93. [PMID: 18986799 DOI: 10.1016/j.schres.2008.10.001] [Citation(s) in RCA: 84] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/03/2008] [Revised: 09/29/2008] [Accepted: 10/02/2008] [Indexed: 11/29/2022]
Abstract
Ever since Kraepelin described what we nowadays call schizophrenia as dementia praecox, cognitive dysfunction has been regarded as central to its psychopathological profile. Disturbed experience and integration of emotions are, both intuitively and experimentally, likely to be intermediates between basic, non-social cognitive disturbances and functional outcome in schizophrenia. While a number of studies have consistently proven that, as part of social cognition, recognition of emotional faces and voices is disturbed in schizophrenics, studies on multisensory integration of facial and vocal affect are rare. We investigated audiovisual integration of emotional faces and voices in three groups: schizophrenic patients, non-schizophrenic psychosis patients and mentally healthy controls, all diagnosed by means of the Schedules of Clinical Assessment in Neuropsychiatry (SCAN 2.1). We found diminished crossmodal influence of emotional faces on emotional voice categorization in schizophrenics, but not in non-schizophrenia psychosis patients. Results are discussed in the perspective of recent theories on multisensory integration.
Affiliation(s)
- J J de Jong
- Cognitive Neuroscience Laboratory, Department of Developmental, Clinical and Cross-cultural Psychology, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
|
143
|
Senkowski D, Schneider TR, Foxe JJ, Engel AK. Crossmodal binding through neural coherence: implications for multisensory processing. Trends Neurosci 2008; 31:401-9. [PMID: 18602171 DOI: 10.1016/j.tins.2008.05.002] [Citation(s) in RCA: 265] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2007] [Revised: 05/06/2008] [Accepted: 05/06/2008] [Indexed: 11/18/2022]
Affiliation(s)
- Daniel Senkowski
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
|
144
|
Senkowski D, Saint-Amour D, Gruber T, Foxe JJ. Look who's talking: the deployment of visuo-spatial attention during multisensory speech processing under noisy environmental conditions. Neuroimage 2008; 43:379-87. [PMID: 18678262 DOI: 10.1016/j.neuroimage.2008.06.046] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2008] [Revised: 05/30/2008] [Accepted: 06/30/2008] [Indexed: 10/21/2022] Open
Abstract
In a crowded scene we can effectively focus our attention on a specific speaker while largely ignoring sensory inputs from other speakers. How attended speech inputs are extracted from similar competing information has been primarily studied in the auditory domain. Here we examined the deployment of visuo-spatial attention in multiple speaker scenarios. Steady-state visual evoked potentials (SSVEP) were monitored as a real-time index of visual attention towards three competing speakers. Participants were instructed to detect a target syllable spoken by the center speaker and ignore syllables from two flanking speakers. The study incorporated interference trials (syllables from three speakers), no-interference trials (syllable from center speaker only), and periods without speech stimulation in which static faces were presented. An enhancement of flanking-speaker-induced SSVEP was found 70-220 ms after sound onset over left temporal scalp during interference trials. This enhancement was negatively correlated with the behavioral performance of participants: those who showed the largest enhancements had the worst speech recognition performance. Additionally, poorly performing participants exhibited enhanced flanking-speaker-induced SSVEP over visual scalp during periods without speech stimulation. The present study provides neurophysiologic evidence that the deployment of visuo-spatial attention to flanking speakers interferes with the recognition of multisensory speech signals under noisy environmental conditions.
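The SSVEP measure used here is read out as the spectral amplitude of the EEG at the frequency tagging each speaker's video stream. The sketch below extracts such amplitudes with an FFT from a synthetic single-channel signal; the tagging frequencies, epoch length, and signal amplitudes are assumptions and not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
fs = 500                      # EEG sampling rate in Hz
duration_s = 2.0              # length of the analyzed steady-state segment
t = np.arange(0, duration_s, 1 / fs)

f_center, f_flanker = 12.0, 15.0   # hypothetical frequency tags for the speakers

# Synthetic single-channel EEG containing responses at both tag frequencies.
eeg = (0.8 * np.sin(2 * np.pi * f_center * t)
       + 1.2 * np.sin(2 * np.pi * f_flanker * t)
       + rng.normal(0.0, 1.0, t.size))

# SSVEP amplitude = spectral amplitude at each tagging frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in [("center speaker", f_center), ("flanking speaker", f_flanker)]:
    amplitude = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"SSVEP amplitude at {f:.0f} Hz ({label}): {amplitude:.2f} (a.u.)")
```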
Affiliation(s)
- Daniel Senkowski
- The Cognitive Neurophysiology Laboratory, Program in Cognitive Neuroscience and Schizophrenia, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA.
|