1
Taylor JJ, Lin C, Talmasov D, Ferguson MA, Schaper FLWVJ, Jiang J, Goodkind M, Grafman J, Etkin A, Siddiqi SH, Fox MD. A transdiagnostic network for psychiatric illness derived from atrophy and lesions. Nat Hum Behav 2023; 7:420-429. PMID: 36635585; PMCID: PMC10236501; DOI: 10.1038/s41562-022-01501-9.
Abstract
Psychiatric disorders share neurobiology and frequently co-occur. This neurobiological and clinical overlap highlights opportunities for transdiagnostic treatments. In this study, we used coordinate and lesion network mapping to test for a shared brain network across psychiatric disorders. In our meta-analysis of 193 studies, atrophy coordinates across six psychiatric disorders mapped to a common brain network defined by positive connectivity to anterior cingulate and insula, and by negative connectivity to posterior parietal and lateral occipital cortex. This network was robust to leave-one-diagnosis-out cross-validation and specific to atrophy coordinates from psychiatric versus neurodegenerative disorders (72 studies). In 194 patients with penetrating head trauma, lesion damage to this network correlated with the number of post-lesion psychiatric diagnoses. Neurosurgical ablation targets for psychiatric illness (four targets) also aligned with the network. This convergent brain network for psychiatric illness may partially explain high rates of psychiatric comorbidity and could highlight neuromodulation targets for patients with more than one psychiatric disorder.
Affiliation(s)
- Joseph J Taylor
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christopher Lin
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Daniel Talmasov
- Departments of Neurology and Psychiatry, Columbia University Medical Center, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Michael A Ferguson
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Center for the Study of World Religions, Harvard Divinity School, Cambridge, MA, USA
- Frederic L W V J Schaper
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jing Jiang
- Stead Family Department of Pediatrics, University of Iowa Carver College of Medicine, Iowa City, IA, USA
- Iowa Neuroscience Institute, University of Iowa Carver College of Medicine, Iowa City, IA, USA
- Madeleine Goodkind
- Department of Psychiatry and Behavioral Sciences, University of New Mexico, Albuquerque, NM, USA
- New Mexico Veterans Affairs Healthcare System, Albuquerque, NM, USA
- Jordan Grafman
- Departments of Physical Medicine and Rehabilitation, Neurology, and Psychiatry, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Shirley Ryan Ability Lab, Chicago, IL, USA
- Amit Etkin
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA
- Wu Tsai Neurosciences Institute at Stanford, Stanford University School of Medicine, Stanford, CA, USA
- Alto Neuroscience, Los Altos, CA, USA
- Shan H Siddiqi
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Michael D Fox
- Center for Brain Circuit Therapeutics, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
2
Yang W, Guo A, Yao H, Yang X, Li Z, Li S, Chen J, Ren Y, Yang J, Wu J, Zhang Z. Effect of aging on audiovisual integration: Comparison of high- and low-intensity conditions in a speech discrimination task. Front Aging Neurosci 2022; 14:1010060. DOI: 10.3389/fnagi.2022.1010060.
Abstract
Audiovisual integration is an essential process that influences speech perception in conversation. However, it is still debated whether older individuals benefit more from audiovisual integration than younger individuals. This ambiguity is likely due to stimulus features, such as stimulus intensity. The purpose of the current study was to explore the effect of aging on audiovisual integration, using event-related potentials (ERPs) at different stimulus intensities. The results showed greater audiovisual integration in older adults at 320–360 ms. Conversely, at 460–500 ms, older adults displayed attenuated audiovisual integration in the frontal, fronto-central, central, and centro-parietal regions compared to younger adults. In addition, we found older adults had greater audiovisual integration at 200–230 ms under the low-intensity condition compared to the high-intensity condition, suggesting inverse effectiveness occurred. However, inverse effectiveness was not found in younger adults. Taken together, the results suggested that there was age-related dissociation in audiovisual integration and inverse effectiveness, indicating that the neural mechanisms underlying audiovisual integration differed between older adults and younger adults.
3
4
Diaz MT, Yalcinbas E. The neural bases of multimodal sensory integration in older adults. Int J Behav Dev 2021; 45:409-417. PMID: 34650316; DOI: 10.1177/0165025420979362.
Abstract
Although hearing often declines with age, prior research has shown that older adults may benefit from multisensory input to a greater extent than younger adults, a concept known as inverse effectiveness. While there is behavioral evidence in support of this phenomenon, less is known about its neural basis. The present fMRI study examined how older and younger adults processed multimodal auditory-visual (AV) phonemic stimuli that were either congruent or incongruent across modalities. Incongruent AV pairs were designed to elicit the McGurk effect. Behaviorally, reaction times were significantly faster during congruent trials than during incongruent trials for both age groups, and older adults responded more slowly overall. The interaction was not significant, suggesting that older adults processed the AV stimuli similarly to younger adults. Although behavioral differences were minimal, age-related differences in functional activation were identified: younger adults showed greater activation than older adults in primary sensory regions, including the superior temporal gyrus, the calcarine fissure, and the left post-central gyrus. In contrast, older adults showed greater activation than younger adults in dorsal frontal regions, including the middle and superior frontal gyri, as well as in dorsal parietal regions. These data suggest that although behavioral sensitivity to multimodal stimuli is stable with age, the neural bases for this effect differ between older and younger adults. Our results demonstrated that older adults under-recruited primary sensory cortices and showed increased recruitment of regions involved in executive function, attention, and monitoring processes, which may reflect an attempt to compensate.
Affiliation(s)
- Michele T Diaz
- Department of Psychology, The Pennsylvania State University
- Ege Yalcinbas
- Neurosciences Department, University of California, San Diego
5
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. PMID: 33718874; PMCID: PMC7941256; DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
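The activation likelihood estimation (ALE) computation this abstract relies on can be illustrated with a toy sketch: each reported focus is modeled as a 3D Gaussian probability blob, and per-voxel values combine as a probabilistic union. This is only a minimal illustration, not the authors' pipeline (real ALE uses sample-size-dependent kernels and permutation-based thresholding); the grid size, voxel size, and fixed FWHM below are assumptions for demonstration.

```python
import numpy as np

def ale_map(foci_mm, shape=(20, 20, 20), voxel_mm=2.0, fwhm_mm=10.0):
    """Toy ALE map: each focus becomes a peak-normalized 3D Gaussian blob,
    and per-voxel values combine as the union 1 - prod(1 - p_i)."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    centers = np.indices(shape).reshape(3, -1).T * voxel_mm  # voxel centers (mm)
    one_minus = np.ones(centers.shape[0])
    for focus in np.atleast_2d(np.asarray(foci_mm, dtype=float)):
        d2 = ((centers - focus) ** 2).sum(axis=1)  # squared distance to focus
        p = np.exp(-d2 / (2.0 * sigma ** 2))  # modeled activation probability
        one_minus *= 1.0 - p
    return (1.0 - one_minus).reshape(shape)

# Two nearby hypothetical foci: the map peaks on a focus and stays in [0, 1]
m = ale_map([[20.0, 20.0, 20.0], [24.0, 20.0, 20.0]])
```

Contrasts between category-specific maps (as in the meta-analysis above) would then compare such maps built from different subsets of foci.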
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
6
Wang A, Payne C, Moss S, Jones WR, Bachevalier J. Early developmental changes in visual social engagement in infant rhesus monkeys. Dev Cogn Neurosci 2020; 43:100778. PMID: 32510341; PMCID: PMC7271941; DOI: 10.1016/j.dcn.2020.100778.
Abstract
Impairments in social interaction in Autism Spectrum Disorder (ASD) differ greatly across individuals and vary throughout an individual's lifetime. Yet an important marker of ASD in infancy is deviation in social-visual engagement, such as the reliably detectable early deviations in attention to the eyes or to biological movement (Klin et al., 2015). Given the critical nature of these early developmental periods, understanding their neurobehavioral underpinnings by means of a nonhuman primate model will be instrumental to understanding the pathophysiology of ASD. Like humans, rhesus macaques 1) engage in rich and complex social behaviors, 2) progressively develop social skills throughout infancy, and 3) are highly similar to humans in brain anatomy and cognitive functions (Machado and Bachevalier, 2003). In this study, male infant rhesus macaques living with their mothers in complex social groups were eye-tracked longitudinally from birth to 6 months while viewing full-faced videos of unfamiliar rhesus monkeys differing in age and sex. The results indicated a critical period for the refinement of social skills around 4–8 weeks of age in rhesus macaques. Specifically, infant monkeys' fixation on the eyes showed an inflection in developmental trajectory, increasing from birth to 8 weeks, decreasing slowly to a trough between 14 and 18 weeks, and then increasing again. These results parallel the developmental trajectory of social visual engagement published in human infants (Jones and Klin, 2013) and suggest the presence of a switch in the critical networks supporting these early developing social skills that is highly conserved between rhesus macaque and human infant development.
Affiliation(s)
- Arick Wang
- Yerkes National Primate Research Ctr., Emory University, Atlanta, GA 30329, United States; Dept. of Psychology, Emory University, Atlanta, GA 30322, United States
- Christa Payne
- Yerkes National Primate Research Ctr., Emory University, Atlanta, GA 30329, United States
- Shannon Moss
- Yerkes National Primate Research Ctr., Emory University, Atlanta, GA 30329, United States
- Warren R Jones
- Dept. of Pediatrics, Emory University School of Medicine, Atlanta, GA 30322, United States; Marcus Autism Center, Atlanta, GA 30329, United States
- Jocelyne Bachevalier
- Yerkes National Primate Research Ctr., Emory University, Atlanta, GA 30329, United States; Dept. of Psychology, Emory University, Atlanta, GA 30322, United States
7
Dollack F, Perusquía-Hernández M, Kadone H, Suzuki K. Head Anticipation During Locomotion With Auditory Instruction in the Presence and Absence of Visual Input. Front Hum Neurosci 2019; 13:293. PMID: 31555112; PMCID: PMC6724718; DOI: 10.3389/fnhum.2019.00293.
Abstract
Head direction has been identified to anticipate trajectory direction during human locomotion, and head anticipation has been shown to persist in darkness. Arguably, the purpose of this anticipatory behavior is related to motor control and trajectory planning, independently of the visual condition, which implies that anticipation should remain in the absence of visual input. However, experiments so far have explored this phenomenon only with visual instructions, which intrinsically prime a visual representation to follow. The primary objective of this study is to describe head anticipation during auditorily instructed locomotion, in the presence and absence of visual input. Auditorily instructed locomotion trajectories were performed in two visual conditions: eyes open and eyes closed. First, 10 sighted participants localized static sound sources to ensure that they could understand the sound cues provided. They then listened to a moving sound source while actively following it. Finally, participants were asked to reproduce the trajectory of the moving sound source without sound. Anticipatory head behavior was observed during trajectory reproduction in both the eyes-open and eyes-closed conditions. The results suggest that head anticipation is related to motor anticipation rather than mental simulation of the trajectory.
Affiliation(s)
- Felix Dollack
- School of Integrative and Global Majors, University of Tsukuba, Tsukuba, Japan; Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan
- Hideki Kadone
- Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan; Center for Innovative Medicine and Engineering, University of Tsukuba Hospital, Tsukuba, Japan; Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan
- Kenji Suzuki
- Artificial Intelligence Laboratory, University of Tsukuba, Tsukuba, Japan; Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan; Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
8
Noel JP, Serino A, Wallace MT. Increased Neural Strength and Reliability to Audiovisual Stimuli at the Boundary of Peripersonal Space. J Cogn Neurosci 2019; 31:1155-1172. DOI: 10.1162/jocn_a_01334.
Abstract
The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network and whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli, as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body and in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit—an approximation of PPS—while recording continuous high-density EEG. We question whether multisensory (vs. unisensory) processing varies as a function of stimulus–observer distance. Results demonstrate a significant increase of global field power (i.e., overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary—an increase that is largest under multisensory (i.e., audiovisual) conditions. 
Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when the neural dynamics are examined in more detail, changes in the reliability of evoked potentials at centroparietal electrodes predict, on a subject-by-subject basis, the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
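The abstract's key dependent measure, global field power, has a simple standard definition: the spatial standard deviation of the signal across all electrodes at each time point. The sketch below illustrates it on a hypothetical epoch; the channel count and sample count are arbitrary assumptions, not the study's montage.

```python
import numpy as np

def global_field_power(eeg):
    """Global field power (GFP): spatial standard deviation across
    electrodes at each time sample, a reference-free summary of overall
    response strength across the montage."""
    eeg = np.asarray(eeg, dtype=float)
    return eeg.std(axis=0)  # one GFP value per time sample

# Hypothetical 64-channel, 500-sample epoch of average-referenced data
rng = np.random.default_rng(0)
epoch = rng.standard_normal((64, 500))
gfp = global_field_power(epoch)
```

Comparing GFP time courses between conditions (e.g., stimuli at vs. beyond the PPS boundary) is what "increase of global field power" refers to in the abstract.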
Affiliation(s)
- Andrea Serino
- University of Lausanne
- Ecole Polytechnique Federale de Lausanne
9
Adult dyslexic readers benefit less from visual input during audiovisual speech processing: fMRI evidence. Neuropsychologia 2018; 117:454-471. DOI: 10.1016/j.neuropsychologia.2018.07.009.
10
Laing M, Rees A, Vuong QC. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity. Front Psychol 2015; 6:1440. PMID: 26483710; PMCID: PMC4591484; DOI: 10.3389/fpsyg.2015.01440.
Abstract
The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
Affiliation(s)
- Mark Laing
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Quoc C Vuong
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
11
Abstract
In spatial perception, visual information has higher acuity than auditory information, and we often misperceive sound-source locations when spatially disparate visual stimuli are presented simultaneously. Ventriloquists make good use of this auditory illusion. In this study, we investigated the neural substrates of the ventriloquism effect to understand the neural mechanism of multimodal integration. The study was performed in two steps. First, we investigated how sound locations are represented in the auditory cortex. Second, we investigated how simultaneous presentation of spatially disparate visual stimuli affects neural processing of sound locations. Based on the population rate code hypothesis, which assumes monotonic sensitivity to sound azimuth across populations of broadly tuned neurons, we expected a monotonic increase of blood oxygenation level-dependent (BOLD) signals for more contralateral sounds. Consistent with this hypothesis, we found that BOLD signals in the posterior superior temporal gyrus increased monotonically as a function of sound azimuth. We also observed attenuation of this monotonic azimuthal sensitivity by spatially disparate visual stimuli. This alteration of the neural pattern is considered to reflect the neural mechanism of the ventriloquism effect. Our findings indicate that conflicting audiovisual spatial information about an event is associated with attenuated neural processing of auditory spatial localization.
Affiliation(s)
- Akiko Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Daniel Callan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Hiroshi Ando
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, Osaka University, Suita, Osaka 565-0871, Japan
12
Yang W, Li Q, Ochi T, Yang J, Gao Y, Tang X, Takahashi S, Wu J. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study. PLoS One 2013; 8:e66402. PMID: 23799097; PMCID: PMC3684583; DOI: 10.1371/journal.pone.0066402.
Abstract
This article investigates whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants; auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants; and audiovisual stimuli comprising both a visual stimulus and an auditory stimulus from one of the four locations were presented simultaneously. These stimuli were presented randomly with equal probability; participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP response over the right temporal and right occipital areas at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a response over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirm that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented to the right or left, suggesting that the human brain might be more sensitive to information received from behind than from either side.
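For context, a common way to quantify the audiovisual integration effects described above in ERP data is the additive-model contrast AV − (A + V): any reliable deviation from zero indicates a nonlinear multisensory interaction. This minimal sketch uses hypothetical single-channel amplitudes, not the study's data.

```python
import numpy as np

def audiovisual_interaction(erp_a, erp_v, erp_av):
    """Additive-model contrast used in ERP audiovisual-integration work:
    integration effect = AV response minus the sum of unisensory responses."""
    return np.asarray(erp_av, dtype=float) - (
        np.asarray(erp_a, dtype=float) + np.asarray(erp_v, dtype=float))

# Hypothetical single-channel amplitudes (microvolts) at three latencies
a, v, av = [1.0, 2.0, 1.0], [0.5, 1.0, 0.5], [2.0, 3.5, 1.5]
effect = audiovisual_interaction(a, v, av)  # → array([0.5, 0.5, 0.0])
```

In practice the contrast is computed per electrode and time point and then tested across participants, which is how time windows such as 160–200 ms are identified.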
Affiliation(s)
- Weiping Yang
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Tatsuya Ochi
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Jingjing Yang
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Yulin Gao
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Xiaoyu Tang
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Satoshi Takahashi
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Jinglong Wu
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
13
Fridriksson J, Hubbard HI, Hudspeth SG, Holland AL, Bonilha L, Fromm D, Rorden C. Speech entrainment enables patients with Broca's aphasia to produce fluent speech. Brain 2012; 135:3815-29. PMID: 23250889; PMCID: PMC3525061; DOI: 10.1093/brain/aws301.
Abstract
A distinguishing feature of Broca's aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect 'speech entrainment' and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca's aphasia. In Experiment 1, 13 patients with Broca's aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca's area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. 
Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words with and without speech entrainment at 1 and 6 weeks after training. Treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca's aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment allows patients to produce fluent speech by providing an external gating mechanism that yokes a ventral language network that encodes conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca's aphasia providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment.
Affiliation(s)
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
14
Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia 2012; 50:762-76. DOI: 10.1016/j.neuropsychologia.2012.01.010.
Abstract
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.