1
Snir A, Cieśla K, Vekslar R, Amedi A. Highly compromised auditory spatial perception in aided congenitally hearing-impaired and rapid improvement with tactile technology. iScience 2024; 27:110808. PMID: 39290844; PMCID: PMC11407022; DOI: 10.1016/j.isci.2024.110808.
Abstract
Spatial understanding is a multisensory construct, while hearing is the only natural sense enabling simultaneous perception of the entire 3D space. To test whether such spatial understanding depends on auditory experience, we study congenitally hearing-impaired users of assistive devices. We apply an in-house technology which, inspired by the auditory system, performs intensity-weighting to represent external spatial positions and motion on the fingertips. We find highly impaired auditory spatial capabilities for tracking moving sources, which, in line with the "critical periods" theory, emphasizes the role of nature in sensory development. Meanwhile, for tactile and audio-tactile spatial motion perception, the hearing-impaired show performance similar to that of typically hearing individuals. The immediate availability of a 360° representation of external space through touch, despite the lack of such experience during the lifetime, points to the significant role of nurture in spatial perception development, and to its amodal character. The findings show promise toward advancing multisensory solutions for rehabilitation.
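The paper's exact intensity-weighting scheme is not spelled out here; purely as an illustration, the sketch below shows one way a source azimuth could be mapped to per-fingertip vibration intensities. The four-actuator layout, the Gaussian weighting, and the 60° tuning width are assumptions, not details taken from the study.

```python
import numpy as np

# Hypothetical layout: four vibrotactile actuators (one per fingertip) at fixed
# azimuths around the user. Each actuator's amplitude is weighted by its angular
# proximity to the sound source, so a moving source produces a moving pattern.
ACTUATOR_AZIMUTHS = np.array([-135.0, -45.0, 45.0, 135.0])  # degrees (assumed)

def intensity_weights(source_azimuth_deg, sigma_deg=60.0):
    """Map a source azimuth (degrees) to per-fingertip intensities summing to 1."""
    # Angular distance on a circle, wrapped to [-180, 180)
    diff = (ACTUATOR_AZIMUTHS - source_azimuth_deg + 180.0) % 360.0 - 180.0
    w = np.exp(-0.5 * (diff / sigma_deg) ** 2)
    return w / w.sum()

# A source sweeping from left to right shifts the intensity pattern smoothly
for azimuth in (-90, -30, 0, 30, 90):
    print(azimuth, np.round(intensity_weights(azimuth), 2))
```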
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel

2
Karunanayaka PR, Lu J, Elyan R, Yang QX, Sathian K. Olfactory-trigeminal integration in the primary olfactory cortex. Hum Brain Mapp 2024; 45:e26772. PMID: 38962966; PMCID: PMC11222875; DOI: 10.1002/hbm.26772.
Abstract
Humans naturally integrate signals from the olfactory and intranasal trigeminal systems. A tight interplay has been demonstrated between these two systems, and yet the neural circuitry mediating olfactory-trigeminal (OT) integration remains poorly understood. Using functional magnetic resonance imaging (fMRI), combined with psychophysics, this study investigated the neural mechanisms underlying OT integration. Fifteen participants with normal olfactory function performed a localization task with air-puff stimuli, phenylethyl alcohol (PEA; rose odor), or a combination thereof while being scanned. The ability to localize PEA to either nostril was at chance. Yet, its presence significantly improved the localization accuracy of weak, but not strong, air-puffs, when both stimuli were delivered concurrently to the same nostril, but not when different nostrils received the two stimuli. This enhancement in localization accuracy, exemplifying the principles of spatial coincidence and inverse effectiveness in multisensory integration, was associated with multisensory integrative activity in the primary olfactory (POC), orbitofrontal (OFC), superior temporal (STC), inferior parietal (IPC) and cingulate cortices, and in the cerebellum. Multisensory enhancement in most of these regions correlated with behavioral multisensory enhancement, as did increases in connectivity between some of these regions. We interpret these findings as indicating that the POC is part of a distributed brain network mediating integration between the olfactory and trigeminal systems. PRACTITIONER POINTS: Psychophysical and neuroimaging study of olfactory-trigeminal (OT) integration. Behavior, cortical activity, and network connectivity show OT integration. OT integration obeys principles of inverse effectiveness and spatial coincidence. Behavioral and neural measures of OT integration are correlated.
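Inverse effectiveness is commonly quantified by comparing the bimodal response against the best unimodal response. The sketch below illustrates that index with invented accuracy values; it is not the authors' analysis.

```python
# Illustrative only: a common multisensory enhancement index is
#   enhancement (%) = 100 * (AV - max(A, V)) / max(A, V).
# The accuracy values below are invented, not taken from the study.

def multisensory_enhancement(unimodal_a, unimodal_b, bimodal):
    best_unimodal = max(unimodal_a, unimodal_b)
    return 100.0 * (bimodal - best_unimodal) / best_unimodal

# Inverse effectiveness: proportionally larger gains when the unisensory
# stimulus is weak (weak vs. strong air-puffs, each paired with the odor).
weak_puff = multisensory_enhancement(0.55, 0.50, 0.70)
strong_puff = multisensory_enhancement(0.85, 0.50, 0.88)
print(f"weak: {weak_puff:.1f}%  strong: {strong_puff:.1f}%")
```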
Affiliation(s)
- Prasanna R. Karunanayaka
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neural and Behavioral Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Public Health Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Jiaming Lu
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Drum Tower Hospital, Medical School of Nanjing University, Nanjing, China
- Rommy Elyan
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Qing X. Yang
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neurosurgery, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- K. Sathian
- Department of Neural and Behavioral Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center, Hershey, Pennsylvania, USA
- Department of Psychology, Pennsylvania State University College of Liberal Arts, State College, Pennsylvania, USA

3
Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel

4
Zou T, Li L, Huang X, Deng C, Wang X, Gao Q, Chen H, Li R. Dynamic causal modeling analysis reveals the modulation of motor cortex and integration in superior temporal gyrus during multisensory speech perception. Cogn Neurodyn 2024; 18:931-946. PMID: 38826672; PMCID: PMC11143173; DOI: 10.1007/s11571-023-09945-z.
Abstract
The processing of speech information from various sensory modalities is crucial for human communication. Both the left posterior superior temporal gyrus (pSTG) and the motor cortex are importantly involved in multisensory speech perception. However, the dynamic integration of primary sensory regions with the pSTG and the motor cortex remains unclear. Here, we implemented a behavioral experiment using the classical McGurk effect paradigm and acquired task functional magnetic resonance imaging (fMRI) data during synchronized audiovisual syllabic perception from 63 normal adults. We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), left middle superior temporal gyrus (mSTG), and left fusiform gyrus (FuG). Bayesian model selection favored a winning model that included modulations of connections to PrG (mSTG → PrG, FuG → PrG), from PrG (PrG → mSTG, PrG → FuG), and to pSTG (mSTG → pSTG, FuG → pSTG). Moreover, the coupling strength of the above connections correlated with behavioral McGurk susceptibility. In addition, significant differences were found in the coupling strength of these connections between strong and weak McGurk perceivers. Strong perceivers modulated less inhibitory visual influence and allowed less excitatory auditory information to flow into PrG, but integrated more audiovisual information in pSTG. Taken together, our findings show that the PrG and pSTG interact dynamically with primary cortices during audiovisual speech, and support the idea that the motor cortex plays a specific functional role in modulating the gain and salience of the auditory and visual modalities. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09945-z.
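DCM itself is typically estimated with dedicated software such as SPM; the sketch below only illustrates, with invented data, how a behavioural McGurk susceptibility score (proportion of fused percepts on incongruent trials) might be correlated with a coupling-strength estimate.

```python
# Conceptual sketch only (invented data): the brain-behaviour correlation step,
# relating a per-subject DCM coupling strength to McGurk susceptibility.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 63

# Proportion of "fusion" responses on incongruent audiovisual trials (invented)
mcgurk_susceptibility = rng.uniform(0.1, 0.9, n_subjects)

# Hypothetical modulatory coupling strength (e.g., mSTG -> PrG), loosely tied to
# susceptibility plus noise, purely for illustration
coupling_mstg_to_prg = 0.4 * mcgurk_susceptibility + rng.normal(0, 0.1, n_subjects)

r, p = stats.pearsonr(coupling_mstg_to_prg, mcgurk_susceptibility)
print(f"r = {r:.2f}, p = {p:.3g}")
```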
Affiliation(s)
- Ting Zou
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Liyuan Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Xinju Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Chijun Deng
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Xuyang Wang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Qing Gao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Huafu Chen
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China
- Rong Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054 People’s Republic of China

5
Jafari Z, Kolb BE, Mohajerani MH. A systematic review of altered resting-state networks in early deafness and implications for cochlear implantation outcomes. Eur J Neurosci 2024; 59:2596-2615. PMID: 38441248; DOI: 10.1111/ejn.16295.
Abstract
Auditory deprivation following congenital/pre-lingual deafness (C/PD) can drastically affect brain development and its functional organisation. This systematic review intends to extend current knowledge of the impact of C/PD and deafness duration on brain resting-state networks (RSNs), review changes in RSNs and spoken language outcomes post-cochlear implant (CI) and draw conclusions for future research. The systematic literature search followed the PRISMA guideline. Two independent reviewers searched four electronic databases using combined keywords: 'auditory deprivation', 'congenital/prelingual deafness', 'resting-state functional connectivity' (RSFC), 'resting-state fMRI' and 'cochlear implant'. Seventeen studies (16 cross-sectional and one longitudinal) met the inclusion criteria. Using the Crowe Critical Appraisal Tool, the publications' quality was rated between 65.0% and 92.5% (mean: 84.10%), ≥80% in 13 out of 17 studies. A few studies were deficient in sampling and/or ethical considerations. According to the findings, early auditory deprivation results in enhanced RSFC between the auditory network and brain networks involved in non-verbal communication, and high levels of spontaneous neural activity in the auditory cortex before CI are evidence of auditory cortical areas being occupied by other sensory modalities (cross-modal plasticity) and of sub-optimal CI outcomes. Overall, current evidence supports the idea that, beyond intramodal and cross-modal plasticity, whole-brain adaptation following auditory deprivation contributes to spoken language development and compensatory behaviours.
Affiliation(s)
- Zahra Jafari
- School of Communication Sciences and Disorders (SCSD), Dalhousie University, Halifax, Nova Scotia, Canada
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Douglas Research Centre, Department of Psychiatry, McGill University, Montreal, Québec, Canada

6
Oberman LM, Francis SM, Beynel L, Hynd M, Jaime M, Robins PL, Deng ZD, Stout J, van der Veen JW, Lisanby SH. Design and methodology for a proof of mechanism study of individualized neuronavigated continuous Theta burst stimulation for auditory processing in adolescents with autism spectrum disorder. Front Psychiatry 2024; 15:1304528. PMID: 38389984; PMCID: PMC10881663; DOI: 10.3389/fpsyt.2024.1304528.
Abstract
It has been suggested that aberrant excitation/inhibition (E/I) balance and dysfunctional structure and function of relevant brain networks may underlie the symptoms of autism spectrum disorder (ASD). However, the nomological network linking these constructs to quantifiable measures and mechanistically relating these constructs to behavioral symptoms of ASD is lacking. Herein we describe a within-subject, controlled, proof-of-mechanism study investigating the pathophysiology of auditory/language processing in adolescents with ASD. We utilize neurophysiological and neuroimaging techniques including magnetic resonance spectroscopy (MRS), diffusion-weighted imaging (DWI), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG) metrics of language network structure and function. Additionally, we apply a single, individually targeted session of continuous theta burst stimulation (cTBS) as an experimental probe of the impact of perturbation of the system on these neurophysiological and neuroimaging outcomes. MRS, fMRI, and MEG measures are evaluated at baseline and immediately prior to and following cTBS over the posterior superior temporal cortex (pSTC), a region involved in auditory and language processing deficits in ASD. Also, behavioral measures of ASD and language processing and DWI measures of auditory/language network structures are obtained at baseline to characterize the relationship between the neuroimaging and neurophysiological measures and baseline symptom presentation. We hypothesize that local gamma-aminobutyric acid (GABA) and glutamate concentrations (measured with MRS), and structural and functional activity and network connectivity (measured with DWI and fMRI), will significantly predict MEG indices of auditory/language processing and behavioral deficits in ASD. Furthermore, a single session of cTBS over left pSTC is hypothesized to lead to significant, acute changes in local glutamate and GABA concentration, functional activity and network connectivity, and MEG indices of auditory/language processing. We have completed the pilot phase of the study (n=20 Healthy Volunteer adults) and have begun enrollment for the main phase with adolescents with ASD (n=86; age 14-17). If successful, this study will establish a nomological network linking local E/I balance measures to functional and structural connectivity within relevant brain networks, ultimately connecting them to ASD symptoms. Furthermore, this study will inform future therapeutic trials using cTBS to treat the symptoms of ASD.
Affiliation(s)
- Lindsay M Oberman
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Sunday M Francis
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Lysianne Beynel
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Megan Hynd
- Clinical Affective Neuroscience Laboratory, Department of Psychology & Neuroscience, University of North Carolina, Chapel Hill, NC, United States
- Miguel Jaime
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Pei L Robins
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Zhi-De Deng
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Jeff Stout
- Magnetoencephalography Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Jan Willem van der Veen
- Magnetic Resonance Spectroscopy Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States
- Sarah H Lisanby
- Noninvasive Neuromodulation Unit, Experimental Therapeutics and Pathophysiology Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States

7
Whitehead JC, Spiousas I, Armony JL. Individual differences in the evaluation of ambiguous visual and auditory threat-related expressions. Eur J Neurosci 2024; 59:370-393. PMID: 38185821; DOI: 10.1111/ejn.16220.
Abstract
This study investigated the neural correlates of the judgement of auditory and visual ambiguous threat-related information, and the influence of state anxiety on this process. Healthy subjects were scanned using a fast, high-resolution functional magnetic resonance imaging (fMRI) multiband sequence while they performed a two-alternative forced-choice emotion judgement task on faces and vocal utterances conveying explicit anger or fear, as well as ambiguous ones. Critically, the latter were specific to each subject, obtained through a morphing procedure and selected prior to scanning, following a perceptual decision-making task. Behavioural results confirmed greater task difficulty for subject-specific ambiguous stimuli and also revealed a judgement bias for visual fear, and, to a lesser extent, for auditory anger. Imaging results showed increased activity in regions of the salience and frontoparietal control networks (FPCNs) and deactivation in areas of the default mode network for ambiguous, relative to explicit, expressions. In contrast, the right amygdala (AMG) responded more strongly to explicit stimuli. Interestingly, its response to the same ambiguous stimulus depended on the subjective judgement of the expression. Finally, we found that behavioural and neural differences between ambiguous and explicit expressions decreased as a function of state anxiety scores. Taken together, our results show that behavioural and brain responses to emotional expressions are determined not only by emotional clarity but also by modality and the subjects' subjective perception of the emotion expressed, and that some of these responses are modulated by state anxiety levels.
Affiliation(s)
- Jocelyne C Whitehead
- Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
- Ignacio Spiousas
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina
- Jorge L Armony
- Human Neuroscience, Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Laboratorio Interdisciplinario del Tiempo y la Experiencia (LITERA), CONICET, Universidad de San Andrés, Victoria, Argentina
- Department of Psychiatry, McGill University, Montreal, Quebec, Canada

8
Răutu IS, De Tiège X, Jousmäki V, Bourguignon M, Bertels J. Speech-derived haptic stimulation enhances speech recognition in a multi-talker background. Sci Rep 2023; 13:16621. PMID: 37789043; PMCID: PMC10547762; DOI: 10.1038/s41598-023-43644-3.
Abstract
Speech understanding, while effortless in quiet conditions, is challenging in noisy environments. Previous studies have revealed that a feasible approach to supplement speech-in-noise (SiN) perception consists in presenting speech-derived signals as haptic input. In the current study, we investigated whether the presentation of a vibrotactile signal derived from the speech temporal envelope can improve SiN intelligibility in a multi-talker background for untrained, normal-hearing listeners. We also determined if vibrotactile sensitivity, evaluated using vibrotactile detection thresholds, modulates the extent of audio-tactile SiN improvement. In practice, we measured participants' speech recognition in a multi-talker noise without (audio-only) and with (audio-tactile) concurrent vibrotactile stimulation delivered in three schemes: to the left or right palm, or to both. Averaged across the three stimulation delivery schemes, the vibrotactile stimulation led to a significant improvement of 0.41 dB in SiN recognition when compared to the audio-only condition. Notably, there were no significant differences observed between the improvements in these delivery schemes. In addition, audio-tactile SiN benefit was significantly predicted by participants' vibrotactile threshold levels and unimodal (audio-only) SiN performance. The extent of the improvement afforded by speech-envelope-derived vibrotactile stimulation was in line with previously uncovered vibrotactile enhancements of SiN perception in untrained listeners with no known hearing impairment. Overall, these results highlight the potential of concurrent vibrotactile stimulation to improve SiN recognition, especially in individuals with poor SiN perception abilities, and tentatively more so with increasing tactile sensitivity. Moreover, they lend support to the multimodal accounts of speech perception and research on tactile speech aid devices.
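The authors' exact signal chain is not reproduced here; as a rough sketch under assumed parameters, a speech-derived vibrotactile drive can be built by low-pass filtering the speech amplitude envelope and using it to modulate a carrier in the skin's sensitive frequency range.

```python
# Illustrative sketch (not the study's exact pipeline): speech temporal envelope
# extracted via the analytic signal, smoothed to syllabic rates, then used to
# amplitude-modulate a carrier near Pacinian sensitivity (~200-300 Hz). The
# cutoff and carrier frequencies are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def speech_to_vibration(speech, fs, envelope_cutoff_hz=10.0, carrier_hz=230.0):
    """Return a vibrotactile drive: low-pass speech envelope times a sine carrier."""
    envelope = np.abs(hilbert(speech))                      # broadband amplitude envelope
    b, a = butter(4, envelope_cutoff_hz / (fs / 2), btype="low")
    envelope = np.clip(filtfilt(b, a, envelope), 0, None)   # keep slow fluctuations only
    t = np.arange(len(speech)) / fs
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Example with a synthetic "speech-like" signal (150 Hz tone, 4 Hz syllabic rhythm)
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
toy_speech = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
drive = speech_to_vibration(toy_speech, fs)
print(drive.shape)
```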
Affiliation(s)
- I Sabina Răutu
- Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LN2T), UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium.
- Xavier De Tiège
- Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LN2T), UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Service de Neuroimagerie Translationnelle, Hôpital Universitaire de Bruxelles (H.U.B.), CUB Hôpital Erasme, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon
- Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LN2T), UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, 20009, San Sebastián, Spain
- Laboratory of Neurophysiology and Movement Biomechanics, UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- Julie Bertels
- Laboratoire de Neuroanatomie et de Neuroimagerie Translationnelles (LN2T), UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium
- ULBabylab, Center for Research in Cognition and Neurosciences (CRCN), UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), Brussels, Belgium

9
Beck J, Dzięgiel-Fivet G, Jednoróg K. Similarities and differences in the neural correlates of letter and speech sound integration in blind and sighted readers. Neuroimage 2023; 278:120296. PMID: 37495199; DOI: 10.1016/j.neuroimage.2023.120296.
Abstract
Learning letter and speech sound (LS) associations is a major step in reading acquisition common for all alphabetic scripts, including Braille used by blind readers. The left superior temporal cortex (STC) plays an important role in audiovisual LS integration in sighted people, but it is still unknown what neural mechanisms are responsible for audiotactile LS integration in blind individuals. Here, we investigated the similarities and differences between LS integration in blind Braille (N = 42, age range: 9-60 y.o.) and sighted print (N = 47, age range: 9-60 y.o.) readers who acquired reading using different sensory modalities. In both groups, the STC responded to both isolated letters and isolated speech sounds, showed enhanced activation when they were presented together, and distinguished between congruent and incongruent letter and speech sound pairs. However, the direction of the congruency effect was different between the groups. Sighted subjects showed higher activity for incongruent LS pairs in the bilateral STC, similarly to previously studied typical readers of transparent orthographies. In the blind, congruent pairs resulted in an increased response in the right STC. These differences may be related to more sequential processing of Braille as compared to print reading. At the same time, behavioral efficiency in LS discrimination decisions and the congruency effect were found to be related to age and reading skill only in sighted participants, suggesting potential differences in the developmental trajectories of LS integration between blind and sighted readers.
Affiliation(s)
- Joanna Beck
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland.
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3, Warsaw 02-093, Poland

10
Silfwerbrand L, Ogata Y, Yoshimura N, Koike Y, Gingnell M. An fMRI-study of leading and following using rhythmic tapping. Soc Neurosci 2023; 17:558-567. PMID: 36891876; DOI: 10.1080/17470919.2023.2189615.
Abstract
Leading and following is about synchronizing and joining actions in accordance with the differences that the leader and follower roles provide. The neural reactivity representing these roles was measured in an explorative fMRI study, in which two persons led and followed each other in finger tapping using simple, individual, pre-learnt rhythms. All participants acted both as leader and follower. Neural reactivity for both leading and following was related to social awareness and adaptation, distributed over the lateral STG, STS, and TPJ. Reactivity for following contrasted with leading mostly reflected sensorimotor and rhythmic processing in cerebellar lobules IV and V, the somatosensory cortex, and the SMA. During leading, as opposed to following, neural reactivity was observed in the insula and bilaterally in the superior temporal gyrus, pointing toward empathy, sharing of feelings, temporal coding, and social engagement. Areas for continuous adaptation, in the posterior cerebellum and Rolandic operculum, were activated during both leading and following. This study indicated mutual adaptation of leader and follower during tapping, and that the two roles gave rise to largely similar neuronal reactivity. The differences between the roles indicated that leading was more socially focused, whereas following showed more motor- and temporally related neural reactivity.
Affiliation(s)
- Lykke Silfwerbrand
- Department of Medical Sciences, Psychiatry, Akademiska Sjukhuset, Uppsala, Sweden
- Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Japan
- Yousuke Ogata
- Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Japan
- Natsue Yoshimura
- Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Japan
- Yasuharu Koike
- Institute of Innovative Research, Tokyo Institute of Technology, Midori-ku, Yokohama, Japan
- Malin Gingnell
- Department of Medical Sciences, Psychiatry, Akademiska Sjukhuset, Uppsala, Sweden
- Department of Psychology, Emotion Psychology, Uppsala University, Uppsala, Sweden

11
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. PMID: 36084305; DOI: 10.1515/revneuro-2022-0065.
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. Therefore, we conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles. Here, the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing brain regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. Therefore, by including multiple sensory modalities in our meta-analysis, the results may provide evidence for a common brain network that supports different functional roles in multisensory integration.
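ALE is normally run with dedicated tools such as GingerALE or NiMARE; the toy sketch below only illustrates the core idea of modelled-activation (MA) maps combined across experiments, with an invented grid, FWHM, and foci.

```python
# Conceptual sketch of the core ALE computation, not a replacement for dedicated
# software. Each reported focus is modelled as a 3D Gaussian probability map
# ("modelled activation", MA); MA maps are combined within an experiment and the
# ALE value is the voxel-wise union across experiments: ALE = 1 - prod(1 - MA).
import numpy as np

GRID = (20, 20, 20)            # toy voxel grid (assumption)
VOXEL_MM = 4.0
FWHM_MM = 10.0
SIGMA_VOX = FWHM_MM / (2.354820045 * VOXEL_MM)

def ma_map(foci_vox):
    """Modelled-activation map for one experiment: max Gaussian over its foci."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in GRID], indexing="ij")
    ma = np.zeros(GRID)
    for fz, fy, fx in foci_vox:
        d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA_VOX ** 2)))
    return ma

# Two invented "experiments" reporting foci near the same location
experiments = [[(10, 10, 10), (5, 12, 8)], [(11, 9, 10)]]
ma_maps = np.stack([ma_map(f) for f in experiments])
ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)
print(ale.max(), np.unravel_index(ale.argmax(), GRID))
```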
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany

12
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. PMID: 36816496; PMCID: PMC9932987; DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Correspondence: Stefania Benetti
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy

13
Holmes S, Mar'i J, Simons LE, Zurakowski D, LeBel AA, O'Brien M, Borsook D. Integrated Features for Optimizing Machine Learning Classifiers of Pediatric and Young Adults With a Post-Traumatic Headache From Healthy Controls. Front Pain Res 2022; 3:859881. PMID: 35655747; PMCID: PMC9152124; DOI: 10.3389/fpain.2022.859881.
Abstract
Post-traumatic headache (PTH) is a challenging clinical condition to identify and treat, as it integrates multiple subjectively defined symptoms with underlying physiological processes. The precise mechanisms underlying PTH are unclear, and it remains to be understood how to integrate the patient experience with the underlying biology when attempting to classify persons with PTH, particularly in the pediatric setting where patient self-report may be highly variable. The objective of this investigation was to evaluate the use of different machine learning (ML) classifiers to differentiate pediatric and young adult subjects with PTH from healthy controls, using behavioral data from self-report questionnaires that reflect concussion symptoms, mental health, and the pain experience of the participants, together with structural brain imaging from cortical and sub-cortical locations. Behavioral data, alongside brain imaging, survived data reduction methods and both contributed toward the final models. Behavioral data that contributed toward the final model included both the child's and the parent's perspective on the pain experience. Brain imaging features produced two unique clusters that reflect regions previously implicated in mild traumatic brain injury (mTBI) and PTH. Affinity-based propagation analysis demonstrated that behavioral data remained independent of the neuroimaging data, suggesting that there is a role for both behavioral and brain imaging data when attempting to classify children with PTH.
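The study's specific classifiers and feature-reduction steps are not reproduced here; the sketch below shows a generic scikit-learn pipeline for the same kind of problem, with invented behavioral and imaging features and labels.

```python
# Illustration only: classify PTH vs. healthy controls from concatenated
# behavioral and imaging features. Feature counts, classifier choice, and data
# are invented and are not the study's models.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_behavioral, n_imaging = 80, 25, 120

X = np.hstack([
    rng.normal(size=(n_subjects, n_behavioral)),   # questionnaire scores (invented)
    rng.normal(size=(n_subjects, n_imaging)),      # cortical/sub-cortical features (invented)
])
y = rng.integers(0, 2, n_subjects)                 # 1 = PTH, 0 = healthy control

clf = make_pipeline(StandardScaler(), PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```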
Affiliation(s)
- Scott Holmes
- Pediatric Pain Pathway Lab, Department of Anesthesia, Critical Care, and Pain Medicine, Boston Children's Hospital – Harvard Medical School, Boston, MA, United States
- Pain and Affective Neuroscience Center, Boston Children's Hospital, Boston, MA, United States
- Correspondence: Scott Holmes
- Joud Mar'i
- Pediatric Pain Pathway Lab, Department of Anesthesia, Critical Care, and Pain Medicine, Boston Children's Hospital – Harvard Medical School, Boston, MA, United States
- Laura E. Simons
- Department of Anesthesiology, Perioperative, and Pain Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- David Zurakowski
- Department of Anesthesia, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Alyssa Ann LeBel
- Department of Anesthesia, Critical Care, and Pain Medicine, Boston Children's Hospital, Boston, MA, United States
- Michael O'Brien
- Sports Medicine Division, Sports Concussion Clinic, Orthopedic Surgery, Harvard Medical School, Boston, MA, United States
- David Borsook
- Departments of Psychiatry and Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States

14
Ito T, Ogane R. Repetitive Exposure to Orofacial Somatosensory Inputs in Speech Perceptual Training Modulates Vowel Categorization in Speech Perception. Front Psychol 2022; 13:839087. PMID: 35558689; PMCID: PMC9088678; DOI: 10.3389/fpsyg.2022.839087.
Abstract
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in the learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, such as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ was changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli in order to expose participants to stimulus variants which covered the range between /ε/ and /a/ evenly. In this phase, to mimic the sensory input that accompanies speech production and learning, somatosensory stimulation was applied in the upward direction to an experimental group whenever the stimulus sound was presented. A control group (CTL) followed the same training procedure in the absence of somatosensory stimulation. When we compared category boundaries prior to and following paired auditory-somatosensory training, the boundary for participants in the experimental group reliably changed in the direction of /ε/, indicating that the participants perceived /a/ more than /ε/ as a consequence of training. In contrast, the CTL group did not show any change. Although a limited number of participants were tested, the perceptual shift was reduced and almost eliminated 1 week later. Our data suggest that repetitive exposure to somatosensory inputs, in a task that simulates the sensory pairing which occurs during speech production, changes the perceptual system and supports the idea that somatosensory inputs play a role in speech perceptual adaptation, probably contributing to the formation of sound representations for speech perception.
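The adaptive maximum-likelihood procedure itself is not reproduced here; as a simplified illustration with invented response data, the category boundary can be estimated by fitting a logistic psychometric function to /a/-response proportions along the /ε/-/a/ continuum and reading off the 50% point.

```python
# Sketch of category-boundary estimation (not the authors' adaptive procedure):
# fit a logistic psychometric function to the proportion of /a/ responses along
# an /ε/-/a/ continuum; the 50% point is the boundary. Response data are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

continuum_step = np.arange(1, 10)                      # 9 steps from /ε/ to /a/
prop_a_responses = np.array([0.02, 0.05, 0.10, 0.25, 0.45, 0.70, 0.90, 0.97, 0.99])

(boundary, slope), _ = curve_fit(logistic, continuum_step, prop_a_responses, p0=[5.0, 1.0])
print(f"estimated boundary at step {boundary:.2f} (slope {slope:.2f})")

# A training-induced shift of the boundary toward /ε/ (a smaller boundary value)
# would mean more stimuli are heard as /a/, as reported for the experimental group.
```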
Affiliation(s)
- Takayuki Ito
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States
- Rintaro Ogane
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Haskins Laboratories, New Haven, CT, United States

15
Tomasino B, Del Negro I, Garbo R, Gigli GL, D'Agostini S, Valente MR. Multisensory mental imagery of fatigue: Evidence from an fMRI study. Hum Brain Mapp 2022; 43:3143-3152. PMID: 35315967; PMCID: PMC9189079; DOI: 10.1002/hbm.25839.
Abstract
Functional imaging experimental designs measuring fatigue, defined as a subjective lack of physical and/or mental energy characterizing a wide range of neurologic conditions, are still under development. Nineteen right‐handed healthy subjects (9 M and 10 F, mean age 43.15 ± 8.34 years) were evaluated by means of functional magnetic resonance imaging (fMRI), asking them to perform explicit, first‐person, mental imagery of fatigue‐related multisensory sensations. Short sentences designed to assess the principal manifestations of fatigue from the Multidimensional Fatigue Symptom Inventory were presented. Participants were asked to imagine the corresponding sensations (Sensory Imagery, SI). As a control, they had to imagine the visual scenes (Visual Imagery, VI) described in short phrases. The SI task (vs. VI task) differentially activated three areas: (i) the precuneus, which is involved in first‐person perspective taking; (ii) the left superior temporal sulcus, which is a multisensory integration area; and (iii) the left inferior frontal gyrus, known to be involved in mental imagery network. The SI fMRI task can be used to measure processing involved in mental imagery of fatigue‐related multisensory sensations.
Affiliation(s)
- Barbara Tomasino
- Scientific Institute IRCCS "Eugenio Medea", Polo FVG, Pasian di Prato (UD), Italy
- Ilaria Del Negro
- Clinical Neurology, Azienda Sanitaria Universitaria Friuli Centrale, Presidio Ospedaliero Santa Maria della Misericordia, Udine, Italy
- Riccardo Garbo
- Clinical Neurology, Azienda Sanitaria Universitaria Friuli Centrale, Presidio Ospedaliero Santa Maria della Misericordia, Udine, Italy
- Gian Luigi Gigli
- Clinical Neurology, Azienda Sanitaria Universitaria Friuli Centrale, Presidio Ospedaliero Santa Maria della Misericordia, Udine, Italy
- Neurology Unit, Department of Medicine (DAME), University of Udine, Udine, Italy
- Serena D'Agostini
- Neuroradiology, Azienda Sanitaria Universitaria Friuli Centrale, Presidio Ospedaliero Santa Maria della Misericordia, Udine, Italy
- Maria Rosaria Valente
- Clinical Neurology, Azienda Sanitaria Universitaria Friuli Centrale, Presidio Ospedaliero Santa Maria della Misericordia, Udine, Italy
- Neurology Unit, Department of Medicine (DAME), University of Udine, Udine, Italy

16
Kothare H, Schneider S, Mizuiri D, Hinkley L, Bhutada A, Ranasinghe K, Honma S, Garrett C, Klein D, Naunheim M, Yung K, Cheung S, Rosen C, Courey M, Nagarajan S, Houde J. Temporal specificity of abnormal neural oscillations during phonatory events in laryngeal dystonia. Brain Commun 2022; 4:fcac031. PMID: 35356032; PMCID: PMC8962453; DOI: 10.1093/braincomms/fcac031.
Abstract
Laryngeal dystonia is a debilitating disorder of voicing in which the laryngeal muscles are intermittently in spasm, resulting in involuntary interruptions during speech. The central pathophysiology of laryngeal dystonia, underlying computational impairments in vocal motor control, remains poorly understood. Although prior imaging studies have found aberrant activity in the CNS during phonation in patients with laryngeal dystonia, it is not known at what timepoints during phonation these abnormalities emerge and what function may be impaired. To investigate this question, we recruited 22 adductor laryngeal dystonia patients (15 female, age range = 28.83-72.46 years) and 18 controls (eight female, age range = 27.40-71.34 years). We leveraged the fine temporal resolution of magnetoencephalography to monitor neural activity around glottal movement onset, subsequent voice onset and after the onset of pitch feedback perturbations. We examined event-related beta-band (12-30 Hz) and high-gamma-band (65-150 Hz) neural oscillations. Prior to glottal movement onset, we observed abnormal frontoparietal motor preparatory activity. After glottal movement onset, we observed abnormal activity in the somatosensory cortex persisting through voice onset. Prior to voice onset and continuing after, we also observed abnormal activity in the auditory cortex and the cerebellum. After pitch feedback perturbation onset, we observed no differences between controls and patients in their behavioural responses to the perturbation. But in patients, we did find abnormal activity in brain regions thought to be involved in the auditory feedback control of vocal pitch (premotor, motor, somatosensory and auditory cortices). Our study results confirm the abnormal processing of somatosensory feedback that has been seen in other studies. However, there were several remarkable findings in our study. First, patients have impaired vocal motor activity even before glottal movement onset, suggesting abnormal movement preparation. These results are significant because (i) since they occur before movement onset, the abnormalities in patients cannot be ascribed to deficits in vocal performance, and (ii) they show that neural abnormalities in laryngeal dystonia are more than just abnormal responses to sensory feedback during phonation, as has been hypothesized in some previous studies. Second, abnormal auditory cortical activity in patients begins even before voice onset, suggesting abnormalities in setting up auditory predictions before the arrival of auditory feedback at voice onset. Generally, activation abnormalities identified in key brain regions within the speech motor network around various phonation events not only provide temporal specificity to neuroimaging phenotypes in laryngeal dystonia but also may serve as potential therapeutic targets for neuromodulation.
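The authors' MEG source-reconstruction pipeline is not shown here; the sketch below illustrates, with invented data and assumed parameters, the generic recipe for event-related band power: band-pass filtering, Hilbert amplitude envelopes, and epoching around event onsets such as glottal movement or voice onset.

```python
# Simplified sketch (invented data, assumed sampling rate): band-limited amplitude
# envelopes epoched around event onsets, for the beta (12-30 Hz) and high-gamma
# (65-150 Hz) bands examined in the study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # sampling rate in Hz (assumption)
rng = np.random.default_rng(1)
signal = rng.normal(size=int(60 * fs))       # one minute of fake sensor data
event_samples = np.arange(5, 55, 5) * fs     # invented event onsets (seconds -> samples)

def band_envelope(x, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def epoch(env, events, pre_s=0.5, post_s=1.0):
    pre, post = int(pre_s * fs), int(post_s * fs)
    return np.stack([env[int(e) - pre:int(e) + post] for e in events])

beta = epoch(band_envelope(signal, 12, 30), event_samples)         # 12-30 Hz
high_gamma = epoch(band_envelope(signal, 65, 150), event_samples)  # 65-150 Hz
print(beta.shape, high_gamma.shape)   # (n_events, n_samples per epoch)
```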
Affiliation(s)
- Hardik Kothare
- UC Berkeley-UCSF Graduate Program in Bioengineering, San Francisco, CA, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sarah Schneider
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA
- Danielle Mizuiri
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Leighton Hinkley
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Abhishek Bhutada
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Kamalini Ranasinghe
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Susanne Honma
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Coleman Garrett
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- David Klein
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA
- Molly Naunheim
- Department of Otolaryngology—Head and Neck Surgery, Washington University School of Medicine in St Louis, St Louis, MO, USA
- Katherine Yung
- San Francisco Voice & Swallowing, San Francisco, CA, USA
- Steven Cheung
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA
- Clark Rosen
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA
- Mark Courey
- Department of Otolaryngology—Head and Neck Surgery, Mount Sinai Health System, New York, NY, USA
- Srikantan Nagarajan
- UC Berkeley-UCSF Graduate Program in Bioengineering, San Francisco, CA, USA
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA
- John Houde
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, USA

17
Matsuzaki J, Kagitani-Shimono K, Aoki S, Hanaie R, Kato Y, Nakanishi M, Tatsumi A, Tominaga K, Yamamoto T, Nagai Y, Mohri I, Taniike M. Abnormal cortical responses elicited by audiovisual movies in patients with autism spectrum disorder with atypical sensory behavior: A magnetoencephalographic study. Brain Dev 2022; 44:81-94. PMID: 34563417; DOI: 10.1016/j.braindev.2021.08.007.
Abstract
BACKGROUND Atypical sensory behavior disrupts behavioral adaptation in children with autism spectrum disorder (ASD); however, the neural correlates of sensory dysfunction measured using magnetoencephalography (MEG) remain unclear. METHOD We used MEG to measure the cortical activation elicited by visual (unisensory) and audiovisual (multisensory) movies in 46 children (7-14 years) included in the final analysis: 13 boys with atypical audiovisual behavior in ASD (AAV+), 10 ASD boys without this condition, and 23 age-matched typically developing boys. RESULTS The AAV+ group demonstrated increased cortical activation in the bilateral insula in response to unisensory movies, and in the left occipital, right superior temporal sulcus (rSTS), and temporal regions in response to multisensory movies. These increased responses were correlated with the severity of the sensory impairment. Increased theta-low gamma oscillations were observed in the rSTS in the AAV+ group. CONCLUSION The findings suggest that atypical audiovisual behavior is attributable to atypical neural networks centered on the rSTS.
Affiliation(s)
- Junko Matsuzaki
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Kuriko Kagitani-Shimono
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan.
| | - Sho Aoki
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan
| | - Ryuzo Hanaie
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Yoko Kato
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan
| | - Mariko Nakanishi
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Aika Tatsumi
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Koji Tominaga
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Tomoka Yamamoto
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Yukie Nagai
- International Research Center for Neurointelligence, The University of Tokyo, Tokyo, Japan
| | - Ikuko Mohri
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Masako Taniike
- Division of Developmental Neuroscience, Department of Child Development, United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| |
|
18
|
Leaver AM, Gonzalez S, Vasavada M, Kubicki A, Jog M, Wang DJJ, Woods RP, Espinoza R, Gollan J, Parrish T, Narr KL. Modulation of Brain Networks during MR-Compatible Transcranial Direct Current Stimulation. Neuroimage 2022; 250:118874. [PMID: 35017127 PMCID: PMC9623807 DOI: 10.1016/j.neuroimage.2022.118874] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 11/16/2021] [Accepted: 01/04/2022] [Indexed: 10/19/2022] Open
Abstract
Transcranial direct current stimulation (tDCS) can influence performance on behavioral tasks and improve symptoms of brain conditions. Yet, it remains unclear precisely how tDCS affects brain function and connectivity. Here, we measured changes in functional connectivity (FC) metrics in blood-oxygenation-level-dependent (BOLD) fMRI data acquired during MR-compatible tDCS in a whole-brain analysis with corrections for false discovery rate. Volunteers (n=64) received active tDCS, sham tDCS, and rest (no stimulation), using one of three previously established electrode tDCS montages targeting left dorsolateral prefrontal cortex (DLPFC, n=37), lateral temporoparietal area (LTA, n=16), or superior temporal cortex (STC, n=11). In brain networks where simulated E field was highest in each montage, connectivity with remote nodes decreased during active tDCS. During active DLPFC-tDCS, connectivity decreased between a fronto-parietal network and subgenual ACC, while during LTA-tDCS connectivity decreased between an auditory-somatomotor network and frontal operculum. Active DLPFC-tDCS was also associated with increased connectivity within an orbitofrontal network overlapping subgenual ACC. Irrespective of montage, FC metrics increased in sensorimotor and attention regions during both active and sham tDCS, which may reflect the cognitive-perceptual demands of tDCS. Taken together, these results indicate that tDCS may have both intended and unintended effects on ongoing brain activity, stressing the importance of including sham, stimulation-absent, and active comparators in basic science and clinical trials of tDCS.
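The whole-brain comparisons described in this abstract rely on false-discovery-rate control. As a purely illustrative aid (toy p-values rather than the study's data, and a hand-rolled Benjamini-Hochberg procedure rather than the authors' actual pipeline), a minimal sketch of FDR correction over connection-wise p-values might look like this:

```python
# Minimal sketch of Benjamini-Hochberg FDR correction over connection-wise
# p-values; the p-values below are simulated and purely illustrative.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Boolean mask of tests surviving Benjamini-Hochberg FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)                     # ranks p-values from smallest to largest
    ranked = p[order]
    m = p.size
    thresh = q * np.arange(1, m + 1) / m      # BH step-up thresholds
    below = ranked <= thresh
    survive = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.where(below)[0])        # largest rank meeting its threshold
        survive[order[:k + 1]] = True         # that rank and all smaller p-values survive
    return survive

rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 0.001, 20),   # a few genuine effects
                        rng.uniform(0, 1, 980)])     # null connections
print("connections surviving FDR:", int(fdr_bh(pvals).sum()))
```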
Affiliation(s)
- Amber M Leaver
- Department of Radiology, Northwestern University, Chicago, IL, 60611; Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095.
| | - Sara Gonzalez
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095
| | - Megha Vasavada
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095
| | - Antoni Kubicki
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095
| | - Mayank Jog
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095
| | - Danny J J Wang
- Department of Neurology, University of Southern California, Los Angeles CA 90033
| | - Roger P Woods
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095; Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, 90095
| | - Randall Espinoza
- Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, 90095
| | - Jacqueline Gollan
- Department of Psychiatry and Behavioral Sciences, Northwestern University, Chicago, IL, 60611
| | - Todd Parrish
- Department of Radiology, Northwestern University, Chicago, IL, 60611
| | - Katherine L Narr
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, 90095; Department of Psychiatry and Biobehavioral Sciences, University of California Los Angeles, Los Angeles, CA, 90095
| |
|
19
|
Robotically-induced hallucination triggers subtle changes in brain network transitions. Neuroimage 2021; 248:118862. [PMID: 34971766 DOI: 10.1016/j.neuroimage.2021.118862] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 11/19/2021] [Accepted: 12/23/2021] [Indexed: 01/20/2023] Open
Abstract
The perception that someone is nearby, although nobody can be seen or heard, is called presence hallucination (PH). Being a frequent hallucination in patients with Parkinson's disease, it has been argued to be indicative of a more severe and rapidly advancing form of the disease, associated with psychosis and cognitive decline. PH may also occur in healthy individuals and has recently been experimentally induced, in a controlled manner during fMRI, using MR-compatible robotics and sensorimotor stimulation. Previous neuroimaging correlates of such robot-induced PH, based on conventional time-averaged fMRI analysis, identified altered activity in the posterior superior temporal sulcus and inferior frontal gyrus in healthy individuals. However, no link with the strength of the robot-induced PH was observed, and such activations were also associated with other sensations induced by robotic stimulation. Here we leverage recent advances in dynamic functional connectivity, which have been applied to different psychiatric conditions, to decompose fMRI data during PH-induction into a set of co-activation patterns that are tracked over time, as to characterize their occupancies, durations, and transitions. Our results reveal that, when PH is induced, the identified brain patterns significantly and selectively increase their transition probabilities towards a specific brain pattern, centred on the posterior superior temporal sulcus, angular gyrus, dorso-lateral prefrontal cortex, and middle prefrontal cortex. This change is not observed in any other control conditions, nor is it observed in association with other sensations induced by robotic stimulation. The present findings describe the neural mechanisms of PH in healthy individuals and identify a specific disruption of the dynamics of network interactions, extending previously reported network dysfunctions in psychotic patients with hallucinations to an induced robot-controlled specific hallucination in healthy individuals.
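One common way to implement the co-activation pattern analysis sketched in this abstract is to cluster frame-wise activity with k-means and then read occupancies and transition probabilities off the resulting state sequence. The following is a minimal sketch under that assumption, using simulated data; it is not the authors' exact pipeline.

```python
# Minimal co-activation pattern (CAP) sketch: cluster fMRI volumes into brain
# states, then compute fractional occupancies and a transition-probability matrix.
# The data, parcel count, and number of states are all toy values.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frames = rng.standard_normal((600, 90))    # 600 volumes x 90 parcels (simulated)

k = 5
states = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(frames)

occupancy = np.bincount(states, minlength=k) / states.size   # fraction of time in each CAP

# P[i, j] = probability of moving from state i to state j on the next volume
P = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P = P / np.maximum(P.sum(axis=1, keepdims=True), 1)          # guard against empty rows

print("occupancies:", np.round(occupancy, 2))
print("transition probabilities:\n", np.round(P, 2))
```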
|
20
|
Whitton S, Kim JM, Scurry AN, Otto S, Zhuang X, Cordes D, Jiang F. Multisensory temporal processing in early deaf. Neuropsychologia 2021; 163:108069. [PMID: 34715119 PMCID: PMC8653765 DOI: 10.1016/j.neuropsychologia.2021.108069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 08/01/2021] [Accepted: 10/21/2021] [Indexed: 10/20/2022]
Abstract
Navigating the world relies on understanding progressive sequences of multisensory events across time. Early deaf (ED) individuals are more precise in visual detection of space and motion than their normal hearing (NH) counterparts. However, whether ED individuals show altered multisensory temporal processing abilities is less clear. According to the connectome model, brain development depends on experience, and therefore the lack of audition may affect how the brain responds to remaining senses and how they are functionally connected. We used a temporal order judgment (TOJ) task to examine multisensory (visuotactile) temporal processing in ED and NH groups. We quantified BOLD responses and functional connectivity (FC) in both groups. ED and NH groups performed similarly for the visuotactile TOJ task. Bilateral posterior superior temporal sulcus (pSTS) BOLD responses during the TOJ task were significantly larger in the ED group than in NH. Using anatomically defined pSTS seeds, our FC analysis revealed stronger somatomotor and weaker visual regional connections in the ED group than in NH during the TOJ task. These results suggest that a lack of auditory input might alter the balance of tactile and visual area FC with pSTS when a multisensory temporal task is involved.
Affiliation(s)
- Simon Whitton
- Department of Psychology, University of Nevada, Reno, USA.
| | - Jung Min Kim
- Department of Psychology, University of Nevada, Reno, USA
| | | | - Stephanie Otto
- Department of Psychology, University of Nevada, Reno, USA
| | - Xiaowei Zhuang
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, USA
| | - Dietmar Cordes
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, USA
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, USA
| |
|
21
|
Smith AT. Cortical visual area CSv as a cingulate motor area: a sensorimotor interface for the control of locomotion. Brain Struct Funct 2021; 226:2931-2950. [PMID: 34240236 PMCID: PMC8541968 DOI: 10.1007/s00429-021-02325-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 06/17/2021] [Indexed: 12/26/2022]
Abstract
The response properties, connectivity and function of the cingulate sulcus visual area (CSv) are reviewed. Cortical area CSv has been identified in both human and macaque brains. It has similar response properties and connectivity in the two species. It is situated bilaterally in the cingulate sulcus close to an established group of medial motor/premotor areas. It has strong connectivity with these areas, particularly the cingulate motor areas and the supplementary motor area, suggesting that it is involved in motor control. CSv is active during visual stimulation but only if that stimulation is indicative of self-motion. It is also active during vestibular stimulation and connectivity data suggest that it receives proprioceptive input. Connectivity with topographically organized somatosensory and motor regions strongly emphasizes the legs over the arms. Together these properties suggest that CSv provides a key interface between the sensory and motor systems in the control of locomotion. It is likely that its role involves online control and adjustment of ongoing locomotory movements, including obstacle avoidance and maintaining the intended trajectory. It is proposed that CSv is best seen as part of the cingulate motor complex. In the human case, a modification of the influential scheme of Picard and Strick (Picard and Strick, Cereb Cortex 6:342-353, 1996) is proposed to reflect this.
Affiliation(s)
- Andrew T Smith
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, UK.
| |
|
22
|
Vastano R, Costantini M, Widerstrom-Noga E. Maladaptive reorganization following SCI: The role of body representation and multisensory integration. Prog Neurobiol 2021; 208:102179. [PMID: 34600947 DOI: 10.1016/j.pneurobio.2021.102179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 09/08/2021] [Accepted: 09/24/2021] [Indexed: 10/20/2022]
Abstract
In this review we focus on maladaptive brain reorganization after spinal cord injury (SCI), including the development of neuropathic pain, and its relationship with impairments in body representation and multisensory integration. We will discuss the implications of altered sensorimotor interactions after SCI with and without neuropathic pain and possible deficits in multisensory integration and body representation. Within this framework we will examine published research findings focused on the use of bodily illusions to manipulate multisensory body representation to induce analgesic effects in heterogeneous chronic pain populations and in SCI-related neuropathic pain. We propose that the development and intensification of neuropathic pain after SCI is partly dependent on brain reorganization associated with dysfunctional multisensory integration processes and distorted body representation. We conclude this review by suggesting future research avenues that may lead to a better understanding of the complex mechanisms underlying the sense of the body after SCI, with a focus on cortical changes.
Affiliation(s)
- Roberta Vastano
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
| | - Marcello Costantini
- Department of Psychological, Health and Territorial Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy; Institute for Advanced Biomedical Technologies, ITAB, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy.
| | - Eva Widerstrom-Noga
- University of Miami, Department of Neurological Surgery, The Miami Project to Cure Paralysis, Miami, FL, USA.
| |
|
23
|
Rezaul Karim AKM, Proulx MJ, de Sousa AA, Likova LT. Neuroplasticity and Crossmodal Connectivity in the Normal, Healthy Brain. PSYCHOLOGY & NEUROSCIENCE 2021; 14:298-334. [PMID: 36937077 PMCID: PMC10019101 DOI: 10.1037/pne0000258] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Objective Neuroplasticity enables the brain to establish new crossmodal connections or reorganize old connections which are essential to perceiving a multisensorial world. The intent of this review is to identify and summarize the current developments in neuroplasticity and crossmodal connectivity, and deepen understanding of how crossmodal connectivity develops in the normal, healthy brain, highlighting novel perspectives about the principles that guide this connectivity. Methods To the above end, a narrative review is carried out. The data documented in prior relevant studies in neuroscience, psychology and other related fields available in a wide range of prominent electronic databases are critically assessed, synthesized, interpreted with qualitative rather than quantitative elements, and linked together to form new propositions and hypotheses about neuroplasticity and crossmodal connectivity. Results Three major themes are identified. First, it appears that neuroplasticity operates by following eight fundamental principles and crossmodal integration operates by following three principles. Second, two different forms of crossmodal connectivity, namely direct crossmodal connectivity and indirect crossmodal connectivity, are suggested to operate in both unisensory and multisensory perception. Third, three principles possibly guide the development of crossmodal connectivity into adulthood. These are labeled as the principle of innate crossmodality, the principle of evolution-driven 'neuromodular' reorganization and the principle of multimodal experience. These principles are combined to develop a three-factor interaction model of crossmodal connectivity. Conclusions The hypothesized principles and the proposed model together advance understanding of neuroplasticity, the nature of crossmodal connectivity, and how such connectivity develops in the normal, healthy brain.
|
24
|
Zhao B, Zhang Y, Chen A. Encoding of vestibular and optic flow cues to self-motion in the posterior superior temporal polysensory area. J Physiol 2021; 599:3937-3954. [PMID: 34192812 DOI: 10.1113/jp281913] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 06/28/2021] [Indexed: 11/08/2022] Open
Abstract
KEY POINTS Neurons in the posterior superior temporal polysensory area (STPp) showed significant directional selectivity in response to vestibular, optic flow and combined visual-vestibular stimuli. By comparison to the dorsal medial superior temporal area, the visual latency was slower in STPp but the vestibular latency was faster. Heading preferences under combined stimulation in STPp were usually dominated by visual signals. Cross-modal enhancement was observed in STPp when both vestibular and visual cues were presented together at their heading preferences. ABSTRACT Human neuroimaging data have suggested that the superior temporal polysensory area (STP) might be involved in vestibular-visual interaction during heading computations, but heading selectivity has not been examined in the macaque. Here, we investigated the convergence of optic flow and vestibular signals in macaque STP by using a virtual-reality system and found that 6.3% of STP neurons showed multisensory responses, with visual and vestibular direction preferences either congruent or opposite in roughly equal proportion. The percentage of vestibular-tuned cells (18.3%) was much smaller than that of visual-tuned cells (30.4%) in STP. Vestibular tuning strength was usually weaker than visual tuning strength. The visual latency was significantly slower in STPp than in the dorsal medial superior temporal area (MSTd), but the vestibular latency was significantly faster than in MSTd. During the bimodal condition, STP cells' responses were dominated by visual signals, with the visual heading preference not affected by the vestibular signals but the response amplitudes modulated by vestibular signals in a subadditive way.
Affiliation(s)
- Bin Zhao
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
| | - Yi Zhang
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
| | - Aihua Chen
- Ministry of Education, Key Laboratory of Brain Functional Genomics (East China Normal University), Shanghai, 200062, China
| |
|
25
|
Van Dyck D, Deconinck N, Aeby A, Baijot S, Coquelet N, Trotta N, Rovai A, Goldman S, Urbain C, Wens V, De Tiège X. Resting-state functional brain connectivity is related to subsequent procedural learning skills in school-aged children. Neuroimage 2021; 240:118368. [PMID: 34242786 DOI: 10.1016/j.neuroimage.2021.118368] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 06/30/2021] [Accepted: 07/05/2021] [Indexed: 10/20/2022] Open
Abstract
This magnetoencephalography (MEG) study investigates how procedural sequence learning performance is related to prior brain resting-state functional connectivity (rsFC), and to what extent sequence learning induces rapid changes in brain rsFC in school-aged children. Procedural learning was assessed in 30 typically developing children (mean age ± SD: 9.99 years ± 1.35) using a serial reaction time task (SRTT). During SRTT, participants touched as quickly and accurately as possible a stimulus sequentially or randomly appearing in one of the quadrants of a touchscreen. Band-limited power envelope correlation (brain rsFC) was applied to MEG data acquired at rest pre- and post-learning. Correlation analyses were performed between brain rsFC and sequence-specific learning or response time indices. Stronger pre-learning interhemispheric rsFC between inferior parietal and primary somatosensory/motor areas correlated with better subsequent sequence learning performance and faster visuomotor response time. Faster response time was associated with post-learning decreased rsFC within the dorsal extra-striate visual stream and increased rsFC between temporo-cerebellar regions. In school-aged children, variations in functional brain architecture at rest within the sensorimotor network account for interindividual differences in sequence learning and visuomotor performance. After learning, rapid adjustments in functional brain architecture are associated with visuomotor performance but not sequence learning skills.
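As a rough illustration of the band-limited power envelope correlation metric named above (and only that: the band edges, sampling rate, and signals below are invented, and source reconstruction and leakage correction are omitted), connectivity between two band-limited time courses can be computed as the correlation of their Hilbert amplitude envelopes:

```python
# Minimal sketch of band-limited power envelope correlation between two signals.
# Band, sampling rate, and the toy oscillations are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_correlation(x, y, fs, band=(8.0, 12.0)):
    """Correlate the amplitude envelopes of two signals within one frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_x = np.abs(hilbert(filtfilt(b, a, x)))   # band-limited power envelope of x
    env_y = np.abs(hilbert(filtfilt(b, a, y)))   # band-limited power envelope of y
    return np.corrcoef(env_x, env_y)[0, 1]

fs = 200.0
t = np.arange(0, 60, 1 / fs)
shared_mod = 1 + 0.5 * np.sin(2 * np.pi * 0.1 * t)           # shared slow amplitude modulation
rng = np.random.default_rng(0)
x = shared_mod * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = shared_mod * np.sin(2 * np.pi * 10 * t + 1.0) + 0.5 * rng.standard_normal(t.size)
print(f"alpha-band envelope correlation: {envelope_correlation(x, y, fs):.2f}")
```

Across subjects, such connection-wise values could then be correlated with behavioral indices such as sequence-specific learning scores.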
Affiliation(s)
- Dorine Van Dyck
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Neurology, Hôpital Universitaire des Enfants Reine Fabiola (HUDERF), Université libre de Bruxelles (ULB), Brussels, Belgium.
| | - Nicolas Deconinck
- Department of Neurology, Hôpital Universitaire des Enfants Reine Fabiola (HUDERF), Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Alec Aeby
- Department of Neurology, Hôpital Universitaire des Enfants Reine Fabiola (HUDERF), Université libre de Bruxelles (ULB), Brussels, Belgium; Neuropsychology and Functional Neuroimaging Research Unit (UR2NF), Center for Research in Cognition and Neurosciences (CRCN) and ULB Neurosciences Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Simon Baijot
- Department of Neurology, Hôpital Universitaire des Enfants Reine Fabiola (HUDERF), Université libre de Bruxelles (ULB), Brussels, Belgium; Neuropsychology and Functional Neuroimaging Research Unit (UR2NF), Center for Research in Cognition and Neurosciences (CRCN) and ULB Neurosciences Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Nicolas Coquelet
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Nicola Trotta
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Antonin Rovai
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Serge Goldman
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Charline Urbain
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Neuropsychology and Functional Neuroimaging Research Unit (UR2NF), Center for Research in Cognition and Neurosciences (CRCN) and ULB Neurosciences Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Vincent Wens
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| | - Xavier De Tiège
- Laboratoire de Cartographie fonctionnelle du Cerveau (LCFC), ULB Neuroscience Institute (UNI), Université libre de Bruxelles (ULB), Brussels, Belgium; Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
| |
|
26
|
Zhou Q, Song P, Wang X, Lin H, Wang Y. Transcranial Magnetic Stimulation Over the Right Posterior Superior Temporal Sulcus Promotes the Feature Discrimination Processing. Front Hum Neurosci 2021; 15:663789. [PMID: 34220471 PMCID: PMC8253362 DOI: 10.3389/fnhum.2021.663789] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Accepted: 05/17/2021] [Indexed: 11/13/2022] Open
Abstract
Attention is the dynamic process of allocating limited resources to the information that is most relevant to our goals. Accumulating studies have demonstrated the crucial role of frontal and parietal areas in attention. However, the effect of posterior superior temporal sulcus (pSTS) in attention is still unclear. To address this question, in this study, we measured transcranial magnetic stimulation (TMS)-induced event-related potentials (ERPs) to determine the extent of involvement of the right pSTS in attentional processing. We hypothesized that TMS would enhance the activation of the right pSTS during feature discrimination processing. We recruited 21 healthy subjects who performed the dual-feature delayed matching task while undergoing single-pulse sham or real TMS to the right pSTS 300 ms before the second stimulus onset. The results showed that the response time was reduced by real TMS of the pSTS as compared to sham stimulation. N270 amplitude was reduced during conflict processing, and the time-varying network analysis revealed increased connectivity between the frontal lobe and temporo-parietal and occipital regions. Thus, single-pulse TMS of the right pSTS enhances feature discrimination processing and task performance by reducing N270 amplitude and increasing connections between the frontal pole and temporo-parietal and occipital regions. These findings provide evidence that the right pSTS facilitates feature discrimination by accelerating the formation of a dynamic network.
Affiliation(s)
- Qihui Zhou
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Penghui Song
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Xueming Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Hua Lin
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Yuping Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China.,Collaborative Innovation Center for Brain Disorders, Institute of Sleep and Consciousness Disorders, Beijing Institute of Brain Disorders, Capital Medical University, Beijing, China.,Beijing Key Laboratory of Neuromodulation, Beijing Municipal Science and Technology Commission, Beijing, China
| |
|
27
|
Pitzalis S, Hadj-Bouziane F, Dal Bò G, Guedj C, Strappini F, Meunier M, Farnè A, Fattori P, Galletti C. Optic flow selectivity in the macaque parieto-occipital sulcus. Brain Struct Funct 2021; 226:2911-2930. [PMID: 34043075 DOI: 10.1007/s00429-021-02293-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Accepted: 05/08/2021] [Indexed: 01/16/2023]
Abstract
In humans, several neuroimaging studies have demonstrated that passive viewing of optic flow stimuli activates higher-level motion areas, like V6 and the cingulate sulcus visual area (CSv). In the macaque, there are few studies on the sensitivity of V6 and CSv to egomotion-compatible optic flow. The only fMRI study on this issue revealed selectivity to egomotion-compatible optic flow in macaque CSv but not in V6 (Cottereau et al. Cereb Cortex 27(1):330-343, 2017, but see Fan et al. J Neurosci. 35:16303-16314, 2015). Yet, it is unknown whether the monkey visual motion areas MT+ and V6 display any distinctive fMRI functional profile relative to optic flow stimulation, as is the case for the homologous human areas (Pitzalis et al., Cereb Cortex 20(2):411-424, 2010). Here, we describe the sensitivity of the monkey brain to two motion stimuli (radial rings and flow fields) originally used in humans to functionally map the motion middle temporal area MT+ (Tootell et al. J Neurosci 15:3215-3230, 1995a; Nature 375:139-141, 1995b) and the motion medial parietal area V6 (Pitzalis et al. 2010), respectively. In both animals, we found regions responding only to optic flow or radial rings stimulation, and regions responding to both stimuli. A region in the parieto-occipital sulcus (likely including V6) was one of the most highly selective areas for coherently moving fields of dots, further demonstrating the power of this type of stimulation to activate V6 in both humans and monkeys. We did not find any evidence that putative macaque CSv responds to flow fields.
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome ''Foro Italico'', Rome, Italy. .,Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy.
| | - Fadila Hadj-Bouziane
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France.,University of Lyon 1, Lyon, France
| | - Giulia Dal Bò
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Carole Guedj
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France.,University of Lyon 1, Lyon, France
| | | | - Martine Meunier
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France.,University of Lyon 1, Lyon, France
| | - Alessandro Farnè
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France.,University of Lyon 1, Lyon, France
| | - Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Claudio Galletti
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| |
|
28
|
Meijer LL, Ruis C, van der Smagt MJ, Scherder EJA, Dijkerman HC. Neural basis of affective touch and pain: A novel model suggests possible targets for pain amelioration. J Neuropsychol 2021; 16:38-53. [PMID: 33979481 PMCID: PMC9290016 DOI: 10.1111/jnp.12250] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 04/09/2021] [Indexed: 01/03/2023]
Abstract
Pain is one of the most common health problems and has a severe impact on quality of life. Yet, a suitable and efficient treatment is still not available for all patient populations suffering from pain. Interestingly, recent research shows that low threshold mechanosensory C‐tactile (CT) fibres have a modulatory influence on pain. CT‐fibres are activated by slow gentle stroking of the hairy skin, providing a pleasant sensation. Consequently, slow gentle stroking is known as affective touch. Currently, a clear overview of the way affective touch modulates pain, at a neural level, is missing. This review aims to present such an overview. To explain the interaction between affective touch and pain, first the neural basis of the affective touch system and the neural processing of pain will be described. To clarify these systems, a schematic illustration will be provided in every section. Hereafter, a novel model of interactions between affective touch and pain systems will be introduced. Finally, since affective touch might be suitable as a new treatment for chronic pain, possible clinical implications will be discussed.
Affiliation(s)
| | - Carla Ruis
- Utrecht University, The Netherlands.,University Medical Centre Utrecht, The Netherlands
| | | | | | | |
|
29
|
Beauchamp MS. Face and voice perception: Monkey see, monkey hear. Curr Biol 2021; 31:R435-R437. [PMID: 33974868 DOI: 10.1016/j.cub.2021.02.060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Primate brains contain specialized areas for perceiving social cues. New research shows that only some of these areas integrate visual faces with auditory voices.
Affiliation(s)
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA.
| |
|
30
|
Overlapping but distinct: Distal connectivity dissociates hand and tool processing networks. Cortex 2021; 140:1-13. [PMID: 33901719 DOI: 10.1016/j.cortex.2021.03.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Revised: 01/18/2021] [Accepted: 03/04/2021] [Indexed: 12/31/2022]
Abstract
The processes and organizational principles of information involved in object recognition have been a subject of intense debate. These research efforts led to the understanding that local computations and feedforward/feedback connections are essential to our representations and their organization. Recent data, however, has demonstrated that distal computations also play a role in how information is locally processed. Here we focus on how long-range connectivity and local functional organization of information are related, by exploring regions that show overlapping category-preferences for two categories and testing whether their connections are related with distal representations in a category-specific way. We used an approach that relates functional connectivity with distal areas to local voxel-wise category-preferences. Specifically, we focused on two areas that show an overlap in category-preferences for tools and hands-the inferior parietal lobule/anterior intraparietal sulcus (IPL/aIPS) and the posterior middle temporal gyrus/lateral occipital temporal cortex (pMTG/LOTC) - and how connectivity from these two areas relate to voxel-wise category-preferences in two ventral temporal regions dedicated to the processing of tools and hands separately-the left medial fusiform gyrus and the fusiform body area respectively-as well as across the brain. We show that the functional connections of the two overlap areas correlate with categorical preferences for each category independently. These results show that regions that process both tools and hands maintain object topography in a category-specific way. This potentially allows for a category-specific flow of information that is pertinent to computing object representations.
|
31
|
Xiao K, Gao Y, Imran SA, Chowdhury S, Commuri S, Jiang F. Cross-modal motion aftereffects transfer between vision and touch in early deaf adults. Sci Rep 2021; 11:4395. [PMID: 33623083 PMCID: PMC7902672 DOI: 10.1038/s41598-021-83960-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2019] [Accepted: 12/29/2020] [Indexed: 11/23/2022] Open
Abstract
Previous research on early deafness has primarily focused on the behavioral and neural changes in the intact visual and tactile modalities. However, how early deafness changes the interplay of these two modalities is not well understood. In the current study, we investigated the effect of auditory deprivation on visuo-tactile interaction by measuring the cross-modal motion aftereffect. Consistent with previous findings, motion aftereffect transferred between vision and touch in a bidirectional manner in hearing participants. However, for deaf participants, the cross-modal transfer occurred only in the tactile-to-visual direction but not in the visual-to-tactile direction. This unidirectional cross-modal motion aftereffect found in the deaf participants could not be explained by unisensory motion aftereffect or discrimination threshold. The results suggest a reduced visual influence on tactile motion perception in early deaf individuals.
Affiliation(s)
- Kunchen Xiao
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, 610066, Sichuan Province, China.
- Department of Psychology, University of Nevada, Reno, NV, 89557-0296, USA.
| | - Yi Gao
- Department of Psychology, University of Nevada, Reno, NV, 89557-0296, USA
| | - Syed Asif Imran
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV, 89557-0260, USA
| | - Shahida Chowdhury
- Department of Psychology, University of Nevada, Reno, NV, 89557-0296, USA
| | - Sesh Commuri
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV, 89557-0260, USA
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, NV, 89557-0296, USA.
| |
|
32
|
Dzięgiel-Fivet G, Plewko J, Szczerbiński M, Marchewka A, Szwed M, Jednoróg K. Neural network for Braille reading and the speech-reading convergence in the blind: Similarities and differences to visual reading. Neuroimage 2021; 231:117851. [PMID: 33582273 DOI: 10.1016/j.neuroimage.2021.117851] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 02/04/2021] [Accepted: 02/05/2021] [Indexed: 10/22/2022] Open
Abstract
All writing systems represent units of spoken language. Studies on the neural correlates of reading in different languages show that this skill relies on access to brain areas dedicated to speech processing. Speech-reading convergence onto a common perisylvian network is therefore considered universal among different writing systems. Using fMRI, we test whether this holds true also for tactile Braille reading in the blind. The neural networks for Braille and visual reading overlapped in the left ventral occipitotemporal (vOT) cortex. Even though we showed similar perisylvian specialization for speech in both groups, blind subjects did not engage this speech system for reading. In contrast to the sighted, speech-reading convergence in the blind was absent in the perisylvian network. Instead, the blind engaged vOT not only in reading but also in speech processing. The involvement of the vOT in speech processing and its engagement in reading in the blind suggests that vOT is included in a modality independent language network in the blind, also evidenced by functional connectivity results. The analysis of individual speech-reading convergence suggests that there may be segregated neuronal populations in the vOT for speech processing and reading in the blind.
Affiliation(s)
- Gabriela Dzięgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| | - Joanna Plewko
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | | | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Marcin Szwed
- Department of Psychology, Jagiellonian University, Cracow, Poland
| | - Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| |
|
33
|
Schäfer S, Wesslein AK, Spence C, Frings C. When self-prioritization crosses the senses: Crossmodal self-prioritization demonstrated between vision and touch. Br J Psychol 2020; 112:573-584. [PMID: 33275296 DOI: 10.1111/bjop.12483] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 11/08/2020] [Indexed: 12/21/2022]
Abstract
The investigation of self-prioritization via a simple matching paradigm represents a new way of enhancing our knowledge about the processing of self-relevant content and also increases our understanding of the self-concept itself. By associating formerly neutral material with the self, and assessing the resulting prioritization of these newly formed self-associations, conclusions can be drawn concerning the effects of self-relevance without the burden of highly overlearned materials such as one's own name. This approach was used to gain further insights into the structure and complexity of self-associations: a tactile pattern was associated with the self and thereafter, the prioritization of the exact same visual pattern was assessed - enabling the investigation of crossmodal self-associations. The results demonstrate a prioritization of self-associated material that rapidly extends beyond the borders of a sensory modality in which it was first established.
|
34
|
Processing communicative facial and vocal cues in the superior temporal sulcus. Neuroimage 2020; 221:117191. [PMID: 32711066 DOI: 10.1016/j.neuroimage.2020.117191] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Revised: 07/14/2020] [Accepted: 07/19/2020] [Indexed: 11/20/2022] Open
Abstract
Facial and vocal cues provide critical social information about other humans, including their emotional and attentional states and the content of their speech. Recent work has shown that the face-responsive region of posterior superior temporal sulcus ("fSTS") also responds strongly to vocal sounds. Here, we investigate the functional role of this region and the broader STS by measuring responses to a range of face movements, vocal sounds, and hand movements using fMRI. We find that the fSTS responds broadly to different types of audio and visual face action, including both richly social communicative actions, as well as minimally social noncommunicative actions, ruling out hypotheses of specialization for processing speech signals, or communicative signals more generally. Strikingly, however, responses to hand movements were very low, whether communicative or not, indicating a specific role in the analysis of face actions (facial and vocal), not a general role in the perception of any human action. Furthermore, spatial patterns of response in this region were able to decode communicative from noncommunicative face actions, both within and across modality (facial/vocal cues), indicating sensitivity to an abstract social dimension. These functional properties of the fSTS contrast with a region of middle STS that has a selective, largely unimodal auditory response to speech sounds over both communicative and noncommunicative vocal nonspeech sounds, and nonvocal sounds. Region of interest analyses were corroborated by a data-driven independent component analysis, identifying face-voice and auditory speech responses as dominant sources of voxelwise variance across the STS. These results suggest that the STS contains separate processing streams for the audiovisual analysis of face actions and auditory speech processing.
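The cross-modal decoding result described here (classifying communicative versus noncommunicative face actions and testing across modality) can be illustrated with a simple pattern classifier. The sketch below uses simulated voxel patterns and a linear SVM; the classifier choice, dimensions, and data are assumptions, not the study's actual analysis.

```python
# Minimal cross-modal decoding sketch: train on patterns from one modality
# (e.g., facial cues), test on the other (e.g., vocal cues). All data are simulated.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 200
communicative_pattern = rng.standard_normal(n_voxels)   # pattern assumed shared across modalities

def simulate(modality_offset):
    X, y = [], []
    for label in (0, 1):                                  # 0 = noncommunicative, 1 = communicative
        trials = (label * communicative_pattern + modality_offset
                  + rng.standard_normal((n_trials, n_voxels)))
        X.append(trials)
        y.extend([label] * n_trials)
    return np.vstack(X), np.array(y)

X_facial, y_facial = simulate(modality_offset=0.0)
X_vocal, y_vocal = simulate(modality_offset=0.3)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_facial, y_facial)
print("cross-modal decoding accuracy:", clf.score(X_vocal, y_vocal))
```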
|
35
|
Adversarial brain multiplex prediction from a single brain network with application to gender fingerprinting. Med Image Anal 2020; 67:101843. [PMID: 33129149 DOI: 10.1016/j.media.2020.101843] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Revised: 07/25/2020] [Accepted: 09/07/2020] [Indexed: 11/22/2022]
Abstract
Brain connectivity networks, derived from magnetic resonance imaging (MRI), non-invasively quantify the relationship in function, structure, and morphology between two brain regions of interest (ROIs) and give insights into gender-related connectional differences. However, to the best of our knowledge, studies on gender differences in brain connectivity were limited to investigating pairwise (i.e., low-order) relationships across ROIs, overlooking the complex high-order interconnectedness of the brain as a network. A few recent works on neurological disorders addressed this limitation by introducing the brain multiplex which is composed of a source network intra-layer, a target intra-layer, and a convolutional interlayer capturing the high-level relationship between both intra-layers. However, brain multiplexes are built from at least two different brain networks hindering their application to connectomic datasets with single brain networks (e.g., functional networks). To fill this gap, we propose Adversarial Brain Multiplex Translator (ABMT), the first work for predicting brain multiplexes from a source network using geometric adversarial learning to investigate gender differences in the human brain. Our framework comprises: (i) a geometric source to target network translator mimicking a U-Net architecture with skip connections, (ii) a conditional discriminator which distinguishes between predicted and ground truth target intra-layers, and finally (iii) a multi-layer perceptron (MLP) classifier which supervises the prediction of the target multiplex using the subject class label (e.g., gender). Our experiments on a large dataset demonstrated that predicted multiplexes significantly boost gender classification accuracy compared with source networks and unprecedentedly identify both low and high-order gender-specific brain multiplex connections. Our ABMT source code is available on GitHub at https://github.com/basiralab/ABMT.
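The abstract names three components: a U-Net-like translator with skip connections, a conditional discriminator, and an MLP label classifier. The sketch below is a heavily simplified, hypothetical rendering of that three-part design on vectorized connectivity features; it omits the geometric/graph machinery of ABMT, and all layer sizes and loss terms are invented (the authors' actual code is at https://github.com/basiralab/ABMT).

```python
# Simplified sketch of a translator-GAN with an auxiliary label classifier,
# loosely following the three components named in the abstract. Dimensions,
# architectures, and the single training step are illustrative only.
import torch
import torch.nn as nn

N_FEATS = 595  # e.g., upper triangle of a 35 x 35 connectivity matrix (assumed size)

class Translator(nn.Module):
    """Maps a source connectivity vector to a predicted target vector (U-Net-like)."""
    def __init__(self, d=N_FEATS, h=256):
        super().__init__()
        self.enc1, self.enc2 = nn.Linear(d, h), nn.Linear(h, h // 2)
        self.dec1, self.dec2 = nn.Linear(h // 2, h), nn.Linear(h, d)
        self.act = nn.ReLU()
    def forward(self, x):
        e1 = self.act(self.enc1(x))
        e2 = self.act(self.enc2(e1))
        d1 = self.act(self.dec1(e2)) + e1      # skip connection
        return self.dec2(d1)

class Discriminator(nn.Module):
    """Scores a target vector as real or predicted, conditioned on its source."""
    def __init__(self, d=N_FEATS, h=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, h), nn.ReLU(), nn.Linear(h, 1))
    def forward(self, source, target):
        return self.net(torch.cat([source, target], dim=1))

class LabelClassifier(nn.Module):
    """MLP that supervises the prediction with a subject label (e.g., gender)."""
    def __init__(self, d=N_FEATS, h=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, 2))
    def forward(self, target):
        return self.net(target)

# One illustrative generator update on random data
src, tgt = torch.randn(8, N_FEATS), torch.randn(8, N_FEATS)
labels = torch.randint(0, 2, (8,))
G, D, C = Translator(), Discriminator(), LabelClassifier()
pred = G(src)
g_loss = (nn.functional.l1_loss(pred, tgt)                               # reconstruction
          + nn.functional.binary_cross_entropy_with_logits(
                D(src, pred), torch.ones(8, 1))                          # fool the discriminator
          + nn.functional.cross_entropy(C(pred), labels))                # label supervision
g_loss.backward()
print("generator loss:", float(g_loss))
```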
|
36
|
Scurry AN, Huber E, Matera C, Jiang F. Increased Right Posterior STS Recruitment Without Enhanced Directional-Tuning During Tactile Motion Processing in Early Deaf Individuals. Front Neurosci 2020; 14:864. [PMID: 32982667 PMCID: PMC7477335 DOI: 10.3389/fnins.2020.00864] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 07/24/2020] [Indexed: 01/19/2023] Open
Abstract
Upon early sensory deprivation, the remaining modalities often exhibit cross-modal reorganization, such as primary auditory cortex (PAC) recruitment for visual motion processing in early deafness (ED). Previous studies of compensatory plasticity in ED individuals have given less attention to tactile motion processing. In the current study, we aimed to examine the effects of early auditory deprivation on tactile motion processing. We simulated four directions of tactile motion on each participant's right index finger and characterized their tactile motion responses and directional-tuning profiles using population receptive field analysis. Similar tactile motion responses were found within primary (SI) and secondary (SII) somatosensory cortices between ED and hearing control groups, whereas ED individuals showed a reduced proportion of voxels with directionally tuned responses in SI contralateral to stimulation. There were also significant but minimal responses to tactile motion within PAC for both groups. While early deaf individuals show significantly larger recruitment of right posterior superior temporal sulcus (pSTS) region upon tactile motion stimulation, there was no evidence of enhanced directional tuning. Greater recruitment of right pSTS region is consistent with prior studies reporting reorganization of multimodal areas due to sensory deprivation. The absence of increased directional tuning within the right pSTS region may suggest a more distributed population of neurons dedicated to processing tactile spatial information as a consequence of early auditory deprivation.
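The directional-tuning profiles mentioned here were characterized with population receptive field modelling; as a much simpler stand-in (not the authors' method), a vector-averaging index over the four stimulated directions conveys the same idea of how strongly a response is tuned. The response values below are invented.

```python
# Minimal sketch of a direction-tuning index from responses to four motion
# directions, using vector averaging. Response amplitudes are toy values.
import numpy as np

directions = np.deg2rad([0.0, 90.0, 180.0, 270.0])   # stimulated tactile motion directions
responses = np.array([1.8, 0.9, 0.4, 0.8])           # simulated response amplitude per direction

vector_sum = np.sum(responses * np.exp(1j * directions))
preferred = np.rad2deg(np.angle(vector_sum)) % 360    # vector-average preferred direction
tuning_index = np.abs(vector_sum) / responses.sum()   # 0 = untuned, 1 = single-direction response

print(f"preferred direction: {preferred:.0f} deg, tuning index: {tuning_index:.2f}")
```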
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Elizabeth Huber
- Department of Speech and Hearing Sciences, Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, United States
| | - Courtney Matera
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| | - Fang Jiang
- Department of Psychology, University of Nevada, Reno, Reno, NV, United States
| |
|
37
|
Progin P, Faivre N, Brooks A, Chang W, Mercier M, Schwabe L, Do KQ, Blanke O. Somatosensory-visual effects in visual biological motion perception. PLoS One 2020; 15:e0234026. [PMID: 32525897 PMCID: PMC7289375 DOI: 10.1371/journal.pone.0234026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Accepted: 05/19/2020] [Indexed: 11/18/2022] Open
Abstract
Social cognition is dependent on the ability to extract information from human stimuli. Of those, patterns of biological motion (BM) and in particular walking patterns of other humans, are prime examples. Although most often tested in isolation, BM outside the laboratory is often associated with multisensory cues (i.e. we often hear and see someone walking) and there is evidence that vision-based judgments of BM stimuli are systematically influenced by motor signals. Furthermore, cross-modal visuo-tactile mechanisms have been shown to influence perception of bodily stimuli. Based on these observations, we here investigated if somatosensory inputs would affect visual BM perception. In two experiments, we asked healthy participants to perform a speed discrimination task on two point light walkers (PLW) presented one after the other. In the first experiment, we quantified somatosensory-visual interactions by presenting PLW together with tactile stimuli either on the participants' forearms or feet soles. In the second experiment, we assessed the specificity of these interactions by presenting tactile stimuli either synchronously or asynchronously with upright or inverted PLW. Our results confirm that somatosensory input in the form of tactile foot stimulation influences visual BM perception. When presented with a seen walker's footsteps, additional tactile cues enhanced sensitivity on a speed discrimination task, but only if the tactile stimuli were presented on the relevant body-part (under the feet) and when the tactile stimuli were presented synchronously with the seen footsteps of the PLW, whether upright or inverted. Based on these findings we discuss potential mechanisms of somatosensory-visual interactions in BM perception.
Affiliation(s)
- Pierre Progin
- Department of Psychiatry, Service of General Psychiatry, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology, Geneva, Switzerland
- Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology, Geneva, Switzerland
| | - Nathan Faivre
- Department of Psychiatry, Service of General Psychiatry, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology, Geneva, Switzerland
- CNRS, LPNC UMR 5105, Université Grenoble Alpes, Grenoble, France
| | - Anna Brooks
- Lifeline Research Foundation, Lifeline Australia, Deakin ACT, Australia
- School of Health and Human Sciences, Southern Cross University, Lismore NSW, Australia
| | - Wenwen Chang
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang, China
| | - Manuel Mercier
- Institut de Neurosciences des Systèmes (INS), Inserm (U1106), Aix Marseille University, Marseille, France
| | - Lars Schwabe
- Data Analytics, Artificial Intelligence and Blockchain, Lufthansa Industry Solutions AS, Norderstedt, Germany
| | - Kim Q. Do
- Department of Psychiatry, Center for Psychiatric Neuroscience, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- National Center of Competence in Research (NCCR) "SYNAPSY—The Synaptic Bases of Mental Diseases", Lausanne, Switzerland
| | - Olaf Blanke
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology, Geneva, Switzerland
- Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology, Geneva, Switzerland
- National Center of Competence in Research (NCCR) "SYNAPSY—The Synaptic Bases of Mental Diseases", Lausanne, Switzerland
- Department of Neurology, University Hospital Geneva, Geneva, Switzerland
| |
|
38
|
Crossmodal reorganisation in deafness: Mechanisms for functional preservation and functional change. Neurosci Biobehav Rev 2020; 113:227-237. [DOI: 10.1016/j.neubiorev.2020.03.019] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 01/29/2020] [Accepted: 03/16/2020] [Indexed: 11/23/2022]
|
39
|
Pujol J, Blanco-Hinojo L, Martínez-Vilavella G, Canu-Martín L, Pujol A, Pérez-Sola V, Deus J. Brain activity during traditional textbook and audiovisual-3D learning. Brain Behav 2019; 9:e01427. [PMID: 31571423 PMCID: PMC6790317 DOI: 10.1002/brb3.1427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Revised: 09/09/2019] [Accepted: 09/10/2019] [Indexed: 12/21/2022] Open
Abstract
INTRODUCTION Audiovisual educational tools have increasingly been used during the past years to complement and compete with traditional textbooks. However, little is known as to how the brain processes didactic information presented in different formats. We directly assessed brain activity during learning using both traditional textbook and audiovisual-3D material. METHODS A homogeneous sample of 30 young adults with active study habits was assessed. Educational material on the subject of Cardiology was adapted to be presented during the acquisition of functional MRI. RESULTS When tested after image acquisition, participants obtained similar examination scores for both formats. Evoked brain activity was robust during both traditional textbook and audiovisual-3D lessons, but a greater number of brain systems were implicated in the processing of audiovisual-3D information, consistent with its multisource sensory nature. However, learning was not associated with group mean brain activations, but was instead predicted by distinct functional MRI signal changes in the frontal lobes and showed distinct cognitive correlates. In the audiovisual-3D version, examination scores were positively correlated with late-evoked prefrontal cortex activity and working memory, and negatively correlated with language-related frontal areas and verbal memory. As for the traditional textbook version, the fewer results obtained suggested the opposite pattern, with examination scores negatively correlating with prefrontal cortex activity evoked during the lesson. CONCLUSIONS Overall, the results indicate that a similar level of knowledge may be achieved via different cognitive strategies. In our experiment, audiovisual learning appeared to benefit from prefrontal executive resources (as opposed to memorizing verbal information) more than traditional textbook learning.
Collapse
Affiliation(s)
- Jesus Pujol
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; Centro Investigación Biomédica en Red de Salud Mental, CIBERSAM G21, Barcelona, Spain
| | - Laura Blanco-Hinojo
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; Centro Investigación Biomédica en Red de Salud Mental, CIBERSAM G21, Barcelona, Spain
| | | | - Lucila Canu-Martín
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain
| | - Anna Pujol
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain
| | - Víctor Pérez-Sola
- Centro Investigación Biomédica en Red de Salud Mental, CIBERSAM G21, Barcelona, Spain; Institute of Neuropsychiatry and Addictions, Hospital del Mar, IMIM, Barcelona, Spain
| | - Joan Deus
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; Department of Psychobiology and Methodology in Health Sciences, Autonomous University of Barcelona, Barcelona, Spain
| |
Collapse
|
40
|
Borra E, Luppino G. Large-scale temporo–parieto–frontal networks for motor and cognitive motor functions in the primate brain. Cortex 2019; 118:19-37. [DOI: 10.1016/j.cortex.2018.09.024] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Revised: 09/21/2018] [Accepted: 09/28/2018] [Indexed: 10/28/2022]
|
41
|
Retter TL, Webster MA, Jiang F. Directional Visual Motion Is Represented in the Auditory and Association Cortices of Early Deaf Individuals. J Cogn Neurosci 2019; 31:1126-1140. [PMID: 30726181 PMCID: PMC6599583 DOI: 10.1162/jocn_a_01378] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Individuals who have been deaf since early life may show enhanced performance at some visual tasks, including discrimination of directional motion. The neural substrates of such behavioral enhancements remain difficult to identify in humans, although neural plasticity has been shown in early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and STS region, respectively. Here, we investigated whether neural responses in auditory and association cortices of early deaf individuals are reorganized to be sensitive to directional visual motion. To capture direction-selective responses, we recorded fMRI responses frequency-tagged to the 0.1-Hz presentation of central directional (100% coherent random dot) motion persisting for 2 sec, contrasted with nondirectional (0% coherent) motion for 8 sec. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but significant direction-selective responses were also found in the PAC of deaf participants, both at the group level and in five of six individuals. In response to stimuli presented separately in the right and left visual fields, the relative activation across the right and left hemispheres was similar in both the PAC and STS region of deaf participants. Notably, the enhanced right-hemisphere activation could support the right visual field advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory cortices of early deaf individuals are sensitive to directional motion. Speculatively, these results suggest that auditory and association regions can be remapped to support enhanced visual performance.
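For readers unfamiliar with frequency tagging, the sketch below shows one way a response at the 0.1-Hz stimulation frequency might be estimated from an ROI time course with a Fourier transform. The repetition time, run length, and signal are illustrative assumptions and do not reproduce the study's analysis pipeline.

```python
# Sketch of a frequency-tagging analysis: estimate the response amplitude
# at the stimulation frequency (0.1 Hz) in an ROI time course via FFT.
# TR, run length, and the signal itself are illustrative assumptions.
import numpy as np

tr = 2.0                      # assumed repetition time (s)
n_vols = 300                  # assumed number of volumes (10-min run)
t = np.arange(n_vols) * tr
tag_freq = 0.1                # directional-motion cycle frequency (Hz)

# Simulated ROI time course: tagged oscillation plus noise
rng = np.random.default_rng(1)
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1.0, n_vols)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n_vols, d=tr)
tag_bin = np.argmin(np.abs(freqs - tag_freq))

# Simple SNR: amplitude at the tagged bin vs. mean of neighbouring bins
neighbours = np.r_[tag_bin - 3:tag_bin - 1, tag_bin + 2:tag_bin + 4]
snr = spectrum[tag_bin] / spectrum[neighbours].mean()
print(f"Amplitude at {freqs[tag_bin]:.3f} Hz: {spectrum[tag_bin]:.2f}, SNR = {snr:.2f}")
```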
Collapse
|
42
|
Pramudya RC, Seo HS. Hand-Feel Touch Cues and Their Influences on Consumer Perception and Behavior with Respect to Food Products: A Review. Foods 2019; 8:foods8070259. [PMID: 31311188 PMCID: PMC6678767 DOI: 10.3390/foods8070259] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Revised: 07/05/2019] [Accepted: 07/09/2019] [Indexed: 12/12/2022] Open
Abstract
There has been a great deal of research investigating intrinsic/extrinsic cues and their influences on consumer perception and purchasing decisions at points of sale, product usage, and consumption. Consumers form expectations about a food product through sensory information extracted from its surface (intrinsic cues) or packaging (extrinsic cues) at retail stores. Packaging is one of the important extrinsic cues that can modulate consumer perception, liking, and decision making for a product. For example, handling a product's packaging, even just touching it while opening or holding it during consumption, may shape consumers' expectations of the package content. Although hand-feel touch cues are an integral part of the food consumption experience, as this example illustrates, little is known about their influences on consumer perception, acceptability, and purchase behavior of food products. This review therefore provides a better understanding of hand-feel touch cues and their influences in the context of food and beverage experience, with a focus on (1) an overview of touch as a sensory modality, (2) factors influencing hand-feel perception, (3) influences of hand-feel touch cues on the perception of other sensory modalities, and (4) the effects of hand-feel touch cues on emotional responses and purchase behavior.
Collapse
Affiliation(s)
- Ragita C Pramudya
- Department of Food Science, University of Arkansas, 2650 North Young Avenue, Fayetteville, AR 72704, USA
| | - Han-Seok Seo
- Department of Food Science, University of Arkansas, 2650 North Young Avenue, Fayetteville, AR 72704, USA.
| |
Collapse
|
43
|
Vanzella P, Balardin JB, Furucho RA, Zimeo Morais GA, Braun Janzen T, Sammler D, Sato JR. fNIRS Responses in Professional Violinists While Playing Duets: Evidence for Distinct Leader and Follower Roles at the Brain Level. Front Psychol 2019; 10:164. [PMID: 30804846 PMCID: PMC6370678 DOI: 10.3389/fpsyg.2019.00164] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2018] [Accepted: 01/17/2019] [Indexed: 11/29/2022] Open
Abstract
Music played in ensembles provides a naturalistic model for studying joint action and leader-follower relationships. Recently, the investigation of the brain underpinnings of joint musical actions has gained attention; however, the cerebral correlates underlying the roles of leader and follower in music performance remain elusive. The present study addressed this question by simultaneously measuring the hemodynamic correlates of functional neural activity elicited during naturalistic violin duet performance using fNIRS. Findings revealed distinct patterns of functional brain activation when musicians played the Violin 2 part (follower) versus the Violin 1 part (leader) in duets, both compared with solo performance. More specifically, results indicated that musicians playing the Violin 2 part had greater oxy-Hb activation in temporo-parietal (p = 0.02) and somatomotor (p = 0.04) regions during the duo condition relative to the solo condition. In contrast, there were no significant differences in the activation of these areas between the duo and solo conditions during the execution of the Violin 1 part (all p > 0.05). These findings suggest that ensemble cohesion during a musical performance may impose particular demands when musicians play the follower position, especially in brain areas associated with the processing of dynamic social information and motor simulation. This study is the first to use fNIRS hyperscanning technology to simultaneously measure the brain activity of two musicians during naturalistic music ensemble performance, opening new avenues for the investigation of brain correlates underlying joint musical actions with multiple subjects in a naturalistic environment.
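As a generic illustration of a within-musician duo-versus-solo contrast for a single fNIRS channel (not the study's actual analysis), a paired comparison of oxy-Hb estimates might look like the sketch below; the sample size and values are simulated.

```python
# Sketch of a within-musician contrast between duo and solo conditions
# for one fNIRS channel's oxy-Hb estimate (paired t-test). The values
# are simulated placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_musicians = 16

# Hypothetical oxy-Hb beta estimates (arbitrary units) for one channel
solo = rng.normal(0.10, 0.05, n_musicians)
duo = solo + rng.normal(0.03, 0.04, n_musicians)   # small duo > solo effect

t_val, p_val = stats.ttest_rel(duo, solo)
print(f"duo vs. solo: t({n_musicians - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```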
Collapse
Affiliation(s)
- Patricia Vanzella
- Núcleo Interdisciplinar de Neurociência Aplicada, Universidade Federal do ABC, Santo André, Brazil
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, São Bernardo do Campo, Brazil
| | - Joana B. Balardin
- Hospital Albert Einstein, Instituto do Cérebro – Instituto Israelita de Ensino e Pesquisa Albert Einstein, São Paulo, Brazil
| | - Rogério A. Furucho
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, São Bernardo do Campo, Brazil
| | | | | | - Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - João R. Sato
- Núcleo Interdisciplinar de Neurociência Aplicada, Universidade Federal do ABC, Santo André, Brazil
- Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, São Bernardo do Campo, Brazil
| |
Collapse
|
44
|
Pirazzoli L, Lloyd-Fox S, Braukmann R, Johnson MH, Gliga T. Hand or spoon? Exploring the neural basis of affective touch in 5-month-old infants. Dev Cogn Neurosci 2019; 35:28-35. [PMID: 30120030 PMCID: PMC6968963 DOI: 10.1016/j.dcn.2018.06.002] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2017] [Revised: 05/28/2018] [Accepted: 06/09/2018] [Indexed: 01/07/2023] Open
Abstract
In adults, affective touch leads to widespread activation of cortical areas including the posterior Superior Temporal Sulcus (pSTS) and Inferior Frontal Gyrus (IFG). Using functional Near Infrared Spectroscopy (fNIRS), we asked whether similar areas are activated in 5-month-old infants by comparing affective with non-affective touch. We contrasted a human touch stroke with strokes performed with a cold metallic spoon. The hypothesis that adult-like activation of cortical areas would be seen only in response to the human touch stroke was not confirmed. Similar patterns of activation were seen in both conditions. We conclude that either the posterior STS and IFG have not yet developed selective responses to affective touch, or that additional social cues are needed to identify this type of touch.
Collapse
Affiliation(s)
- L Pirazzoli
- Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK.
| | - S Lloyd-Fox
- Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
| | - R Braukmann
- Radboud University Medical Centre, Donders Institute for Brain, Cognition and Behaviour, Department of Cognitive Neuroscience, Nijmegen, The Netherlands; Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
| | - M H Johnson
- Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
| | - T Gliga
- Centre for Brain and Cognitive Development, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
| |
Collapse
|
45
|
Horii T, Nagai Y, Asada M. Modeling Development of Multimodal Emotion Perception Guided by Tactile Dominance and Perceptual Improvement. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2018.2809434] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
46
|
Brown RM, Penhune VB. Efficacy of Auditory versus Motor Learning for Skilled and Novice Performers. J Cogn Neurosci 2018; 30:1657-1682. [PMID: 30156505 DOI: 10.1162/jocn_a_01309] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Humans must learn a variety of sensorimotor skills, yet the relative contributions of sensory and motor information to skill acquisition remain unclear. Here we compare the behavioral and neural contributions of perceptual learning to those of motor learning, and we test whether these contributions depend on the expertise of the learner. Pianists and nonmusicians learned to perform novel melodies on a piano during fMRI scanning in four learning conditions: listening (auditory learning), performing without auditory feedback (motor learning), performing with auditory feedback (auditory-motor learning), or observing visual cues without performing or listening (cue-only learning). Visual cues were present in every learning condition and consisted of musical notation for pianists and spatial cues for nonmusicians. Melodies were performed from memory with no visual cues and with auditory feedback (recall) five times during learning. Pianists showed greater improvements in pitch and rhythm accuracy at recall during auditory learning compared with motor learning. Nonmusicians demonstrated greater rhythm improvements at recall during auditory learning compared with all other learning conditions. Pianists showed greater primary motor response at recall during auditory learning compared with motor learning, and response in this region during auditory learning correlated with pitch accuracy at recall and with auditory-premotor network response during auditory learning. Nonmusicians showed greater inferior parietal response during auditory compared with auditory-motor learning, and response in this region correlated with pitch accuracy at recall. Results suggest an advantage for perceptual learning compared with motor learning that is both general and expertise-dependent. This advantage is hypothesized to depend on feedforward motor control systems that can be used during learning to transform sensory information into motor production.
Collapse
|
47
|
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains from audiovisual integration, suggesting both a developmental sensitive period for the brain networks that subserve these integrative functions and an effect of the length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly on low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
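One common way to quantify the audiovisual benefit discussed above is the gain of the audiovisual score over the better unisensory score. The sketch below illustrates that calculation with hypothetical word-recognition scores; it is not the specific metric or data used in the review.

```python
# Sketch of one common way to quantify audiovisual benefit in speech
# recognition: percent gain of the audiovisual score over the better
# unisensory score, per participant. Scores are illustrative proportions.
import numpy as np

# Hypothetical per-participant word-recognition scores (proportion correct)
auditory = np.array([0.45, 0.60, 0.30, 0.75])
visual = np.array([0.20, 0.25, 0.35, 0.15])
audiovisual = np.array([0.70, 0.80, 0.55, 0.85])

best_unisensory = np.maximum(auditory, visual)
gain = (audiovisual - best_unisensory) / best_unisensory * 100  # percent gain

for i, g in enumerate(gain, start=1):
    print(f"participant {i}: multisensory gain = {g:.1f}%")
```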
Collapse
Affiliation(s)
- Ryan A Stevenson
- 1Department of Psychology, University of Western Ontario, London, Ontario, Canada; 2Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; 3Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland; 4Vanderbilt Brain Institute, Nashville, Tennessee; 5Vanderbilt Kennedy Center, Nashville, Tennessee; 6Department of Psychology, Vanderbilt University, Nashville, Tennessee; 7Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and 8Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | | | | | | | | |
Collapse
|
48
|
Lu L, Zhang G, Xu J, Liu B. Semantically Congruent Sounds Facilitate the Decoding of Degraded Images. Neuroscience 2018; 377:12-25. [PMID: 29408368 DOI: 10.1016/j.neuroscience.2018.01.051] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Revised: 01/20/2018] [Accepted: 01/23/2018] [Indexed: 11/19/2022]
Abstract
Semantically congruent sounds can facilitate perception of visual objects in the human brain. However, how semantically congruent sounds affect the cognitive processing of degraded visual stimuli remains unclear. We presented participants with naturalistic degraded images and semantically congruent sounds from different conceptual categories in three modalities: degraded visual only, auditory only, and auditory and degraded visual. Functional magnetic resonance imaging was performed to assess variations in brain-activation spatial patterns. To examine the facilitatory effect of auditory modulation at different levels, the four conceptual categories of stimuli were divided into coarse and fine groups. Conjunction analysis and multivariate pattern analysis were used to investigate integrative properties. Superadditive interactions were found in the visual association cortex, and subadditive interactions were observed in the superior temporal sulcus/superior temporal gyrus (STS/STG). Our results demonstrate that the visual association cortex and STS/STG are involved in the integration of auditory and degraded visual information. In addition, the pattern classification results imply that semantically congruent sounds may facilitate identification of degraded images in both the coarse and fine groups. Importantly, when the naturalistic visual stimuli were further subdivided, facilitation through auditory modulation exhibited category selectivity.
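As a minimal sketch of a multivariate pattern analysis of the kind mentioned above (not the authors' pipeline), a cross-validated linear classifier can be trained on trial-by-voxel patterns; the data, feature dimensions, and signal strength below are simulated assumptions.

```python
# Sketch of a multivariate pattern analysis: cross-validated decoding of
# stimulus category from simulated voxel patterns (the study's actual
# features, labels, and preprocessing are not reproduced here).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)          # two stimulus categories

# Simulated patterns with a weak category-dependent signal
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```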
Collapse
Affiliation(s)
- Lu Lu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
| | - Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
| | - Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
| | - Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, PR China.
| |
Collapse
|
49
|
Huang R, Chen C, Sereno MI. Spatiotemporal integration of looming visual and tactile stimuli near the face. Hum Brain Mapp 2018; 39:2156-2176. [PMID: 29411461 PMCID: PMC5895522 DOI: 10.1002/hbm.23995] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 01/10/2018] [Accepted: 01/26/2018] [Indexed: 12/27/2022] Open
Abstract
Real-world objects approaching or passing by an observer often generate visual, auditory, and tactile signals with different onsets and durations. Prompt detection and avoidance of an impending threat depend on precise binding of looming signals across modalities. Here we constructed a multisensory apparatus to study the spatiotemporal integration of looming visual and tactile stimuli near the face. In a psychophysical experiment, subjects assessed the subjective synchrony between a looming ball and an air puff delivered to the same side of the face with a varying temporal offset. Multisensory stimuli with similar onset times were perceived as completely out of sync and assessed with the lowest subjective synchrony index (SSI). Across subjects, the SSI peaked at an offset between 800 and 1,000 ms, where the multisensory stimuli were perceived as optimally in sync. In an fMRI experiment, tactile, visual, tactile-visual out-of-sync (TVoS), and tactile-visual in-sync (TViS) stimuli were delivered to either side of the face in randomized events. Group-average statistical responses to different stimuli were compared within each surface-based region of interest (sROI) outlined on the cortical surface. Most sROIs showed a preference for contralateral stimuli and higher responses to multisensory than unisensory stimuli. In several bilateral sROIs, particularly the human MT+ complex and V6A, responses to spatially aligned multisensory stimuli (TVoS) were further enhanced when the stimuli were in-sync (TViS), as expressed by TVoS < TViS. This study demonstrates the perceptual and neural mechanisms of multisensory integration near the face, which has potential applications in the development of multisensory entertainment systems and media.
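The sketch below illustrates, with simulated ratings, how the offset yielding the highest mean subjective synchrony might be located across a set of tested visuo-tactile offsets; the offsets, number of participants, and rating scale are assumptions rather than the study's data.

```python
# Sketch of locating the visuo-tactile offset with the highest mean
# subjective synchrony rating across participants. Offsets and ratings
# are illustrative, not the study's data.
import numpy as np

offsets_ms = np.array([0, 200, 400, 600, 800, 1000, 1200])

# Hypothetical synchrony ratings (participants x offsets), 0-1 scale
rng = np.random.default_rng(4)
true_curve = np.exp(-((offsets_ms - 900) / 300.0) ** 2)
ratings = np.clip(true_curve + rng.normal(0, 0.1, (12, offsets_ms.size)), 0, 1)

mean_ssi = ratings.mean(axis=0)
best = offsets_ms[np.argmax(mean_ssi)]
print(f"mean synchrony peaks at an offset of {best} ms")
```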
Collapse
Affiliation(s)
- Ruey‐Song Huang
- Institute for Neural Computation, University of California, San Diego, La Jolla, California
| | - Ching‐fu Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, California
| | - Martin I. Sereno
- Department of Psychology and Neuroimaging Center, San Diego State University, San Diego, California
- Experimental Psychology, University College London, London, UK
| |
Collapse
|
50
|
Avery JA, Ingeholm JE, Wohltjen S, Collins M, Riddell CD, Gotts SJ, Kenworthy L, Wallace GL, Simmons WK, Martin A. Neural correlates of taste reactivity in autism spectrum disorder. Neuroimage Clin 2018; 19:38-46. [PMID: 30035000 PMCID: PMC6051474 DOI: 10.1016/j.nicl.2018.04.008] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 02/22/2018] [Accepted: 04/01/2018] [Indexed: 11/02/2022]
Abstract
Selective or 'picky' eating habits are common among those with autism spectrum disorder (ASD). These behaviors are often related to aberrant sensory experience in individuals with ASD, including heightened reactivity to food taste and texture. However, very little is known about the neural mechanisms that underlie taste reactivity in ASD. In the present study, food-related neural responses were evaluated in 21 young adult and adolescent males diagnosed with ASD without intellectual disability, and 21 typically-developing (TD) controls. Taste reactivity was assessed using the Adolescent/Adult Sensory Profile, a clinical self-report measure. Functional magnetic resonance imaging was used to evaluate hemodynamic responses to sweet (vs. neutral) tastants and food pictures. Subjects also underwent resting-state functional connectivity scans. The ASD and TD individuals did not differ in their hemodynamic response to gustatory stimuli. However, the ASD subjects, but not the controls, exhibited a positive association between self-reported taste reactivity and the response to sweet tastants within the insular cortex and multiple brain regions associated with gustatory perception and reward. There was a strong interaction between diagnostic group and taste reactivity on tastant response in brain regions associated with ASD pathophysiology, including the bilateral anterior superior temporal sulcus (STS). This interaction of diagnosis and taste reactivity was also observed in the resting state functional connectivity between the anterior STS and dorsal mid-insula (i.e., gustatory cortex). These results suggest that self-reported heightened taste reactivity in ASD is associated with heightened brain responses to food-related stimuli and atypical functional connectivity of primary gustatory cortex, which may predispose these individuals to maladaptive and unhealthy patterns of selective eating behavior. Trial registration (clinicaltrials.gov identifier) NCT01031407. Registered: December 14, 2009.
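As a schematic of a diagnosis-by-reactivity interaction analysis of the kind described above (not the authors' actual model), one might fit an ordinary least-squares model with an interaction term; the data frame below is simulated and the column names are illustrative.

```python
# Sketch of testing a diagnosis-by-taste-reactivity interaction on an ROI
# response with an ordinary least-squares model. The data frame is
# simulated; column names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_per_group = 21
group = np.repeat(["ASD", "TD"], n_per_group)
reactivity = rng.normal(50, 10, 2 * n_per_group)

# Simulated ROI response: reactivity predicts response in the ASD group only
slope = np.where(group == "ASD", 0.04, 0.0)
roi_response = slope * reactivity + rng.normal(0, 0.3, 2 * n_per_group)

df = pd.DataFrame({"group": group, "reactivity": reactivity,
                   "roi_response": roi_response})
model = smf.ols("roi_response ~ group * reactivity", data=df).fit()
print(model.summary())
```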
Collapse
Affiliation(s)
- Jason A Avery
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States.
| | - John E Ingeholm
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| | - Sophie Wohltjen
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| | - Meghan Collins
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| | - Cameron D Riddell
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| | - Stephen J Gotts
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| | - Lauren Kenworthy
- Center for Autism Spectrum Disorders, Children's National Health System, Washington, DC, United States
| | - Gregory L Wallace
- Department of Speech, Language, and Hearing Sciences, The George Washington University, Washington, DC, United States
| | - W Kyle Simmons
- Laureate Institute for Brain Research, Tulsa, OK, United States; School of Community Medicine, The University of Tulsa, Tulsa, OK, United States
| | - Alex Martin
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, United States
| |
Collapse
|