1
Sheng Zheng Z, Xing-Long Wang K, Millan H, Lee S, Howard M, Rothbart A, Rosario E, Schnakers C. Transcranial direct stimulation over left inferior frontal gyrus improves language production and comprehension in post-stroke aphasia: A double-blind randomized controlled study. Brain Lang 2024; 257:105459. [PMID: 39241469 DOI: 10.1016/j.bandl.2024.105459]
Abstract
Transcranial direct current stimulation (tDCS) targeting Broca's area has shown promise for augmenting language production in post-stroke aphasia (PSA). However, previous research has been limited by small sample sizes and inconsistent outcomes. This study employed a double-blind, parallel, randomized, controlled design to evaluate the efficacy of anodal Broca's tDCS, paired with 20-minute speech and language therapy (SLT) focused primarily on expressive language, across 5 daily sessions in 45 chronic PSA patients. Utilizing the Western Aphasia Battery-Revised, which assesses a spectrum of linguistic abilities, we measured changes in both expressive and receptive language skills before and after intervention. The tDCS group demonstrated significant improvements over sham in aphasia quotient, auditory verbal comprehension, and spontaneous speech. Notably, tDCS improved both expressive and receptive domains, whereas sham only benefited expression. These results underscore the broader linguistic benefits of Broca's area stimulation and support the integration of tDCS with SLT to advance aphasia rehabilitation.
Affiliation(s)
- Zhong Sheng Zheng
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA.
- Henry Millan
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
- Sharon Lee
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
- Melissa Howard
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
- Aaron Rothbart
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
- Emily Rosario
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
- Caroline Schnakers
- Research Institute, Casa Colina Hospital and Centers for Healthcare, Pomona, CA, USA
2
Wang X, Lu K, He Y, Qiao X, Gao Z, Zhang Y, Hao N. Dynamic brain networks in spontaneous gestural communication. NPJ Sci Learn 2024; 9:59. [PMID: 39353927 PMCID: PMC11445455 DOI: 10.1038/s41539-024-00274-2]
Abstract
Gestures accent and illustrate our communication. Although previous studies have uncovered the positive effects of gestures on communication, little is known about the specific cognitive functions of different types of gestures, or the instantaneous multi-brain dynamics. Here we used the fNIRS-based hyperscanning technique to track the brain activity of two communicators, examining regions such as the prefrontal cortex (PFC) and right temporoparietal junction (rTPJ), which are part of the mirroring and mentalizing systems. When participants collaboratively solved open-ended realistic problems, we characterised the dynamic multi-brain states linked with specific social behaviours. Results demonstrated that gestures are associated with enhanced team performance, and different gestures serve distinct cognitive functions: interactive gestures are accompanied by better team originality and a more efficient inter-brain network, while fluid gestures correlate with individual cognitive fluency and efficient intra-brain states. These findings reveal a close association between social behaviours and multi-brain networks, providing a new way to explore the brain-behaviour relationship.
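As a loose, hypothetical illustration of the inter-brain network idea summarized above (and not the authors' actual fNIRS analysis), the sketch below correlates channel time series between two simulated participants and treats strong correlations as edges of an inter-brain network; the signals, channel count, and 0.3 threshold are all invented for the example.

```python
# Hypothetical sketch (not the authors' pipeline): a simple way to quantify
# inter-brain coupling in fNIRS hyperscanning is to correlate channel time
# series across the two participants and treat strong correlations as edges
# of an inter-brain network. All signals and the 0.3 threshold are made up.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 600                            # e.g., 8 channels, 600 time points
brain_a = rng.standard_normal((n_channels, n_samples))    # placeholder HbO signals
brain_b = rng.standard_normal((n_channels, n_samples))
brain_b[2] += 0.8 * brain_a[5]                            # inject coupling between two channels

# Pairwise Pearson correlations between every channel of A and every channel of B
coupling = np.corrcoef(np.vstack([brain_a, brain_b]))[:n_channels, n_channels:]

# Binarize into an inter-brain network and summarize it
edges = np.abs(coupling) > 0.3
print("inter-brain edge density:", edges.mean())
print("strongest A-B pair:", np.unravel_index(np.abs(coupling).argmax(), coupling.shape))
```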
Affiliation(s)
- Xinyue Wang
- School of Psychology, Nanjing Normal University, Nanjing, Jiangsu, China
- Kelong Lu
- School of Mental Health, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Yingyao He
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Xinuo Qiao
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Zhenni Gao
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Yu Zhang
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Ning Hao
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China.
- Key Laboratory of Philosophy and Social Science of Anhui Province on Adolescent Mental Health and Crisis Intelligence Intervention, Hefei Normal University, Hefei, China.
3
Zhang Y, Wu P, Xie S, Hou Y, Wu H, Shi H. The neural mechanism of communication between graduate students and advisers in different adviser-advisee relationships. Sci Rep 2024; 14:11741. [PMID: 38778035 PMCID: PMC11111769 DOI: 10.1038/s41598-024-58308-z]
Abstract
Communication is crucial in constructing the relationship between students and advisers, ultimately bridging interpersonal interactions. However, only a few studies have explored the communication between postgraduate students and advisers. To fill this gap in the empirical research, this study used functional near-infrared spectroscopy (fNIRS) to explore neurophysiological differences in the brain activation of postgraduates with different adviser-advisee relationships during simulated communication with their advisers. Results showed significant differences in prefrontal cortex activation between students in high-quality and low-quality relationships when communicating with their advisers, specifically in Broca's area, the frontal pole, and the orbitofrontal and dorsolateral prefrontal cortices. This further elucidates the complex cognitive process of communication between graduate students and advisers.
Affiliation(s)
- Yan Zhang
- School of Education, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Research Center for Innovative Education and Critical Thinking, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Peipei Wu
- School of Education, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Simiao Xie
- School of Education, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Mental Health Education Center, Jinan University, Guangzhou, 510631, Guangdong, China
- Yan Hou
- School of Education, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Mental Health Education Center, Hubei University for Nationalities, Enshi, 450004, Hubei, China
- Huifen Wu
- School of Education, Hubei Engineering University, Xiaogan, 432100, Hubei, China.
- Hui Shi
- Department of Clinical Psychology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, 100020, China.
4
Chen Y, Wang S, Yang L, Liu Y, Fu X, Wang Y, Zhang X, Wang S. Features of the speech processing network in post- and prelingually deaf cochlear implant users. Cereb Cortex 2024; 34:bhad417. [PMID: 38163443 DOI: 10.1093/cercor/bhad417]
Abstract
The onset of hearing loss can lead to altered brain structure and functions. However, hearing restoration may also result in distinct cortical reorganization. A differential pattern of functional remodeling was observed between post- and prelingual cochlear implant users, but it remains unclear how these speech processing networks are reorganized after cochlear implantation. To explore the impact of language acquisition and hearing restoration on speech perception in cochlear implant users, we conducted assessments of brain activation, functional connectivity, and graph theory-based analysis using functional near-infrared spectroscopy. We examined the effects of speech-in-noise stimuli on three groups: postlingual cochlear implant users (n = 12), prelingual cochlear implant users (n = 10), and age-matched hearing controls (HC) (n = 22). Auditory-related areas in cochlear implant users showed lower activation compared with the HC group. Wernicke's area and Broca's area demonstrated different network attributes in the speech processing networks of post- and prelingual cochlear implant users. In addition, cochlear implant users maintained a high efficiency of the speech processing network to process speech information. Taken together, our results characterize the speech processing networks, in varying noise environments, in post- and prelingual cochlear implant users and provide new insights for theories of how implantation modes impact remodeling of the speech processing functional networks.
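For readers unfamiliar with the graph-theory measures mentioned above, the following minimal sketch computes one such metric, global efficiency, from a synthetic connectivity matrix with an arbitrary binarization threshold; it is not the pipeline used in the study.

```python
# Minimal sketch of one graph-theory metric often used in such analyses
# (global efficiency), assuming a synthetic connectivity matrix and an
# arbitrary binarization threshold; this is not the study's pipeline.
import numpy as np

def global_efficiency(adjacency: np.ndarray) -> float:
    """Mean inverse shortest-path length over all node pairs of a binary graph."""
    n = adjacency.shape[0]
    dist = np.where(adjacency > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                                 # Floyd-Warshall shortest paths
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    with np.errstate(divide="ignore"):
        inv = 1.0 / dist
    np.fill_diagonal(inv, 0.0)
    return inv.sum() / (n * (n - 1))

rng = np.random.default_rng(1)
connectivity = np.abs(rng.standard_normal((10, 10)))   # placeholder functional connectivity
connectivity = (connectivity + connectivity.T) / 2
adjacency = (connectivity > 1.0).astype(float)         # arbitrary threshold
print("global efficiency:", round(global_efficiency(adjacency), 3))
```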
Affiliation(s)
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Liu Yang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Yi Liu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xinxing Fu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Yuan Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
5
Chui K, Ng CT, Chang TT. The visuo-sensorimotor substrate of co-speech gesture processing. Neuropsychologia 2023; 190:108697. [PMID: 37827428 DOI: 10.1016/j.neuropsychologia.2023.108697]
Abstract
Co-speech gestures are integral to human communication and exhibit diverse forms, each serving a distinct communication function. However, existing literature has focused on individual gesture types, leaving a gap in understanding the comparative neural processing of these diverse forms. To address this, our study investigated the neural processing of two types of iconic gestures (those representing attributes or event knowledge of entity concepts), beat gestures enacting rhythmic manual movements without semantic information, and self-adaptors. During functional magnetic resonance imaging, systematic randomization and attentive observation of video stimuli revealed a general neural substrate for co-speech gesture processing primarily in the bilateral middle temporal and inferior parietal cortices, characterizing visuospatial attention, semantic integration of cross-modal information, and multisensory processing of manual and audiovisual inputs. Specific types of gestures and grooming movements elicited distinct neural responses. Greater activity in the right supramarginal and inferior frontal regions was specific to self-adaptors and is relevant to the spatiomotor and integrative processing of speech and gestures. The semantic and sensorimotor regions were least active for beat gestures. The processing of attribute gestures was most pronounced in the left posterior middle temporal gyrus upon access to knowledge of entity concepts. This fMRI study illuminated the neural underpinnings of gesture-speech integration and highlighted the differential processing pathways for various co-speech gestures.
Affiliation(s)
- Kawai Chui
- Department of English, National Chengchi University, Taipei, Taiwan; Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan
- Chan-Tat Ng
- Department of Psychology, National Chengchi University, Taipei, Taiwan
- Ting-Ting Chang
- Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan; Department of Psychology, National Chengchi University, Taipei, Taiwan.
6
Asalıoğlu EN, Göksun T. The role of hand gestures in emotion communication: Do type and size of gestures matter? Psychol Res 2023; 87:1880-1898. [PMID: 36436110 DOI: 10.1007/s00426-022-01774-9]
Abstract
We communicate emotions in a multimodal way, yet non-verbal emotion communication is a relatively understudied area of research. In three experiments, we investigated the role of gesture characteristics (e.g., type, size in space) on individuals' processing of emotional content. In Experiment 1, participants were asked to rate the emotional intensity of emotional narratives from videoclips containing either iconic or beat gestures. Participants in the iconic gesture condition rated the emotional intensity higher than participants in the beat gesture condition. In Experiment 2, the size of gestures and its interaction with gesture type were investigated in a within-subjects design. Participants again rated the emotional intensity of emotional narratives from the videoclips. Although individuals overall rated narrow gestures as more emotionally intense than wider gestures, no effects of gesture type, or of a gesture size by type interaction, were found. Experiment 3 was conducted to check whether the findings of Experiment 2 were due to viewing gestures in all videoclips. We compared the gesture and no gesture (i.e., speech only) conditions and found no difference between them in emotional intensity ratings. However, we could not replicate the gesture-size findings of Experiment 2. Overall, these findings indicate the importance of examining the role of gestures in emotional contexts and show that gesture characteristics such as the size of gestures can be considered in nonverbal communication.
Affiliation(s)
- Esma Nur Asalıoğlu
- Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Tilbe Göksun
- Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey.
7
Hartmann M, Carlson E, Mavrolampados A, Burger B, Toiviainen P. Postural and Gestural Synchronization, Sequential Imitation, and Mirroring Predict Perceived Coupling of Dancing Dyads. Cogn Sci 2023; 47:e13281. [PMID: 37096347 DOI: 10.1111/cogs.13281]
Abstract
Body movement is a primary nonverbal communication channel in humans. Coordinated social behaviors, such as dancing together, encourage multifarious rhythmic and interpersonally coupled movements from which observers can extract socially and contextually relevant information. The investigation of relations between visual social perception and kinematic motor coupling is important for social cognition. Perceived coupling of dyads spontaneously dancing to pop music has been shown to be highly driven by the degree of frontal orientation between dancers. The perceptual salience of other aspects, including postural congruence, movement frequencies, time-delayed relations, and horizontal mirroring remains, however, uncertain. In a motion capture study, 90 participant dyads moved freely to 16 musical excerpts from eight musical genres, while their movements were recorded using optical motion capture. A total of 128 recordings from 8 dyads maximally facing each other were selected to generate silent 8-s animations. Three kinematic features describing simultaneous and sequential full body coupling were extracted from the dyads. In an online experiment, the animations were presented to 432 observers, who were asked to rate perceived similarity and interaction between dancers. We found dyadic kinematic coupling estimates to be higher than those obtained from surrogate estimates, providing evidence for a social dimension of entrainment in dance. Further, we observed links between perceived similarity and coupling of both slower simultaneous horizontal gestures and posture bounding volumes. Perceived interaction, on the other hand, was more related to coupling of faster simultaneous gestures and to sequential coupling. Also, dyads who were perceived as more coupled tended to mirror their pair's movements.
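To make the coupling-versus-surrogate logic concrete, here is a hypothetical sketch in which dyadic coupling is scored as the maximum lagged correlation between two dancers' movement signals and compared with estimates from re-paired (surrogate) dyads; the signals and coupling measure are placeholders rather than the kinematic features used in the study.

```python
# Illustrative sketch only (not the study's kinematic features): dyadic coupling
# is scored as the maximum lagged correlation between two dancers' movement
# signals and compared against surrogate estimates from re-paired dyads.
import numpy as np

rng = np.random.default_rng(2)

def coupling(x, y, max_lag=30):
    """Maximum absolute lagged correlation between two z-scored signals."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
        else:
            r = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        best = max(best, abs(r))
    return best

# Synthetic "dancer" signals: each dyad shares a common rhythmic component
t = np.linspace(0, 8, 400)
dyads = []
for _ in range(20):
    shared = np.sin(2 * np.pi * rng.uniform(1.5, 3.0) * t + rng.uniform(0, np.pi))
    dyads.append((shared + 0.8 * rng.standard_normal(t.size),
                  shared + 0.8 * rng.standard_normal(t.size)))

real = np.mean([coupling(a, b) for a, b in dyads])
# Surrogates: pair each dancer with a member of a different dyad
surrogate = np.mean([coupling(dyads[i][0], dyads[(i + 1) % 20][1]) for i in range(20)])
print(f"real coupling {real:.2f} vs surrogate coupling {surrogate:.2f}")
```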
Affiliation(s)
- Martin Hartmann
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä
- Department of Music, Art and Culture Studies, University of Jyväskylä
- Emily Carlson
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä
- Department of Music, Art and Culture Studies, University of Jyväskylä
- Anastasios Mavrolampados
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä
- Department of Music, Art and Culture Studies, University of Jyväskylä
- Petri Toiviainen
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä
- Department of Music, Art and Culture Studies, University of Jyväskylä
8
Caravaglios G, Muscoso EG, Blandino V, Di Maria G, Gangitano M, Graziano F, Guajana F, Piccoli T. EEG Resting-State Functional Networks in Amnestic Mild Cognitive Impairment. Clin EEG Neurosci 2023; 54:36-50. [PMID: 35758261 DOI: 10.1177/15500594221110036]
Abstract
Background. Alzheimer's cognitive-behavioral syndrome is the result of impaired connectivity between nerve cells, due to misfolded proteins, which accumulate and disrupt specific brain networks. Electroencephalography, because of its excellent temporal resolution, is an optimal approach for assessing the communication between functionally related brain regions. Objective. To detect and compare EEG resting-state networks (RSNs) in patients with amnesic mild cognitive impairment (aMCI) and in healthy elderly (HE) subjects. Methods. We recruited 125 aMCI patients and 70 healthy elderly subjects. One hundred and twenty seconds of artifact-free EEG data were selected and compared between patients with aMCI and HE. We applied standard low-resolution brain electromagnetic tomography (sLORETA)-independent component analysis (ICA) to assess resting-state networks. Each network consisted of a set of images, one for each frequency band (delta, theta, alpha1/2, beta1/2). Results. The functional ICA analysis revealed 17 networks common to both groups. The statistical procedure demonstrated that aMCI patients used some networks differently than HE. The most relevant findings were as follows. Amnesic MCI patients had: i) increased delta/beta activity in the superior frontal gyrus and decreased alpha1 activity in the paracentral lobule (i.e., default mode network); ii) greater delta/theta/alpha/beta in the superior frontal gyrus (i.e., attention network); iii) lower alpha in the left superior parietal lobe, as well as lower delta/theta and beta in the postcentral gyrus and superior frontal gyrus, respectively (i.e., attention network). Conclusions. Our study confirms that the sLORETA-ICA method is effective in detecting functional resting-state networks, as well as between-group connectivity differences. The findings provide support to the Alzheimer's network disconnection hypothesis.
Affiliation(s)
- G Caravaglios
- U.O.C. Neurologia, A.O. Cannizzaro per l'emergenza, Catania, Italy
- E G Muscoso
- U.O.C. Neurologia, A.O. Cannizzaro per l'emergenza, Catania, Italy
- V Blandino
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), University of Palermo, Palermo, Italy
- G Di Maria
- U.O.C. Neurologia, A.O. Cannizzaro per l'emergenza, Catania, Italy
- M Gangitano
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), University of Palermo, Palermo, Italy
- F Graziano
- U.O.C. Neurologia, A.O. Cannizzaro per l'emergenza, Catania, Italy
- F Guajana
- U.O.C. Neurologia, A.O. Cannizzaro per l'emergenza, Catania, Italy
- T Piccoli
- Department of Biomedicine, Neuroscience and Advanced Diagnostics (Bi.N.D.), University of Palermo, Palermo, Italy
9
Mumford KH, Aussems S, Kita S. Encouraging pointing with the right hand, but not the left hand, gives right-handed 3-year-olds a linguistic advantage. Dev Sci 2022; 26:e13315. [PMID: 36059145 DOI: 10.1111/desc.13315]
Abstract
Previous research has shown a strong positive association between right-handed gesturing and vocabulary development. However, the causal nature of this relationship remains unclear. In the current study, we tested whether gesturing with the right hand enhances linguistic processing in the left hemisphere, which is contralateral to the right hand. We manipulated the gesture hand children used in pointing tasks to test whether it would affect their performance. In either a linguistic task (verb learning) or a non-linguistic control task (memory), 131 typically developing right-handed 3-year-olds were encouraged to use either their right hand or left hand to respond. While encouraging children to use a specific hand to indicate their responses had no effect on memory performance, encouraging children to use the right hand to respond, compared to the left hand, significantly improved their verb learning performance. This study is the first to show that manipulating the hand with which children are encouraged to gesture gives them a linguistic advantage. Language lateralization in healthy right-handed children typically involves a dominant left hemisphere. Producing right-handed gestures may therefore lead to increased activation in the left hemisphere which may, in turn, facilitate forming and accessing lexical representations. It is important to note that this study manipulated gesture handedness among right-handers and does therefore not support the practice of encouraging children to become right-handed in manual activities. RESEARCH HIGHLIGHTS: Right-handed 3-year-olds were instructed to point to indicate their answers exclusively with their right or left hand in either a memory or verb learning task. Right-handed pointing was associated with improved verb generalization performance, but not improved memory performance. Thus, gesturing with the right hand, compared to the left hand, gives right-handed 3-year-olds an advantage in a linguistic but not a non-linguistic task. Right-handed pointing might lead to increased activation in the left hemisphere and facilitate forming and accessing lexical representations.
Affiliation(s)
- Suzanne Aussems
- Department of Psychology, University of Warwick, Coventry, UK
- Sotaro Kita
- Department of Psychology, University of Warwick, Coventry, UK
10
Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022; 140:104772. [PMID: 35835286 DOI: 10.1016/j.neubiorev.2022.104772]
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery', distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses conducted, and comparisons with other theories of consciousness made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.
11
Korann V, Jacob A, Lu B, Devi P, Thonse U, Nagendra B, Maria Chacko D, Dey A, Padmanabha A, Shivakumar V, Dawn Bharath R, Kumar V, Varambally S, Venkatasubramanian G, Deshpande G, Rao NP. Effect of Intranasal Oxytocin on Resting-state Effective Connectivity in Schizophrenia. Schizophr Bull 2022; 48:1115-1124. [PMID: 35759349 PMCID: PMC9434443 DOI: 10.1093/schbul/sbac066]
Abstract
OBJECTIVES Evidence from several lines of research suggests the critical role of the neuropeptide oxytocin in social cognition and social behavior. Though a few studies have examined the effect of oxytocin on clinical symptoms of schizophrenia, the underlying neurobiological changes are underexamined. Hence, in this study, we examined the effect of oxytocin on the brain's effective connectivity in schizophrenia. METHODS 31 male patients with schizophrenia (SCZ) and 21 healthy male volunteers (HV) underwent resting functional magnetic resonance imaging scans with intra-nasal oxytocin (24 IU) and placebo administered in counterbalanced order. We conducted a whole-brain effective connectivity analysis using a multivariate vector autoregressive Granger causality model. We performed a conjunction analysis to control for spurious changes and a canonical correlation analysis between changes in connectivity and clinical and demographic variables. RESULTS Three connections sourced from the left caudate survived the FDR correction threshold in the conjunction analysis: connections to the left supplementary motor area, left precentral gyrus, and left inferior frontal gyrus (pars triangularis). At baseline, SCZ patients had significantly weaker connectivity from the caudate to these three regions. Oxytocin, but not placebo, significantly increased the strength of connectivity in these connections. Better cognitive insight and lower negative symptoms were associated with a greater increase in connectivity with oxytocin. CONCLUSIONS These findings provide a preliminary mechanistic understanding of the effect of oxytocin on brain connectivity in schizophrenia. The study findings provide the rationale to examine the potential utility of oxytocin for social cognitive deficits in schizophrenia.
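For orientation, the sketch below illustrates the bivariate form of Granger causality: lagged values of x are said to Granger-cause y if they reduce the prediction error of an autoregressive model of y. This is a simplified stand-in for the multivariate whole-brain model used in the study, with synthetic data and an arbitrary model order.

```python
# Minimal illustrative sketch (not the authors' multivariate implementation):
# bivariate Granger causality compares the prediction error of an autoregressive
# model of y with and without lagged values of x. All signals are synthetic.
import numpy as np

rng = np.random.default_rng(3)

def granger_fstat(x, y, order=2):
    """F statistic for the null 'lagged x adds nothing to predicting y'."""
    n = len(y)
    Y = y[order:]
    lag = lambda s: np.column_stack([s[order - k:n - k] for k in range(1, order + 1)])
    X_restricted = np.column_stack([np.ones(n - order), lag(y)])
    X_full = np.column_stack([X_restricted, lag(x)])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_restricted), rss(X_full)
    dof = n - order - X_full.shape[1]
    return ((rss_r - rss_f) / order) / (rss_f / dof)

# Toy example: x drives y with a one-sample delay
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * x[t - 1] + 0.3 * y[t - 1] + 0.5 * rng.standard_normal()
print("x -> y F:", round(granger_fstat(x, y), 1))
print("y -> x F:", round(granger_fstat(y, x), 1))
```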
Affiliation(s)
- Bonian Lu
- AU MRI Research Center, Department of Electrical and Computer Engineering, Auburn University, Auburn, AL, USA
- Priyanka Devi
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Umesh Thonse
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Bhargavi Nagendra
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Dona Maria Chacko
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Avyarthana Dey
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Anantha Padmanabha
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Venkataram Shivakumar
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Rose Dawn Bharath
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Vijay Kumar
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Shivarama Varambally
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Ganesan Venkatasubramanian
- Department of Psychiatry, National Institute of Mental Health and Neurosciences, Bangalore, Karnataka, India
- Naren P Rao
- To whom correspondence should be addressed; tel: +91-80-26995879, e-mail:
12
Geary DC, Xu KM. Evolution of Self-Awareness and the Cultural Emergence of Academic and Non-academic Self-Concepts. Educ Psychol Rev 2022; 34:2323-2349. [PMID: 35340928 PMCID: PMC8934684 DOI: 10.1007/s10648-022-09669-2]
Abstract
Schooling is ubiquitous in the modern world and academic development is now a critical aspect of preparation for adulthood. A step back in time to pre-modern societies and an examination of life in remaining traditional societies today reveals that universal formal schooling is an historically recent phenomenon. This evolutionary and historical recency has profound implications for understanding academic development, including how instructional practices modify evolved or biologically primary abilities (e.g., spoken language) to create evolutionarily novel or biologically secondary academic competencies (e.g., reading). We propose that the development of secondary abilities promotes the emergence of academic self-concepts that in turn are supported by evolved systems for self-awareness and self-knowledge. Unlike some forms of self-knowledge (e.g., relative physical abilities) that appear to be universal and central to many people's overall self-concept, the relative importance of academic self-concepts is expected to be dependent on explicit social and cultural supports for their valuation. These culturally contingent self-concepts are contrasted with universal social and physical self-concepts, with implications for understanding variation in students' relative valuation of academic competencies and their motivations to engage in academic learning.
Affiliation(s)
- David C. Geary
- Department of Psychological Sciences, University of Missouri, Columbia, MO 65211-2500 USA
- Kate M. Xu
- Faculty of Educational Sciences, Open University of the Netherlands, Heerlen, the Netherlands
13
Levitt JJ, Zhang F, Vangel M, Nestor PG, Rathi Y, Kubicki M, Shenton ME, O'Donnell LJ. The Organization of Frontostriatal Brain Wiring in Healthy Subjects Using a Novel Diffusion Imaging Fiber Cluster Analysis. Cereb Cortex 2021; 31:5308-5318. [PMID: 34180506 DOI: 10.1093/cercor/bhab159]
Abstract
To assess the normal organization of frontostriatal brain wiring, we analyzed diffusion magnetic resonance imaging (dMRI) scans in 100 young adult healthy subjects (HSs). We identified fiber clusters intersecting the frontal cortex and caudate, a core component of the associative striatum, and quantified their degree of deviation from a strictly topographic pattern. Using whole brain dMRI tractography and an automated tract parcellation clustering method, we extracted 17 white matter fiber clusters per hemisphere connecting the frontal cortex and caudate. In a novel approach to quantify the geometric relationship among clusters, we measured intercluster endpoint distances between corresponding cluster pairs in the frontal cortex and caudate. We show, first, that the overall frontal cortex wiring pattern of the caudate deviates from a strictly topographic organization due to significantly greater convergence in regionally specific clusters; second, that these significantly convergent clusters originate in subregions of ventrolateral, dorsolateral, and orbitofrontal prefrontal cortex (PFC); and, third, that the organization is similar in both hemispheres. Using a novel tractography method, we find that PFC-caudate brain wiring in HSs deviates from a strictly topographic organization due to a regionally specific pattern of cluster convergence. We conjecture that cortical subregions projecting to the caudate with greater convergence subserve functions that benefit from greater circuit integration.
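The intercluster endpoint-distance idea can be illustrated with a toy computation: take the centroid of each cluster's endpoints in the frontal cortex and in the caudate, compute pairwise inter-cluster distances at each end, and read a reduction in distance at the caudate end as convergence. The coordinates and cluster count below are synthetic, not values from the study.

```python
# A rough sketch of the geometric idea described above (not the published pipeline):
# compare inter-cluster distances between endpoint centroids at the cortical end
# and at the caudate end. Smaller caudate distances relative to cortical distances
# indicate convergence rather than strict topography. Coordinates are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_clusters = 17
cortex_centroids = rng.uniform(0, 60, size=(n_clusters, 3))    # mm, placeholder
caudate_centroids = rng.uniform(0, 12, size=(n_clusters, 3))   # caudate is much smaller

def pairwise_distances(points):
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

d_cortex = pairwise_distances(cortex_centroids)
d_caudate = pairwise_distances(caudate_centroids)

# Convergence per cluster pair: how much closer the caudate ends are than the cortical ends
iu = np.triu_indices(n_clusters, k=1)
convergence = d_cortex[iu] - d_caudate[iu]
print("mean endpoint-distance reduction (mm):", round(convergence.mean(), 1))
```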
Affiliation(s)
- J J Levitt
- Department of Psychiatry, VA Boston Healthcare System, Brockton Division, Brockton MA 02301, USA.,Department of Psychiatry, Harvard Medical School, Boston, MA 02115, USA.,Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02215, USA
- F Zhang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- M Vangel
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- P G Nestor
- Department of Psychiatry, VA Boston Healthcare System, Brockton Division, Brockton MA 02301, USA.,Department of Psychiatry, Harvard Medical School, Boston, MA 02115, USA.,Department of Psychology, University of Massachusetts, Boston, MA 02125, USA
- Y Rathi
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02215, USA.,Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- M Kubicki
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02215, USA.,Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA.,Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- M E Shenton
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02215, USA.,Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA.,Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- L J O'Donnell
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
14
Wang J, Chen J, Yang X, Liu L, Wu C, Lu L, Li L, Wu Y. Common Brain Substrates Underlying Auditory Speech Priming and Perceived Spatial Separation. Front Neurosci 2021; 15:664985. [PMID: 34220425 PMCID: PMC8247760 DOI: 10.3389/fnins.2021.664985]
Abstract
In a “cocktail party” environment, listeners can utilize prior knowledge of the content and voice of the target speech [i.e., auditory speech priming (ASP)] and perceived spatial separation to improve recognition of the target speech among masking speech. Previous studies suggest that these two unmasking cues are not processed independently. However, it is unclear whether the unmasking effects of these two cues are supported by common neural bases. In the current study, we aimed to first confirm that ASP and perceived spatial separation contribute to the improvement of speech recognition interactively in a multitalker condition, and to further investigate whether common brain substrates underlie both unmasking effects, by introducing these two unmasking cues in a unified paradigm and using functional magnetic resonance imaging. The results showed that neural activations associated with the unmasking effects of ASP and perceived separation partly overlapped in several brain areas: the left pars triangularis (TriIFG) and pars orbitalis of the inferior frontal gyrus, the left inferior parietal lobule, the left supramarginal gyrus, and the bilateral putamen, all of which are involved in sensorimotor integration and speech production. The activations of the left TriIFG were correlated with behavioral improvements caused by ASP and perceived separation. Meanwhile, ASP and perceived separation also enhanced the functional connectivity between the left IFG and brain areas related to the suppression of distractive speech signals: the anterior cingulate cortex and the left middle frontal gyrus, respectively. Therefore, these findings suggest that the motor representation of speech is important for both the unmasking effects of ASP and perceived separation and highlight the critical role of the left IFG in these unmasking effects in “cocktail party” environments.
Affiliation(s)
- Junxian Wang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Jing Chen
- Department of Machine Intelligence, Peking University, Beijing, China.,Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
- Xiaodong Yang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Lei Liu
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Chao Wu
- School of Nursing, Peking University, Beijing, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.,Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China.,Beijing Institute for Brain Disorders, Beijing, China
- Yanhong Wu
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.,Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China
15
Michaelis K, Miyakoshi M, Norato G, Medvedev AV, Turkeltaub PE. Motor engagement relates to accurate perception of phonemes and audiovisual words, but not auditory words. Commun Biol 2021; 4:108. [PMID: 33495548 PMCID: PMC7835217 DOI: 10.1038/s42003-020-01634-5]
Abstract
A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus. Michaelis et al. used extra-cranial EEG during a forced-choice identification task to investigate the role of the motor system in speech perception. Their findings suggest that left hemisphere dorsal stream motor areas are dynamically engaged during speech perception based on the properties of the stimulus.
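The dependent measure here, sensorimotor mu/beta suppression, is conventionally computed as a drop in band power during stimulus processing relative to a pre-stimulus baseline. A minimal sketch under assumed parameters (sampling rate, band limits, synthetic signals) follows; it is not the authors' EEG pipeline.

```python
# Hedged illustration of the dependent measure described above: sensorimotor
# mu/beta suppression, i.e., a drop in 8-30 Hz power during stimulus processing
# relative to a pre-stimulus baseline. Signals and parameters are synthetic.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(5)
t = np.arange(0, 1.0, 1 / fs)

def band_power(signal, low, high):
    freqs, psd = welch(signal, fs=fs, nperseg=128)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Baseline contains a strong 10 Hz mu rhythm; the stimulus epoch attenuates it
baseline = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
stimulus = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

mu_suppression = 10 * np.log10(band_power(stimulus, 8, 13) / band_power(baseline, 8, 13))
print(f"mu suppression: {mu_suppression:.1f} dB (negative = suppression)")
```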
Affiliation(s)
- Kelly Michaelis
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA.,Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute for Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA
- Makoto Miyakoshi
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, San Diego, CA, USA
- Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Andrei V Medvedev
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA.,Research Division, Medstar National Rehabilitation Hospital, Washington, DC, USA.
16
Vannuscorps G, Andres M, Carneiro SP, Rombaux E, Caramazza A. Typically Efficient Lipreading without Motor Simulation. J Cogn Neurosci 2021; 33:611-621. [PMID: 33416443 DOI: 10.1162/jocn_a_01666]
Abstract
All it takes is a face-to-face conversation in a noisy environment to realize that viewing a speaker's lip movements contributes to speech comprehension. What are the processes underlying the perception and interpretation of visual speech? Brain areas that control speech production are also recruited during lipreading. This finding raises the possibility that lipreading may be supported, at least to some extent, by a covert unconscious imitation of the observed speech movements in the observer's own speech motor system-a motor simulation. However, whether, and if so to what extent, motor simulation contributes to visual speech interpretation remains unclear. In two experiments, we found that several participants with congenital facial paralysis were as good at lipreading as the control population and performed these tasks in a way that is qualitatively similar to the controls despite severely reduced or even completely absent lip motor representations. Although it remains an open question whether this conclusion generalizes to other experimental conditions and to typically developed participants, these findings considerably narrow the space of hypothesis for a role of motor simulation in lipreading. Beyond its theoretical significance in the field of speech perception, this finding also calls for a re-examination of the more general hypothesis that motor simulation underlies action perception and interpretation developed in the frameworks of motor simulation and mirror neuron hypotheses.
17
Morett LM, Landi N, Irwin J, McPartland JC. N400 amplitude, latency, and variability reflect temporal integration of beat gesture and pitch accent during language processing. Brain Res 2020; 1747:147059. [PMID: 32818527 PMCID: PMC7493208 DOI: 10.1016/j.brainres.2020.147059]
Abstract
This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch-accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or the absence of beat gesture (no beat). Across trials, increased amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations.
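As a hedged illustration of the across-trial versus trial-by-trial N400 measures, the sketch below computes, for simulated single-trial ERPs, the mean amplitude and most-negative-peak latency in a 300-500 ms window and then summarizes their mean and variability across trials; the window, sampling rate, and waveforms are assumptions rather than the study's parameters.

```python
# Illustrative sketch only: across-trial vs. trial-by-trial N400 measures
# approximated as mean amplitude and peak latency in a 300-500 ms window,
# summarized by their mean and standard deviation across simulated trials.
import numpy as np

fs = 500                                     # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)         # epoch from -200 to 800 ms
rng = np.random.default_rng(6)

def simulate_trial(peak_latency=0.4, amplitude=-5.0):
    erp = amplitude * np.exp(-((times - peak_latency) ** 2) / (2 * 0.05 ** 2))
    return erp + rng.standard_normal(times.size)          # microvolts + noise

trials = np.array([simulate_trial(peak_latency=rng.normal(0.4, 0.03)) for _ in range(60)])

window = (times >= 0.3) & (times <= 0.5)
amp = trials[:, window].mean(axis=1)                       # per-trial mean amplitude
lat = times[window][trials[:, window].argmin(axis=1)]      # per-trial most-negative-peak latency

print(f"N400 amplitude: mean {amp.mean():.2f} uV, trial-by-trial SD {amp.std():.2f}")
print(f"N400 latency:   mean {lat.mean()*1000:.0f} ms, trial-by-trial SD {lat.std()*1000:.0f} ms")
```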
Affiliation(s)
- Nicole Landi
- Haskins Laboratories, University of Connecticut, United States
- Julia Irwin
- Haskins Laboratories, Southern Connecticut State University, United States
18
Graïc JM, Peruffo A, Corain L, Centelleghe C, Granato A, Zanellato E, Cozzi B. Asymmetry in the Cytoarchitecture of the Area 44 Homolog of the Brain of the Chimpanzee Pan troglodytes. Front Neuroanat 2020; 14:55. [PMID: 32973465 PMCID: PMC7471632 DOI: 10.3389/fnana.2020.00055]
Abstract
The evolution of the brain in apes and man followed a joint pathway stemming from common ancestors 5-10 million years ago. However, although apparently sharing similar organization and neurochemical properties, association areas of the isocortex remain one of the cornerstones of what sets humans aside from other primates. Brodmann's area 44, the area of Broca, is known for its involvement in speech, and thus indirectly is a key mark of human uniqueness. This latero-caudal part of the frontal lobe shows a marked functional asymmetry in humans, takes part in other complex functions, including learning and imitation, tool use, and music, and contains the mirror neuron system (MNS). Since the main features of the cytoarchitecture of Broca's area remain relatively constant in hominids, including in our closest relative, the chimpanzee Pan troglodytes, investigations on the finer structure, cellular organization, connectivity and eventual asymmetry of area 44 have a direct bearing on the understanding of the neural mechanisms at the base of our language. The semi-automated image analysis technology that we employed in the current study showed that the structure of the cortical layers of the chimpanzee contains elements of asymmetry that are discussed in relation to the corresponding human areas and the putative resulting disparity of function.
Affiliation(s)
- Jean-Marie Graïc
- Department of Comparative Biomedicine and Food Science, University of Padua, Padua, Italy
- Antonella Peruffo
- Department of Comparative Biomedicine and Food Science, University of Padua, Padua, Italy
- Livio Corain
- Department of Management and Engineering, University of Padua, Padua, Italy
- Cinzia Centelleghe
- Department of Comparative Biomedicine and Food Science, University of Padua, Padua, Italy
- Alberto Granato
- Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Emanuela Zanellato
- Department of Comparative Biomedicine and Food Science, University of Padua, Padua, Italy
- Bruno Cozzi
- Department of Comparative Biomedicine and Food Science, University of Padua, Padua, Italy
19
Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. Brain Lang 2020; 206:104812. [PMID: 32447050 PMCID: PMC7379161 DOI: 10.1016/j.bandl.2020.104812]
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
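The lesion-symptom mapping logic referred to above can be sketched in a few lines: at each voxel, compare the behavioral scores of patients whose lesions include that voxel with those of patients whose lesions spare it. The code below is a generic, hypothetical illustration with synthetic data, not the specific method or thresholds used in the study.

```python
# Hedged sketch of the general logic of voxel-based lesion-symptom mapping
# (not the study's implementation): at each voxel, compare scores of patients
# whose lesions include that voxel with those whose lesions spare it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_patients, n_voxels = 40, 300
lesions = rng.random((n_patients, n_voxels)) < 0.25           # binary lesion maps
score = rng.normal(10, 2, n_patients)                          # e.g., temporal acuity score
score -= 3 * lesions[:, 42]                                    # voxel 42 "causes" a deficit

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = score[lesions[:, v]], score[~lesions[:, v]]
    if lesioned.size > 4 and spared.size > 4:                  # minimum-overlap criterion
        t_map[v] = stats.ttest_ind(spared, lesioned, equal_var=False).statistic
print("peak voxel:", np.nanargmax(t_map), "t =", round(np.nanmax(t_map), 2))
```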
Affiliation(s)
- Kelly Michaelis
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Laura C Erickson
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Mackenzie E Fama
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
- Laura M Skipper-Kallal
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Shihui Xing
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Elizabeth H Lacey
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
- Zainab Anbari
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Peter E Turkeltaub
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA.
20
Tomasi D, Volkow ND. Network connectivity predicts language processing in healthy adults. Hum Brain Mapp 2020; 41:3696-3708. [PMID: 32449559 PMCID: PMC7416057 DOI: 10.1002/hbm.25042]
Abstract
Brain imaging has been used to predict language skills during development and neuropathology but its accuracy in predicting language performance in healthy adults has been poorly investigated. To address this shortcoming, we studied the ability to predict reading accuracy and single‐word comprehension scores from rest‐ and task‐based functional magnetic resonance imaging (fMRI) datasets of 424 healthy adults. Using connectome‐based predictive modeling, we identified functional brain networks with >400 edges that predicted language scores and were reproducible in independent data sets. To simplify these complex models we identified the overlapping edges derived from the three task‐fMRI sessions (language, working memory, and motor tasks), and found 12 edges for reading recognition and 11 edges for vocabulary comprehension that accounted for 20% of the variance of these scores, both in the training sample and in the independent sample. The overlapping edges predominantly emanated from language areas within the frontoparietal and default‐mode networks, with a strong precuneus prominence. These findings identify a small subset of edges that accounted for a significant fraction of the variance in language performance that might serve as neuromarkers for neuromodulation interventions to improve language performance or for presurgical planning to minimize language impairments.
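Connectome-based predictive modeling, as generally described in the literature, selects edges whose strength correlates with behavior in training data, collapses them into a single summary score per subject, and fits a linear model that is evaluated on held-out subjects. The sketch below illustrates that workflow with synthetic data; the selection threshold and sample sizes are arbitrary rather than those of the study.

```python
# A compact sketch of connectome-based predictive modeling (CPM) as generally
# described in the literature; thresholds, sizes, and data are placeholders,
# not the parameters used in the study above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subj, n_edges = 200, 500
edges = rng.standard_normal((n_subj, n_edges))                        # connectivity edges
behavior = edges[:, :12].sum(axis=1) + rng.standard_normal(n_subj)    # score driven by 12 edges

train, test = np.arange(0, 150), np.arange(150, n_subj)

# 1) Edge selection: correlate every edge with behavior in the training set
r = np.array([stats.pearsonr(edges[train, j], behavior[train])[0] for j in range(n_edges)])
selected = np.abs(r) > 0.2                                            # arbitrary threshold

# 2) Single summary feature per subject: signed sum of selected edge strengths
summary = edges[:, selected] @ np.sign(r[selected])

# 3) Fit a linear model on training subjects, evaluate on held-out subjects
slope, intercept = np.polyfit(summary[train], behavior[train], deg=1)
pred = slope * summary[test] + intercept
print("selected edges:", selected.sum())
print("prediction r:", round(stats.pearsonr(pred, behavior[test])[0], 2))
```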
Affiliation(s)
- Dardo Tomasi: National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, USA
- Nora D Volkow: National Institute on Alcohol Abuse and Alcoholism, Bethesda, Maryland, USA; National Institute on Drug Abuse, Bethesda, Maryland, USA

21
Nazli ŞB, Koçak OM, Kirkici B, Sevındık M, Kokurcan A. Investigation of the Processing of Noun and Verb Words with fMRI in Patients with Schizophrenia. ACTA ACUST UNITED AC 2020; 57:9-14. [PMID: 32110143 DOI: 10.29399/npa.23521]
Abstract
Introduction: Action naming is reported to be more impaired than object naming in patients with schizophrenia. The aim of this study was to understand the cortical mechanism underlying negative symptoms such as inactivity and a restricted behavioral repertoire, using functional MRI (fMRI) to determine whether action-related words are represented differently in the brains of patients with schizophrenia and healthy individuals. Our hypothesis was that the restricted repertoire of movement and behavior and the greater difficulty with "action" words relative to "object" words are linked through the same cortical mechanisms. If this hypothesis is correct, the failure to initiate action in patients with schizophrenia may reflect an impaired representation of the action (verb). Methods: An fMRI study was conducted with 12 patients with schizophrenia and 12 healthy individuals. Scanning was performed after the Positive and Negative Syndrome Scale (PANSS), the Calgary Depression Scale, and a hand preference scale were administered to the participants. During the sessions, a lexical decision task was used, presenting a total of 240 items (120 words - 60 verbs (action words) and 60 nouns (object words) - and 120 non-words). Results: For the group main effect, that is, the difference between noun and verb words in the schizophrenia group and noun and verb words in the healthy control group, activation of the anterior prefrontal cortex was lower in patients with schizophrenia than in healthy individuals. When brain areas distinguishing verb words in the schizophrenia group from both noun words in the schizophrenia group and noun and verb words in healthy individuals were examined, the inferior frontal gyrus pars triangularis (BA45) showed more activation in patients with schizophrenia than in healthy individuals, whereas, for the same contrast, the inferior frontal gyrus pars opercularis (BA44) and the left primary sensory area showed less activation in patients. There was no difference between patients with schizophrenia and healthy volunteers in the number of correctly identified words or in reaction time. Conclusion: Given the absence of group differences in correctly identified words and reaction time, and given BA44's role in the recognition and imitation of action as part of the mirror neuron system, the significant inverse correlation between the PANSS negative score and BA40 can be interpreted as an effort to compensate for inadequate BA44 activity through BA40.
Affiliation(s)
- Şerif Bora Nazli: Department of Psychiatry, Ankara Gülhane Training and Research Hospital, Ankara, Turkey
- Orhan Murat Koçak: Department of Psychiatry, Kırıkkale University School of Medicine, Kırıkkale, Turkey
- Bilal Kirkici: Department of Foreign Languages Education, Middle East Technical University, Ankara, Turkey
- Muhammet Sevındık: Department of Psychiatry, Kırıkkale University School of Medicine, Kırıkkale, Turkey
- Ahmet Kokurcan: Department of Psychiatry, Dışkapı Training and Research Hospital, Ankara, Turkey

22
Li Y, Seger C, Chen Q, Mo L. Left Inferior Frontal Gyrus Integrates Multisensory Information in Category Learning. Cereb Cortex 2020; 30:4410-4423. [DOI: 10.1093/cercor/bhaa029]
Abstract
Humans are able to categorize things they encounter in the world (e.g., a cat) by integrating multisensory information from the auditory and visual modalities with ease and speed. However, how the brain learns multisensory categories remains elusive. The present study used functional magnetic resonance imaging to investigate, for the first time, the neural mechanisms underpinning multisensory information-integration (II) category learning. A sensory-modality-general network, including the left insula, right inferior frontal gyrus (IFG), supplementary motor area, left precentral gyrus, bilateral parietal cortex, and right caudate and globus pallidus, was recruited for II categorization, regardless of whether the information came from a single modality or from multiple modalities. Putamen activity was higher in correct categorization than incorrect categorization. Critically, the left IFG and left body and tail of the caudate were activated in multisensory II categorization but not in unisensory II categorization, which suggests this network plays a specific role in integrating multisensory information during category learning. The present results extend our understanding of the role of the left IFG in multisensory processing from the linguistic domain to a broader role in audiovisual learning.
Affiliation(s)
- You Li: School of Psychology and Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, Guangdong, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong, China
- Carol Seger: School of Psychology and Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, Guangdong, China; Department of Psychology, Colorado State University, Fort Collins, CO 80521, USA
- Qi Chen: School of Psychology and Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, Guangdong, China
- Lei Mo: School of Psychology and Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, Guangdong, China

23
Salo KST, Mutanen TP, Vaalto SMI, Ilmoniemi RJ. EEG Artifact Removal in TMS Studies of Cortical Speech Areas. Brain Topogr 2020; 33:1-9. [PMID: 31290050 PMCID: PMC6943412 DOI: 10.1007/s10548-019-00724-w]
Abstract
The combination of transcranial magnetic stimulation (TMS) and electroencephalography (EEG) is commonly applied for studying the effective connectivity of neuronal circuits. The stimulation excites neurons, and the resulting TMS-evoked potentials (TEPs) are recorded with EEG. A serious obstacle in this method is the generation of large artifacts from scalp muscles, especially when frontolateral and temporoparietal areas, such as speech areas, are stimulated. Here, TMS-EEG data were processed with the signal-space projection and source-informed reconstruction (SSP-SIR) artifact-removal method to suppress these artifacts. SSP-SIR suppresses muscle artifacts based on the difference in frequency content between neuronal signals and muscle activity. The effectiveness of SSP-SIR in rejecting muscle artifacts and the degree of excessive attenuation of brain EEG signals were investigated by comparing the processed versions of the recorded TMS-EEG data with simulated data; the calculated individual lead-field matrices, which describe how brain signals spread from the cortex to the sensors, were used as the simulated data. We conclude that SSP-SIR was effective in suppressing artifacts even when frontolateral and temporoparietal cortical sites were stimulated, but it may also have suppressed brain signals near the stimulation site. Effective connectivity originating from speech-related areas can therefore still be studied when speech areas are stimulated, at least in the contralateral hemisphere, where the signals were attenuated much less.
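For readers unfamiliar with projection-based artifact removal, the following schematic sketch illustrates the signal-space projection idea that SSP-SIR builds on, assuming the muscle subspace is estimated from high-pass-filtered (EMG-dominated) activity; it is a simplified illustration, not the published SSP-SIR implementation, and it omits the source-informed reconstruction step.

```python
# Schematic illustration of signal-space projection (SSP) for muscle-artifact
# suppression: estimate an artifact subspace from high-frequency (muscle-dominated)
# activity and project it out of the TMS-evoked EEG. Not the published SSP-SIR code.
import numpy as np
from scipy.signal import butter, filtfilt

def ssp_muscle_clean(data, sfreq, n_components=3, hp_cutoff=100.0):
    """data: (n_channels, n_samples) TMS-evoked EEG epoch."""
    # 1. Isolate high-frequency content, where scalp-muscle EMG dominates cortical EEG.
    b, a = butter(4, hp_cutoff / (sfreq / 2), btype="high")
    high = filtfilt(b, a, data, axis=1)
    # 2. Estimate the artifact subspace from the principal components of that content.
    u, s, _ = np.linalg.svd(high, full_matrices=False)
    artifact_basis = u[:, :n_components]            # (n_channels, n_components)
    # 3. Build the out-projection operator and apply it to the original data.
    projector = np.eye(data.shape[0]) - artifact_basis @ artifact_basis.T
    return projector @ data                         # muscle-suppressed EEG

# The SIR step in SSP-SIR would additionally use the individual lead field to
# re-estimate the signal components attenuated by this projection.
```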
Affiliation(s)
- Karita S.-T. Salo: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, P.O. Box 12200, 00076 AALTO Espoo, Finland; BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, P.O. Box 340, 00029 HUS Helsinki, Finland
- Tuomas P. Mutanen: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, UK
- Selja M. I. Vaalto: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, P.O. Box 12200, 00076 AALTO Espoo, Finland; BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, P.O. Box 340, 00029 HUS Helsinki, Finland; Department of Clinical Neurophysiology, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, P.O. Box 340, 00029 HUS Helsinki, Finland
- Risto J. Ilmoniemi: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, P.O. Box 12200, 00076 AALTO Espoo, Finland; BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, P.O. Box 340, 00029 HUS Helsinki, Finland

24
Jouravlev O, Zheng D, Balewski Z, Le Arnz Pongos A, Levan Z, Goldin-Meadow S, Fedorenko E. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019; 132:107132. [PMID: 31276684 PMCID: PMC6708375 DOI: 10.1016/j.neuropsychologia.2019.107132]
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low - at or slightly above the fixation baseline - when silent videos were processed (again, regardless of whether they contained gestures or grooming movements). Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
Affiliation(s)
- Olessia Jouravlev: Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Carleton University, Ottawa, ON K1S 5B6, Canada
- David Zheng: Princeton University, Princeton, NJ 08544, USA
- Zuzanna Balewski: Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Zena Levan: University of Chicago, Chicago, IL 60637, USA
- Evelina Fedorenko: Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute for Brain Research, Cambridge, MA 02139, USA; Massachusetts General Hospital, Boston, MA 02114, USA

25
Debreslioska S, van de Weijer J, Gullberg M. Addressees Are Sensitive to the Presence of Gesture When Tracking a Single Referent in Discourse. Front Psychol 2019; 10:1775. [PMID: 31456709 PMCID: PMC6700288 DOI: 10.3389/fpsyg.2019.01775]
Abstract
Production studies show that anaphoric reference is bimodal. Speakers can introduce a referent in speech by also using a localizing gesture, assigning a specific locus in space to it. Referring back to that referent, speakers then often accompany a spoken anaphor with a localizing anaphoric gesture (i.e., indicating the same locus). Speakers thus create visual anaphoricity in parallel to the anaphoric process in speech. In the current perception study, we examine whether addressees are sensitive to localizing anaphoric gestures and specifically to the (mis)match between recurrent use of space and spoken anaphora. The results of two reaction time experiments show that, when a single referent is gesturally tracked, addressees are sensitive to the presence of localizing gestures, but not to their spatial congruence. Addressees thus seem to integrate gestural information when processing bimodal anaphora, but their use of locational information in gestures is not obligatory in every discourse context.
Affiliation(s)
- Sandra Debreslioska: Centre for Languages and Literature, Lund University, Lund, Sweden
- Joost van de Weijer: Centre for Languages and Literature, Lund University, Lund, Sweden; Lund University Humanities Lab, Lund University, Lund, Sweden
- Marianne Gullberg: Centre for Languages and Literature, Lund University, Lund, Sweden; Lund University Humanities Lab, Lund University, Lund, Sweden

26
The facilitative effect of gestures on the neural processing of semantic complexity in a continuous narrative. Neuroimage 2019; 195:38-47. [DOI: 10.1016/j.neuroimage.2019.03.054]
27
Kamavuako EN, Sheikh UA, Gilani SO, Jamil M, Niazi IK. Classification of Overt and Covert Speech for Near-Infrared Spectroscopy-Based Brain Computer Interface. SENSORS 2018; 18:s18092989. [PMID: 30205476 PMCID: PMC6164385 DOI: 10.3390/s18092989]
Abstract
People suffering from neuromuscular disorders such as locked-in syndrome (LIS) are left in a paralyzed state with preserved awareness and cognition. In this study, it was hypothesized that changes in local hemodynamic activity, due to the activation of Broca's area during overt/covert speech, can be harnessed to create an intuitive Brain-Computer Interface based on Near-Infrared Spectroscopy (NIRS). A 12-channel square template was used to cover the inferior frontal gyrus, and changes in hemoglobin concentration corresponding to six overtly (aloud) and six covertly (silently) spoken words were collected from eight healthy participants. An unsupervised feature extraction algorithm was implemented with an optimized support vector machine for classification. For all participants, when considering overt and covert classes regardless of words, a classification accuracy of 92.88 ± 18.49% was achieved with oxy-hemoglobin (O2Hb) and 95.14 ± 5.39% with deoxy-hemoglobin (HHb) as the chromophore. For a six-active-class problem of overtly spoken words, 88.19 ± 7.12% accuracy was achieved for O2Hb and 78.82 ± 15.76% for HHb. Similarly, for a six-active-class classification of covertly spoken words, 79.17 ± 14.30% accuracy was achieved with O2Hb and 86.81 ± 9.90% with HHb as the absorber. These results indicate that a control paradigm based on covert speech can be reliably implemented into future Brain-Computer Interfaces (BCIs) based on NIRS.
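A minimal scikit-learn pipeline of the kind described here (feature scaling plus a support vector machine evaluated with cross-validation) might look like the sketch below; the synthetic features and dimensions are placeholders standing in for the study's unsupervised feature-extraction output.

```python
# Minimal sketch of an SVM pipeline for classifying fNIRS trials (e.g., mean
# HbO/HbR changes per channel). Feature extraction here is a random placeholder.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 24))      # trials x features (12 channels x 2 chromophores)
y = rng.integers(0, 2, size=96)    # 0 = overt speech trial, 1 = covert speech trial

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)          # 5-fold cross-validated accuracy
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```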
Affiliation(s)
- Ernest Nlandu Kamavuako: Centre for Robotics Research, Department of Informatics, King's College London, London WC2B 4BG, UK
- Usman Ayub Sheikh: Basque Center on Cognition, Brain and Language, 20009 Donostia, Spain; Department of Robotics and Artificial Intelligence, National University of Sciences and Technology, Islamabad 24090, Pakistan
- Syed Omer Gilani: Department of Robotics and Artificial Intelligence, National University of Sciences and Technology, Islamabad 24090, Pakistan
- Mohsin Jamil: Department of Robotics and Artificial Intelligence, National University of Sciences and Technology, Islamabad 24090, Pakistan; Department of Electrical Engineering, Faculty of Engineering, Islamic University Medina, Al Jamiah 42351, Saudi Arabia
- Imran Khan Niazi: Center for Chiropractic Research, New Zealand College of Chiropractic, Auckland 1010, New Zealand; SMI, Department of Health Science and Technology, Aalborg University, 9100 Aalborg, Denmark; Health and Rehabilitation Research Institute, AUT University, Auckland 1010, New Zealand

28
Grechuta K, Bellaster BR, Munne RE, Bernal TU, Hervas BM, Segundo RS, Verschure PFMJ. The effects of silent visuomotor cueing on word retrieval in Broca's aphasics: A pilot study. IEEE Int Conf Rehabil Robot 2018; 2017:193-199. [PMID: 28813817 DOI: 10.1109/icorr.2017.8009245]
Abstract
About a quarter of stroke patients worldwide suffer serious language disorders such as aphasia. The most common symptoms of Broca's aphasia are naming deficits, which strongly affect verbal communication and the quality of life of aphasic patients. To remediate disturbances in word retrieval, several cueing methods (i.e., phonemic and semantic) have been established to improve lexical access and have become effective language rehabilitation techniques. Based on recent evidence from action-perception theories, which postulate that neural circuits for speech perception and articulation are tightly coupled, in the present work we propose and investigate an alternative type of cueing using silent articulation-related visual stimuli. We hypothesize that providing patients with primes in the form of silent videos showing lip movements representative of the correct pronunciation of target words will result in faster word retrieval than when no such cue is provided. To test our prediction, we conducted a longitudinal, virtual reality-based clinical trial with four post-stroke Broca's aphasia patients and compared interaction times between the two conditions over the eight weeks of therapy. Our results suggest that silent visuomotor cues indeed facilitate word retrieval and verbal execution, and might be beneficial for lexical relearning in chronic Broca's aphasia.
29
Geary DC. Evolutionary perspective on sex differences in the expression of neurological diseases. Prog Neurobiol 2018; 176:33-53. [PMID: 29890214 DOI: 10.1016/j.pneurobio.2018.06.001]
Abstract
Sex-specific brain and cognitive deficits emerge with malnutrition, some infectious and neurodegenerative diseases, and often with prenatal or postnatal toxin exposure. These deficits are described in disparate literatures and are generally not linked to one another. Sexual selection may provide a unifying framework that integrates our understanding of these deficits and provides direction for future studies of sex-specific vulnerabilities. Sexually selected traits are those that have evolved to facilitate competition for reproductive resources or that influence mate choices, and are often larger and more complex than other traits. Critically, malnutrition, disease, chronic social stress, and exposure to man-made toxins compromise the development and expression of sexually selected traits more strongly than that of other traits. The fundamental mechanism underlying vulnerability might be the efficiency of mitochondrial energy capture and control of oxidative stress that in turn links these traits to current advances in neuroenergetics, stress endocrinology, and toxicology. The key idea is that the elaboration of these cognitive abilities, with more underlying gray matter or more extensive inter-modular white matter connections, makes them particularly sensitive to disruptions in mitochondrial functioning and oxidative stress. A framework of human sexually selected cognitive abilities and underlying brain systems is proposed and used to organize what is currently known about sex-specific vulnerabilities.
Affiliation(s)
- David C Geary: Department of Psychological Sciences, Interdisciplinary Neuroscience, University of Missouri, Columbia, MO 65211-2500, United States

30
Zinn MA, Zinn ML, Valencia I, Jason LA, Montoya JG. Cortical hypoactivation during resting EEG suggests central nervous system pathology in patients with chronic fatigue syndrome. Biol Psychol 2018; 136:87-99. [PMID: 29802861 DOI: 10.1016/j.biopsycho.2018.05.016]
Abstract
We investigated central fatigue in 50 patients with chronic fatigue syndrome (CFS) and 50 matched healthy controls (HC). Resting state EEG was collected from 19 scalp locations during a 3 min, eyes-closed condition. Current densities were localized using exact low-resolution electromagnetic tomography (eLORETA). The Multidimensional Fatigue Inventory (MFI-20) and the Fatigue Severity Scale (FSS) were administered to all participants. Independent t-tests and linear regression analyses were used to evaluate group differences in current densities, followed by statistical non-parametric mapping (SnPM) correction procedures. Significant differences were found in the delta (1-3 Hz) and beta-2 (19-21 Hz) frequency bands. Delta sources were found predominately in the frontal lobe, while beta-2 sources were found in the medial and superior parietal lobe. Left-lateralized, frontal delta sources were associated with a clinical reduction in motivation. The implications of abnormal cortical sources in patients with CFS are discussed.
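The statistical non-parametric mapping (SnPM) correction mentioned above is based on a max-statistic permutation test; the sketch below illustrates the general procedure under the assumption of a simple two-group, voxel-wise comparison (array names and sizes are illustrative).

```python
# Sketch of a max-statistic permutation test (SnPM-style) for comparing current
# density between two groups while controlling family-wise error across voxels.
import numpy as np

def permutation_maxT(group_a, group_b, n_perm=5000, seed=0):
    """group_a: (n_a, n_voxels), group_b: (n_b, n_voxels) current-density values."""
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    n_a = group_a.shape[0]

    def tmap(x, y):  # two-sample t statistic per voxel
        va, vb = x.var(0, ddof=1), y.var(0, ddof=1)
        return (x.mean(0) - y.mean(0)) / np.sqrt(va / len(x) + vb / len(y))

    observed = tmap(group_a, group_b)
    max_null = np.empty(n_perm)
    for i in range(n_perm):                       # shuffle group labels
        idx = rng.permutation(len(data))
        max_null[i] = np.abs(tmap(data[idx[:n_a]], data[idx[n_a:]])).max()
    # corrected p-value per voxel: fraction of permutation maxima exceeding it
    p_corrected = (max_null[None, :] >= np.abs(observed)[:, None]).mean(1)
    return observed, p_corrected
```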
Affiliation(s)
- M A Zinn: Department of Psychology, Center for Community Research, DePaul University, 990 West Fullerton Ave., Suite 3100, Chicago, IL 60614, USA
- M L Zinn: Department of Psychology, Center for Community Research, DePaul University, 990 West Fullerton Ave., Suite 3100, Chicago, IL 60614, USA
- I Valencia: Department of Medicine, Division of Infectious Diseases, Stanford University School of Medicine, Stanford, CA, USA
- L A Jason: Department of Psychology, Center for Community Research, DePaul University, 990 West Fullerton Ave., Suite 3100, Chicago, IL 60614, USA
- J G Montoya: Department of Medicine, Division of Infectious Diseases, Stanford University School of Medicine, Stanford, CA, USA

31
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
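Multisensory "gain" of the kind discussed here is often quantified as the audiovisual score's benefit relative to auditory-only performance, either raw or normalized by the room left for improvement; the metric choice in the sketch below is illustrative rather than taken from the review.

```python
# Sketch: two common ways to quantify audiovisual benefit from proportion-correct
# scores. The metrics shown are generic illustrations, not the review's analysis.
def absolute_gain(av: float, a_only: float) -> float:
    """Raw audiovisual benefit over the auditory-only score."""
    return av - a_only

def normalized_gain(av: float, a_only: float) -> float:
    """Benefit scaled by the room left for improvement (visual enhancement)."""
    return (av - a_only) / (1.0 - a_only) if a_only < 1.0 else 0.0

# Example: a CI user scoring 0.45 auditory-only and 0.75 audiovisually
print(f"{absolute_gain(0.75, 0.45):.2f}")    # 0.30 raw benefit
print(f"{normalized_gain(0.75, 0.45):.2f}")  # 0.55 of the possible improvement realized
```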
Affiliation(s)
- Ryan A Stevenson: 1 Department of Psychology, University of Western Ontario, London, Ontario, Canada; 2 Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; 3 Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, MD; 4 Vanderbilt Brain Institute, Nashville, Tennessee; 5 Vanderbilt Kennedy Center, Nashville, Tennessee; 6 Department of Psychology, Vanderbilt University, Nashville, Tennessee; 7 Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and 8 Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee

32
Neurolinguistic processing when the brain matures without language. Cortex 2018; 99:390-403. [PMID: 29406150 DOI: 10.1016/j.cortex.2017.12.011]
Abstract
The extent to which development of the brain language system is modulated by the temporal onset of linguistic experience relative to post-natal brain maturation is unknown. This crucial question cannot be investigated with the hearing population because spoken language is ubiquitous in the environment of newborns. Deafness blocks infants' language experience in a spoken form, and in a signed form when it is absent from the environment. Using anatomically constrained magnetoencephalography (aMEG), we neuroimaged lexico-semantic processing in a deaf adult whose linguistic experience began in young adulthood. Despite using language for 30 years after initially learning it, this individual exhibited limited neural response in the perisylvian language areas to signed words during the 300-400 ms temporal window, suggesting that the brain language system requires linguistic experience during brain growth to achieve functionality. The present case primarily exhibited neural activations in response to signed words in dorsolateral superior parietal and occipital areas bilaterally, replicating the neural patterns exhibited in two previous case studies of individuals who matured without language until early adolescence (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014). The dorsal pathway appears to assume the task of processing words when the brain matures without experiencing the form-meaning network of a language.
33
Zettin M, Leopizzi M, Galetto V. How does language change after an intensive treatment on imitation? Neuropsychol Rehabil 2018; 29:1332-1358. [DOI: 10.1080/09602011.2017.1406861]
Affiliation(s)
- Marina Zettin: Department of Psychology, Centro Puzzle, Torino, Italy; Brain Imaging Group, University of Turin, Torino, Italy
- Valentina Galetto: Department of Psychology, Centro Puzzle, Torino, Italy; Brain Imaging Group, University of Turin, Torino, Italy

34
Wolf D, Rekittke LM, Mittelberg I, Klasen M, Mathiak K. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network. Front Hum Neurosci 2017; 11:573. [PMID: 29249945 PMCID: PMC5714878 DOI: 10.3389/fnhum.2017.00573]
Abstract
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language), which rely on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that the subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left-hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without an explicit stimulus model (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Because such tasks modulate ISC in the contributing neural structures, we examined how ISC in language networks changed with task demands. Indeed, compared with the other tasks, the conventionality task significantly increased the covariance of the button-press time series and neuronal synchronization in the left IFG. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions, with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to its role in spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
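Intersubject covariance/correlation of the sort used here is typically computed with a leave-one-out scheme; the following generic sketch (not the authors' pipeline) shows the idea for a single region-of-interest time series per subject.

```python
# Sketch of intersubject correlation (ISC): correlate each subject's ROI time
# series with the average of all other subjects' time series, then average.
import numpy as np

def isc(timeseries):
    """timeseries: (n_subjects, n_timepoints) ROI signal during the same videos."""
    ts = (timeseries - timeseries.mean(1, keepdims=True)) / timeseries.std(1, keepdims=True)
    n = len(ts)
    r = []
    for s in range(n):                       # leave-one-out: subject vs. group mean
        others = np.delete(ts, s, axis=0).mean(0)
        r.append(np.corrcoef(ts[s], others)[0, 1])
    return float(np.mean(r))

# Example with synthetic data sharing a common stimulus-driven component
rng = np.random.default_rng(1)
shared = rng.normal(size=300)
subjects = shared + rng.normal(scale=1.5, size=(20, 300))
print(f"ISC = {isc(subjects):.2f}")
```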
Affiliation(s)
- Dhana Wolf: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Linn-Marlen Rekittke: Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Irene Mittelberg: Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Martin Klasen: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
- Klaus Mathiak: Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany

35
Ross LA, Del Bene VA, Molholm S, Woo YJ, Andrade GN, Abrahams BS, Foxe JJ. Common variation in the autism risk gene CNTNAP2, brain structural connectivity and multisensory speech integration. BRAIN AND LANGUAGE 2017; 174:50-60. [PMID: 28738218 DOI: 10.1016/j.bandl.2017.07.005]
Abstract
Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals.
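The genotype → white-matter → behaviour mediation reported here is commonly tested with a regression-based indirect effect and a bootstrap confidence interval; the sketch below illustrates that generic procedure under assumed variable names (it is not the authors' analysis code).

```python
# Sketch of a regression-based mediation analysis: does white-matter FA mediate
# the genotype effect on audiovisual (AV) speech benefit? The indirect effect is
# a*b with a bootstrap confidence interval. Variable names are illustrative.
import numpy as np

def ols_coef(design, outcome, col):
    """Least-squares coefficient for one column of the design matrix."""
    return np.linalg.lstsq(design, outcome, rcond=None)[0][col]

def indirect_effect(x, m, y):
    ones = np.ones_like(x, dtype=float)
    a = ols_coef(np.column_stack([ones, x]), m, 1)       # genotype -> FA
    b = ols_coef(np.column_stack([ones, x, m]), y, 2)    # FA -> AV benefit, controlling genotype
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))            # resample subjects with replacement
        est[i] = indirect_effect(x[idx], m[idx], y[idx])
    return np.percentile(est, [2.5, 97.5])               # "significant" if CI excludes zero
```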
Affiliation(s)
- Lars A Ross: The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Victor A Del Bene: The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Ferkauf Graduate School of Psychology, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Sophie Molholm: The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Young Jae Woo: Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Gizely N Andrade: The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Brett S Abrahams: Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- John J Foxe: The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA

36
Smitha KA, Akhil Raja K, Arun KM, Rajesh PG, Thomas B, Kapilamoorthy TR, Kesavadas C. Resting state fMRI: A review on methods in resting state connectivity analysis and resting state networks. Neuroradiol J 2017; 30:305-317. [PMID: 28353416 DOI: 10.1177/1971400917697342]
Abstract
Curiosity about what happens in the brain has been with us since the beginning of humankind. Functional magnetic resonance imaging is a prominent tool that allows the non-invasive examination, localisation, and lateralisation of brain functions such as language and memory. In recent years, there has been an apparent shift in the focus of neuroscience research towards studies dealing with the brain at 'resting state'. Here the spotlight is on the intrinsic activity within the brain, in the absence of any sensory or cognitive stimulus. Analyses of functional brain connectivity in the resting state have revealed different resting state networks, which depict specific functions and varied spatial topology. Different statistical methods have been introduced to study resting state functional magnetic resonance imaging connectivity, yet they produce largely consistent results. In this article, we introduce the concept of resting state functional magnetic resonance imaging in detail, discuss the three most widely used methods of analysis, and describe a few of the resting state networks, covering the brain regions involved, the associated cognitive functions, and the clinical applications of resting state functional magnetic resonance imaging. This review aims to highlight the utility and importance of studying resting state functional magnetic resonance imaging connectivity, underlining its complementary nature to task-based functional magnetic resonance imaging.
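Of the analysis approaches such a review covers, seed-based correlation is the simplest; the following sketch shows how a seed time course is correlated with every voxel to produce a connectivity map (mask, dimensions, and data are illustrative).

```python
# Sketch of seed-based resting-state functional connectivity: correlate a seed
# region's time course with every voxel's time course to form a connectivity map.
import numpy as np

def seed_connectivity(voxel_ts, seed_ts):
    """voxel_ts: (n_voxels, n_timepoints); seed_ts: (n_timepoints,) mean seed signal."""
    v = voxel_ts - voxel_ts.mean(1, keepdims=True)
    s = seed_ts - seed_ts.mean()
    r = (v @ s) / (np.linalg.norm(v, axis=1) * np.linalg.norm(s) + 1e-12)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z for group statistics

# Example: 5000 voxels, 200 volumes; seed = mean signal within an illustrative mask
rng = np.random.default_rng(2)
data = rng.normal(size=(5000, 200))
seed_mask = np.zeros(5000, dtype=bool)
seed_mask[:50] = True                                     # stand-in for, e.g., a PCC mask
zmap = seed_connectivity(data, data[seed_mask].mean(0))
```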
Affiliation(s)
- K A Smitha: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India
- K Akhil Raja: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India
- K M Arun: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India
- P G Rajesh: Department of Neurology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, India
- Bejoy Thomas: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India
- T R Kapilamoorthy: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India
- Chandrasekharan Kesavadas: Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Science and Technology, India

37
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. BRAIN AND LANGUAGE 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text-mining method to describe the relative contributions of nodes in distributed brain networks. Supporting the qualitative review, the results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, in covertly and overtly producing speech, and in the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. The results are inconsistent with motor-only and acoustic-only models of speech perception, and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, they are more consistent with complex network models in which multiple speech-production-related networks and subnetworks dynamically self-organize to constrain the interpretation of indeterminate acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper: Experimental Psychology, University College London, United Kingdom
- Joseph T Devlin: Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti: Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom

38
Peeters D, Snijders TM, Hagoort P, Özyürek A. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia 2017; 95:21-29. [DOI: 10.1016/j.neuropsychologia.2016.12.004]
39
Snow PJ. The Structural and Functional Organization of Cognition. Front Hum Neurosci 2016; 10:501. [PMID: 27799901 PMCID: PMC5065967 DOI: 10.3389/fnhum.2016.00501]
Abstract
This article proposes that what have been historically and contemporarily defined as different domains of human cognition are each served by one of four functionally and structurally distinct areas of the prefrontal cortex (PFC). Their contributions to human intelligence are as follows: (a) BA9 enables our emotional intelligence, engaging the psychosocial domain; (b) BA47 enables our practical intelligence, engaging the material domain; (c) BA46 (or BA46-9/46) enables our abstract intelligence, engaging the hypothetical domain; and (d) BA10 enables our temporal intelligence, engaging in planning within any of the other three domains. Given their unique contributions to human cognition, it is proposed that these areas be called the social (BA9), material (BA47), abstract (BA46-9/46), and temporal (BA10) mind. The evidence that BA47 participates strongly in verbal and gestural communication suggests that language evolved primarily as a consequence of the extreme selective pressure for practicality; an observation supported by the functional connectivity between BA47 and orbital areas that negatively reinforce lying. It is further proposed that the abstract mind (BA46-9/46) is the primary seat of metacognition, charged with creating adaptive behavioral strategies by generating higher-order concepts (hypotheses) from lower-order concepts originating in the other three domains of cognition.
Affiliation(s)
- Peter J Snow: School of Medical Science, Griffith University Gold Coast, QLD, Australia

40
Redcay E, Velnoskey KR, Rowe ML. Perceived communicative intent in gesture and language modulates the superior temporal sulcus. Hum Brain Mapp 2016; 37:3444-61. [PMID: 27238550 PMCID: PMC6867447 DOI: 10.1002/hbm.23251]
Abstract
Behavioral evidence and theory suggest that gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates that both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions at the individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while they viewed videos of an experimenter producing communicative, Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG, (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region-of-interest analyses identified neural activation shared between gesture processing (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third-person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences versus Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of the stimuli revealed sensitivity of the left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent through gesture and language.
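A conjunction analysis like the one described here reduces, at its simplest, to intersecting two thresholded statistical maps; the toy sketch below illustrates that operation with made-up t-maps and an arbitrary threshold.

```python
# Toy sketch of a conjunction analysis: a vertex/voxel counts as shared if it
# survives the statistical threshold in both contrasts (e.g., gesture and language).
import numpy as np

def conjunction(tmap_a, tmap_b, t_thresh=3.1):
    """tmap_a, tmap_b: t-statistic maps of equal shape from two contrasts."""
    return (tmap_a > t_thresh) & (tmap_b > t_thresh)   # boolean overlap mask

rng = np.random.default_rng(3)
shared_signal = np.zeros(10000)
shared_signal[:200] = 4.0                              # vertices truly active in both contrasts
gesture_vs_control = shared_signal + rng.normal(size=10000)
sentences_vs_jabberwocky = shared_signal + rng.normal(size=10000)
overlap = conjunction(gesture_vs_control, sentences_vs_jabberwocky)
print(f"{overlap.sum()} vertices survive both contrasts")
```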
Affiliation(s)
- Elizabeth Redcay: Department of Psychology, University of Maryland, College Park, Maryland
- Meredith L. Rowe: Graduate School of Education, Harvard University, Cambridge, Massachusetts

41
Lansing AE, Virk A, Notestine R, Plante WY, Fennema-Notestine C. Cumulative trauma, adversity and grief symptoms associated with fronto-temporal regions in life-course persistent delinquent boys. Psychiatry Res 2016; 254:92-102. [PMID: 27388804 PMCID: PMC4992608 DOI: 10.1016/j.pscychresns.2016.06.007]
Abstract
Delinquent youth have substantial trauma exposure, with life-course persistent delinquents [LCPD] demonstrating notably elevated cross-diagnostic psychopathology and cognitive deficits. Because adolescents remain in the midst of brain and neurocognitive development, tailored interventions are key to improving functional outcomes. This structural magnetic resonance imaging study compared neuroanatomical profiles of 23 LCPD and 20 matched control adolescent boys. LCPD youth had smaller overall gray matter and left hippocampal volumes, alongside less cortical surface area and folding within the left pars opercularis and supramarginal cortex. LCPD youth had more adversity-related exposures, and their higher Cumulative Trauma, Adversity and Grief [C-TAG] symptoms were associated with less surface area and folding in the pars opercularis and lingual gyrus. Neuroanatomical differences between LCPD and control youth overlap with data from both the maltreatment and antisocial literatures. The affected left frontal regions also share connections to language- and executive-related functions, aligning well with LCPD youths' cognitive and behavioral difficulties. These data also dovetail with research suggesting the possibility of neurodevelopmental delays or disruptions related to cumulative adversity burden. Thus, concurrent treatment of LCPD youths' C-TAG symptoms and cognitive deficits, which have overlapping neuroanatomical bases, may be most effective in improving outcomes and optimizing neurodevelopmental trajectories.
Affiliation(s)
- Amy E Lansing: Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA; Department of Sociology, San Diego State University, San Diego, CA, USA
- Agam Virk: Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA
- Randy Notestine: Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA
- Wendy Y Plante: Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA; Department of Sociology, San Diego State University, San Diego, CA, USA
- Christine Fennema-Notestine: Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA; Department of Radiology, University of California, San Diego, La Jolla, CA, USA

42
Tye-Murray N, Spehar B, Myerson J, Hale S, Sommers M. Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychol Aging 2016; 31:380-9. [PMID: 27294718 PMCID: PMC4910521 DOI: 10.1037/pag0000094]
Abstract
In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition.
Affiliation(s)
- Brent Spehar: Washington University in St Louis School of Medicine

43
Biau E, Morís Fernández L, Holle H, Avila C, Soto-Faraco S. Hand gestures as visual prosody: BOLD responses to audio–visual alignment are modulated by the communicative nature of the stimuli. Neuroimage 2016; 132:129-137. [DOI: 10.1016/j.neuroimage.2016.02.018]
44
Katz WF, Mehta S. Visual Feedback of Tongue Movement for Novel Speech Sound Learning. Front Hum Neurosci 2015; 9:612. [PMID: 26635571 PMCID: PMC4652268 DOI: 10.3389/fnhum.2015.00612]
Abstract
Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.
Affiliation(s)
- William F Katz: Speech Production Lab, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, TX, USA
- Sonya Mehta: Speech Production Lab, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, TX, USA

45
Di Pastena A, Schiaratura LT, Askevis-Leherpeux F. Joindre le geste à la parole : les liens entre la parole et les gestes co-verbaux [Linking gesture to speech: the links between speech and co-speech gestures]. ANNEE PSYCHOLOGIQUE 2015. [DOI: 10.3917/anpsy.153.0463]
46
The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies. Neurosci Biobehav Rev 2015; 57:88-104. [DOI: 10.1016/j.neubiorev.2015.08.006]
47
Joindre le geste à la parole : les liens entre la parole et les gestes co-verbaux [Linking gesture to speech: the links between speech and co-speech gestures]. ANNEE PSYCHOLOGIQUE 2015. [DOI: 10.4074/s0003503315003061]
50
Secora K, Emmorey K. The Action-Sentence Compatibility Effect in ASL: the role of semantics vs. perception. LANGUAGE AND COGNITION 2015; 7:305-318. [PMID: 26052352 PMCID: PMC4455545 DOI: 10.1017/langcog.2014.40]
Abstract
Embodied theories of cognition propose that humans use sensorimotor systems in processing language. The Action-Sentence Compatibility Effect (ACE) refers to the finding that motor responses are facilitated after comprehending sentences that imply movement in the same direction. In sign languages there is a potential conflict between sensorimotor systems and linguistic semantics: movement away from the signer is perceived as motion toward the comprehender. We examined whether perceptual processing of sign movement or verb semantics modulate the ACE. Deaf ASL signers performed a semantic judgment task while viewing signed sentences expressing toward or away motion. We found a significant congruency effect relative to the verb's semantics rather than to the perceived motion. This result indicates that (a) the motor system is involved in the comprehension of a visual-manual language, and (b) motor simulations for sign language are modulated by verb semantics rather than by the perceived visual motion of the hands.
Affiliation(s)
- Kristen Secora: San Diego State University, and University of California San Diego