1
Johnstone KL, Blades M, Martin C. Making memories: The gestural misinformation effect in children aged 11-16-years-old with intellectual/developmental difficulties. Research in Developmental Disabilities 2024; 154:104828. PMID: 39298997. DOI: 10.1016/j.ridd.2024.104828.
Abstract
BACKGROUND In 2016, global records documented around 1 billion cases of child abuse, with higher rates among children with Intellectual and Developmental Disabilities (IDD), and most recorded offences did not proceed to court. Accurate eyewitness testimony is vital for the justice system, yet while children with IDD are known to be influenced by verbal misinformation, the effect of gestures on their testimony is still unknown. AIMS The present study assessed the extent to which gestures can mislead children with IDD, alongside comparisons with prior research in typically developing (TD) children. METHOD A sample of children with moderate IDD aged 11-16 years (n = 21, M = 12.95 years) was recruited from a UK school and compared with TD 5-6-year-olds (n = 31, M = 5.77 years) and 7-8-year-olds (n = 32, M = 7.66 years) from previously published research. After watching a video, participants underwent an interview containing 12 questions, some of which were accompanied by suggestive gestures. OUTCOMES AND IMPLICATIONS In children with IDD, gesture observation significantly influenced the responses given, with 18 of 21 children being misled at least once. Comparisons with TD children indicated no difference in suggestibility. This study is the first to examine how leading gestural information affects children with IDD, broadening previous research to a more representative sample for the justice system. Discussion centres on implications for police interview guidelines.
2
Cacciante L, Pregnolato G, Salvalaggio S, Federico S, Kiper P, Smania N, Turolla A. Language and gesture neural correlates: A meta-analysis of functional magnetic resonance imaging studies. International Journal of Language & Communication Disorders 2024; 59:902-912. PMID: 37971416. DOI: 10.1111/1460-6984.12987.
Abstract
BACKGROUND Humans often use co-speech gestures to promote effective communication. Attention has been paid to the cortical areas engaged in the processing of co-speech gestures. AIMS To investigate the neural network underpinning the processing of co-speech gestures and to observe whether there is a relationship between the areas involved in language and gesture processing. METHODS & PROCEDURES We planned to include studies with neurotypical and/or stroke participants who underwent a bimodal task (i.e., processing of co-speech gestures with the corresponding speech) and a unimodal task (i.e., speech or gesture alone) during a functional magnetic resonance imaging (fMRI) session. After a database search, abstract and full-text screening were conducted. Qualitative and quantitative data were extracted, and a meta-analysis was performed with the software GingerALE 3.0.2, including contrast analyses of uni- and bimodal tasks. MAIN CONTRIBUTION The database search produced 1024 records. After the screening process, 27 studies were included in the review, and data from 15 of them were quantitatively analysed through meta-analysis. The meta-analysis found three clusters with significant activation: the left middle frontal gyrus and inferior frontal gyrus, and the bilateral middle occipital gyrus and inferior temporal gyrus. CONCLUSIONS There is a close link at the neural level between the semantic processing of auditory and visual information during communication. These findings encourage integrating co-speech gestures into aphasia treatment as a strategy to help people with aphasia communicate effectively. WHAT THIS PAPER ADDS What is already known on this subject Gestures are an integral part of human communication, and they may be related at the neural level to speech processing.
What this paper adds to the existing knowledge During processing of bi- and unimodal communication, areas related to semantic processing and multimodal processing are activated, suggesting that there is a close link between co-speech gestures and spoken language at a neural level. What are the potential or actual clinical implications of this work? Knowledge of the functions related to gesture and speech processing neural networks will allow for the adoption of model-based neurorehabilitation programs to foster recovery from aphasia by strengthening the specific functions of these brain networks.
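The coordinate-based meta-analysis above relies on the activation likelihood estimation (ALE) algorithm implemented in GingerALE: each reported focus is modelled as a 3D Gaussian, per-experiment modelled-activation (MA) maps are formed as the voxel-wise union of those Gaussians, and the ALE map is the union of the MA maps across experiments. The sketch below is a toy numpy illustration of that core idea only; GingerALE's real implementation uses sample-size-dependent FWHMs in MNI millimetre space and permutation-based thresholding, and the function names here are illustrative.

```python
import numpy as np

def gaussian_ma_map(shape, foci, fwhm):
    """Modelled-activation (MA) map for one experiment: at each voxel,
    the probability that at least one of the experiment's foci lies
    there, taken as the probabilistic union of spherical Gaussians."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> standard deviation
    grid = np.indices(shape).reshape(len(shape), -1).T  # voxel coordinates
    ma = np.zeros(int(np.prod(shape)))
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
        g = np.exp(-d2 / (2 * sigma ** 2))
        ma = 1 - (1 - ma) * (1 - g)  # union across this experiment's foci
    return ma.reshape(shape)

def ale_map(shape, experiments, fwhm=8.0):
    """ALE value per voxel: union of per-experiment MA maps, so a voxel
    scores high when many experiments report activation near it."""
    ale = np.zeros(shape)
    for foci in experiments:
        ma = gaussian_ma_map(shape, foci, fwhm)
        ale = 1 - (1 - ale) * (1 - ma)
    return ale
```

Because MA values are treated as independent probabilities, ALE stays in [0, 1] and peaks where foci from several experiments cluster.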
Affiliation(s)
- Luisa Cacciante
- Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Giorgia Pregnolato
- Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Silvia Salvalaggio
- Laboratory of Computational Neuroimaging, IRCCS San Camillo Hospital, Venice, Italy
- Padova Neuroscience Center, Università degli Studi di Padova, Padua, Italy
- Sara Federico
- Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Pawel Kiper
- Laboratory of Healthcare Innovation Technology, IRCCS San Camillo Hospital, Venice, Italy
- Nicola Smania
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
- Andrea Turolla
- Department of Biomedical and Neuromotor Sciences-DIBINEM, Alma Mater Studiorum Università di Bologna, Bologna, Italy
- Unit of Occupational Medicine, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Bologna, Italy
3
Baumard J, Laniepce A, Lesourd M, Guezouli L, Beaucousin V, Gehin M, Osiurak F, Bartolo A. The Neurocognitive Bases of Meaningful Intransitive Gestures: A Systematic Review and Meta-analysis of Neuropsychological Studies. Neuropsychol Rev 2024. PMID: 38448754. DOI: 10.1007/s11065-024-09634-6.
Abstract
Researchers and clinicians have long used meaningful intransitive (i.e., not tool-related; MFI) gestures to assess apraxia-a complex and frequent motor-cognitive disorder. Nevertheless, the neurocognitive bases of these gestures remain incompletely understood. Models of apraxia have assumed that meaningful intransitive gestures depend on either long-term memory (i.e., semantic memory and action lexicons) stored in the left hemisphere, or social cognition and the right hemisphere. This meta-analysis of 42 studies reports the performance of 2659 patients with either left or right hemisphere damage in tests of meaningful intransitive gestures, as compared to other gestures (i.e., MFT or meaningful transitive and MLI or meaningless intransitive) and cognitive tests. The key findings are as follows: (1) deficits of meaningful intransitive gestures are more frequent and severe after left than right hemisphere lesions, but they have been reported in both groups; (2) we found a transitivity effect in patients with lesions of the left hemisphere (i.e., meaningful transitive gestures more difficult than meaningful intransitive gestures) but a "reverse" transitivity effect in patients with lesions of the right hemisphere (i.e., meaningful transitive gestures easier than meaningful intransitive gestures); (3) there is a strong association between meaningful intransitive and transitive (but not meaningless) gestures; (4) isolated deficits of meaningful intransitive gestures are more frequent in cases with right than left hemisphere lesions; (5) these deficits may occur in the absence of language and semantic memory impairments; (6) meaningful intransitive gesture performance seems to vary according to the emotional content of gestures (i.e., body-centered gestures and emotional valence-intensity). These findings are partially consistent with the social cognition hypothesis. Methodological recommendations are given for future studies.
Affiliation(s)
- Mathieu Lesourd
- UMR INSERM 1322 LINC, Université Bourgogne Franche-Comté, Besancon, France
- Léna Guezouli
- Normandie Univ, UNIROUEN, CRFDP, 76000, Rouen, France
- Maureen Gehin
- Normandie Univ, UNIROUEN, CRFDP, 76000, Rouen, France
- François Osiurak
- Laboratoire d'Étude des Mécanismes Cognitifs (UR 3082), Université Lyon 2, Bron, France
- Institut Universitaire de France (IUF), Paris, France
- Angela Bartolo
- Institut Universitaire de France (IUF), Paris, France
- CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, F-59000, Lille, France
4
García-Gámez AB, Macizo P. Gestures as Scaffolding to Learn Vocabulary in a Foreign Language. Brain Sci 2023; 13:1712. PMID: 38137160. PMCID: PMC10741801. DOI: 10.3390/brainsci13121712.
Abstract
This paper investigates the influence of gestures on foreign language (FL) vocabulary learning. In this work, we first address the state of the art in the field and then delve into the research conducted in our lab (three experiments already published) in order to finally offer a unified theoretical interpretation of the role of gestures in FL vocabulary learning. In Experiments 1 and 2, we examined the impact of gestures on noun and verb learning. The results revealed that participants exhibited better learning outcomes when FL words were accompanied by congruent gestures compared to those from the no-gesture condition. Conversely, when meaningless or incongruent gestures were presented alongside new FL words, gestures had a detrimental effect on the learning process. Secondly, we addressed the question of whether or not individuals need to physically perform the gestures themselves to observe the effects of gestures on vocabulary learning (Experiment 3). Results indicated that congruent gestures improved FL word recall when learners only observed the instructor's gestures ("see" group) and when they mimicked them ("do" group). Importantly, the adverse effect associated with incongruent gestures was reduced in the "do" compared to that in the "see" experimental group. These findings suggest that iconic gestures can serve as an effective tool for learning vocabulary in an FL, particularly when the gestures align with the meaning of the words. Furthermore, the active performance of gestures helps counteract the negative effects associated with inconsistencies between gestures and word meanings. Consequently, if a choice must be made, an FL learning strategy in which learners acquire words while making gestures congruent with their meaning would be highly desirable.
Affiliation(s)
- Ana Belén García-Gámez
- Mind, Brain and Behavior Research Center (CIMCYC), 18071 Granada, Spain
- Departamento de Psicología Experimental, Facultad de Psicología, Campus de Cartuja, University of Granada, 18071 Granada, Spain
- Pedro Macizo
- Mind, Brain and Behavior Research Center (CIMCYC), 18071 Granada, Spain
- Departamento de Psicología Experimental, Facultad de Psicología, Campus de Cartuja, University of Granada, 18071 Granada, Spain
5
Sijtsma M, Marjoram D, Gallagher HL, Grealy MA, Brennan D, Mathias C, Cavanagh J, Pollick FE. Major Depression and the Perception of Affective Instrumental and Expressive Gestures: An fMRI Investigation. Psychiatry Res Neuroimaging 2023; 336:111728. PMID: 37939431. DOI: 10.1016/j.pscychresns.2023.111728.
Abstract
Major depressive disorder (MDD) is associated with biased perception of human movement. Gesture is important for communication and in this study we investigated neural correlates of gesture perception in MDD. We hypothesised different neural activity between individuals with MDD and typical individuals when viewing instrumental and expressive gestures that were negatively or positively valenced. Differences were expected in brain areas associated with gesture perception, including superior temporal, frontal, and emotion processing regions. We recruited 12 individuals with MDD and 12 typical controls matched on age, gender, and handedness. They viewed gestures displayed by stick figures while functional magnetic resonance imaging (fMRI) was performed. Results of a random effects three-way mixed ANOVA indicated that individuals with MDD had greater activity in the right claustrum compared to controls, regardless of gesture type or valence. Additionally, we observed main effects of gesture type and valence, regardless of group. Perceiving instrumental compared to expressive gestures was associated with greater activity in the left cuneus and left superior temporal gyrus, while perceiving negative compared to positive gestures was associated with greater activity in the right precuneus and right lingual gyrus. We also observed a two-way interaction between gesture type and valence in various brain regions.
Affiliation(s)
- Mathilde Sijtsma
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Dominic Marjoram
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Helen L Gallagher
- School of Health and Life Sciences, Glasgow Caledonian University, Glasgow, UK
- Madeleine A Grealy
- Department of Psychological Science and Health, University of Strathclyde, Glasgow, UK
- David Brennan
- Department of MRI Physics, Imaging Centre of Excellence, Queen Elizabeth University Hospital, Glasgow, UK
- Jonathan Cavanagh
- School of Infection and Immunity, University of Glasgow, Glasgow, UK
- Frank E Pollick
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
6
Chui K, Ng CT, Chang TT. The visuo-sensorimotor substrate of co-speech gesture processing. Neuropsychologia 2023; 190:108697. PMID: 37827428. DOI: 10.1016/j.neuropsychologia.2023.108697.
Abstract
Co-speech gestures are integral to human communication and exhibit diverse forms, each serving a distinct communicative function. However, the existing literature has focused on individual gesture types, leaving a gap in understanding the comparative neural processing of these diverse forms. To address this, our study investigated the neural processing of two types of iconic gestures (those representing attributes or event knowledge of entity concepts), beat gestures enacting rhythmic manual movements without semantic information, and self-adaptors. During functional magnetic resonance imaging, systematic randomization and attentive observation of video stimuli revealed a general neural substrate for co-speech gesture processing primarily in the bilateral middle temporal and inferior parietal cortices, characterizing visuospatial attention, semantic integration of cross-modal information, and multisensory processing of manual and audiovisual inputs. Specific types of gestures and grooming movements elicited distinct neural responses. Greater activity in the right supramarginal and inferior frontal regions was specific to self-adaptors and is relevant to the spatiomotor and integrative processing of speech and gestures. The semantic and sensorimotor regions were least active for beat gestures. The processing of attribute gestures was most pronounced in the left posterior middle temporal gyrus upon access to knowledge of entity concepts. This fMRI study illuminated the neural underpinnings of gesture-speech integration and highlighted the differential processing pathways for various co-speech gestures.
Affiliation(s)
- Kawai Chui
- Department of English, National Chengchi University, Taipei, Taiwan; Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan
- Chan-Tat Ng
- Department of Psychology, National Chengchi University, Taipei, Taiwan
- Ting-Ting Chang
- Research Centre for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan; Department of Psychology, National Chengchi University, Taipei, Taiwan
7
Ruiz-Torras S, Gudayol-Ferré E, Fernández-Vazquez O, Cañete-Massé C, Peró-Cebollero M, Guàrdia-Olmos J. Hypoconnectivity networks in schizophrenia patients: A voxel-wise meta-analysis of Rs-fMRI. Int J Clin Health Psychol 2023; 23:100395. PMID: 37533450. PMCID: PMC10392089. DOI: 10.1016/j.ijchp.2023.100395.
Abstract
In recent years, several meta-analyses regarding resting-state functional connectivity in patients with schizophrenia have been published. The authors have used different data analysis techniques: regional homogeneity, seed-based analysis, independent component analysis, and amplitude of low-frequency fluctuations. Hence, we aimed to perform a meta-analysis to identify connectivity networks with different activation patterns between people diagnosed with schizophrenia and healthy controls using voxel-wise analysis. METHOD We collected primary studies exploring whole-brain connectivity by functional magnetic resonance imaging at rest in patients with schizophrenia compared with healthy controls. We identified 25 high-quality studies that included 1285 patients with schizophrenia and 1279 healthy controls. RESULTS The results indicate hypoactivation in the right precentral gyrus and the left superior temporal gyrus of patients with schizophrenia compared with healthy controls. CONCLUSIONS These regions have been linked with some clinical symptoms usually present in people with schizophrenia, such as auditory verbal hallucinations, formal thought disorder, and the comprehension and production of gestures.
Affiliation(s)
- Silvia Ruiz-Torras
- Clínica Psicològica de la Universitat de Barcelona, Fundació Josep Finestres, Universitat de Barcelona, Spain
- Cristina Cañete-Massé
- Facultat de Psicologia, Secció de Psicologia Quantitativa, Universitat de Barcelona, Spain
- UB Institute of Complex Systems, Universitat de Barcelona, Spain
- Maribel Peró-Cebollero
- Facultat de Psicologia, Secció de Psicologia Quantitativa, Universitat de Barcelona, Spain
- UB Institute of Complex Systems, Universitat de Barcelona, Spain
- Institute of Neuroscience, Universitat de Barcelona, Spain
- Joan Guàrdia-Olmos
- Facultat de Psicologia, Secció de Psicologia Quantitativa, Universitat de Barcelona, Spain
- UB Institute of Complex Systems, Universitat de Barcelona, Spain
- Institute of Neuroscience, Universitat de Barcelona, Spain
8
Kelly SD, Ngo Tran QA. Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. PMID: 37115518. DOI: 10.1111/tops.12657.
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role-a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research that have varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
Affiliation(s)
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Center for Language and Brain, Colgate University, 13 Oak Dr., Hamilton, NY 13346, United States
- Quang-Anh Ngo Tran
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN 47405, United States
9
Johnstone KL, Blades M, Martin C. No gesture too small: An investigation into the ability of gestural information to mislead eyewitness accounts by 5- to 8-year-olds. Mem Cognit 2023. PMID: 36995574. PMCID: PMC10368558. DOI: 10.3758/s13421-023-01396-5.
Abstract
The accuracy of eyewitness interviews has legal and clinical implications within the criminal justice system. Leading verbal suggestions have been shown to give rise to false memories and inaccurate testimonies in children, but only a small body of research exists regarding non-verbal communication. The present study examined whether 5- to 8-year-olds in the UK could be misled about their memory of an event through exposure to leading gestural information, which suggested an incorrect response, using a variety of question and gesture types. Results showed that leading gestures significantly corrupted participants' memory compared to the control group (MD = 0.60, p < 0.001), with participants being misled by at least one question nearly three-quarters of the time. Questions about peripheral details, and gestures that were more visible and expressive, increased false memory further, with even subtle gestures demonstrating a strong misleading influence. We discuss the implications of these findings for the guidelines governing eyewitness interviews.
Affiliation(s)
- Kirsty L Johnstone
- Department of Psychology, Cathedral Court, The University of Sheffield, 1 Vicar Lane, Sheffield S1 2LT, UK
- Mark Blades
- Department of Psychology, Cathedral Court, The University of Sheffield, 1 Vicar Lane, Sheffield S1 2LT, UK
- Chris Martin
- Department of Psychology, Cathedral Court, The University of Sheffield, 1 Vicar Lane, Sheffield S1 2LT, UK
10
Pavlidou A, Chapellier V, Maderthaner L, von Känel S, Walther S. Using dynamic point light display stimuli to assess gesture deficits in schizophrenia. Schizophr Res Cogn 2022; 28:100240. PMID: 35242609. PMCID: PMC8866720. DOI: 10.1016/j.scog.2022.100240.
Abstract
Background Gesture deficits are ubiquitous in schizophrenia patients, contributing to poor social communication and functional outcome. Given the dynamic nature of social communication, the current study aimed to explore the socio-cognitive processes underlying point-light displays (PLDs) of communicative gestures, free of any other confounding visual characteristics, and to compare them with well-established static gesture stimuli such as pictures by examining their associations with symptom severity and motor-cognitive modalities. Methods We included 39 stable schizophrenia outpatients and 27 age- and gender-matched controls and assessed gesture processing using two tasks. The first task used static picture stimuli of a person performing a gesture. The limbs executing the gesture were missing, and participants' task was to choose the correct gesture from the three options provided. The second task included videos of dynamic PLDs interacting with each other. One PLD performed communicative gestures, while the other PLD imitated/followed these gestures. Participants had to indicate which of the two PLDs was imitating/following the other. Additionally, we evaluated symptom severity as well as motor and cognitive parameters. Results Patients underperformed in both gesture tasks compared to controls. Performance on the static task was associated with blunted affect and with motor coordination and sequencing domains, while PLD performance was associated with expressive gestures and sensory integration processes. Discussion Gesture representations of static and dynamic stimuli are associated with distinct processes contributing to poor social communication in schizophrenia, requiring novel therapeutic interventions. Such stimuli can easily be applied remotely for screening socio-cognitive deficits in schizophrenia.
Affiliation(s)
- Anastasia Pavlidou
- Corresponding author at: Psychiatric Services University of Bern, University Hospital of Psychiatry and Psychotherapy, Division of Systems Neuroscience of Psychopathology, Translational Research Center, Bollingerstr. 111, CH-3000 Bern 60, Switzerland
11
Brain Dynamics of Action Monitoring in Higher-Order Motor Control Disorders: The Case of Apraxia. eNeuro 2022; 9:ENEURO.0334-20.2021. PMID: 35105660. PMCID: PMC8896553. DOI: 10.1523/eneuro.0334-20.2021.
Abstract
Limb apraxia (LA) refers to a higher-order motor disorder characterized by the inability to reproduce transitive actions on command or after observation. Studies demonstrate that action observation and action execution activate the same networks in the human brain, providing an onlooker's motor system with appropriate cognitive, motor, and sensory-motor cues for flexibly implementing action sequences and gestures. Tellingly, the temporal dynamics of action monitoring have never been explored in people suffering from LA. To fill this gap, we studied the electro-cortical signatures of error observation in human participants with acquired left-brain lesions with (LA+) and without (LA-) LA, and in a group of healthy controls (H). EEG was acquired while participants observed, from a first-person perspective (1PP), an avatar performing a correct or incorrect reach-to-grasp action toward a glass in an immersive virtual environment. The typical EEG signatures of error observation in the time domain (early error positivity; Pe) and the time-frequency domain (theta band power) were reduced in LA+ compared with H. Connectivity analyses showed that LA+ exhibited decreased theta phase synchronization in both the frontoparietal and frontofrontal networks compared with H and LA-. Moreover, linear regression analysis revealed that the severity of LA (Test of Upper Limb Apraxia, TULIA, scores) was predicted by mid-frontal error-related theta activity, suggesting a link between error-monitoring capacity and apraxic phenotypes. These results provide novel neurophysiological evidence of altered dynamics of action monitoring in individuals with LA and shed light on the performance-monitoring changes occurring in this disorder.
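The theta phase-synchronization measure reported in this abstract is commonly quantified as a phase-locking value (PLV): the magnitude of the average phase difference between two channels, mapped onto the unit circle. The study's exact pipeline is not specified, so the following is a generic numpy sketch under that assumption; the `analytic_signal` helper mirrors the standard FFT-based construction also used by `scipy.signal.hilbert`.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero negative frequencies, double the
    positive ones, then inverse-transform (same recipe as scipy's hilbert)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def phase_locking_value(x, y):
    """PLV between two signals: 1 means a perfectly constant phase
    difference, values near 0 mean the phase difference drifts freely."""
    phase_x = np.angle(analytic_signal(x))
    phase_y = np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

In practice the signals would first be band-pass filtered to the theta band (roughly 4-8 Hz) before computing the PLV per electrode pair.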
12
Kotila A, Tohka J, Kauppi JP, Gabbatore I, Mäkinen L, Hurtig TM, Ebeling HE, Korhonen V, Kiviniemi VJ, Loukusa S. Neural-level associations of non-verbal pragmatic comprehension in young Finnish autistic adults. Int J Circumpolar Health 2021; 80:1909333. PMID: 34027832. PMCID: PMC8158210. DOI: 10.1080/22423982.2021.1909333.
Abstract
This video-based study examines pragmatic non-verbal comprehension skills and the corresponding neural-level findings in young Finnish autistic adults and controls. Items from the Assessment Battery of Communication (ABaCo) were chosen to evaluate the comprehension of non-verbal communication. Inter-subject correlation (ISC) analysis of functional magnetic resonance imaging data was used to reveal the synchrony of brain activation across participants during the viewing of pragmatically complex scenes from the ABaCo videos. The results showed a significant difference between the ISC maps of the autistic and control groups in tasks involving the comprehension of non-verbal communication, revealing several brain regions where the correlation of brain activity was greater within the control group. The results suggest a possibly weaker modulation of brain states in response to pragmatic non-verbal communicative situations in autistic participants. Although there was no difference between the groups in behavioural responses to ABaCo items, there was more variability in the accuracy of responses in the autistic group. Furthermore, mean answering and reaction times correlated with the severity of autistic traits. The results indicate that even if young autistic adults may have learned to use compensatory resources in their communicative-pragmatic comprehension, pragmatic processing in naturalistic situations still requires additional effort.
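Inter-subject correlation of the kind used in this study is typically computed per voxel (or region) in leave-one-out fashion: each subject's time series is correlated with the average time series of all other subjects. A minimal numpy sketch of that idea follows; the function name is illustrative, and the study's actual ISC pipeline (e.g. group comparison and statistical thresholding) is not reproduced here.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out inter-subject correlation.

    data: array of shape (n_subjects, n_timepoints) holding one voxel's
    (or region's) BOLD time series per subject.
    Returns one ISC value per subject: the Pearson correlation between
    that subject's time series and the mean of all other subjects'.
    """
    data = np.asarray(data, dtype=float)
    n_subjects = data.shape[0]
    iscs = []
    for s in range(n_subjects):
        others_mean = np.delete(data, s, axis=0).mean(axis=0)
        r = np.corrcoef(data[s], others_mean)[0, 1]
        iscs.append(r)
    return np.array(iscs)
```

High ISC values indicate that a stimulus drives similar response time courses across subjects; group differences in ISC maps, as reported above, point to regions where that shared response is weaker in one group.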
Affiliation(s)
- Aija Kotila
- Faculty of Humanities, Research Unit of Logopedics, University of Oulu, Oulu, Finland
- Jussi Tohka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Jukka-Pekka Kauppi
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Ilaria Gabbatore
- Faculty of Humanities, Research Unit of Logopedics, University of Oulu, Oulu, Finland
- Department of Psychology, University of Turin, Turin, Italy
- Leena Mäkinen
- Faculty of Humanities, Research Unit of Logopedics, University of Oulu, Oulu, Finland
- Tuula M. Hurtig
- Clinic of Child Psychiatry, Oulu University Hospital and PEDEGO Research Unit, University of Oulu, Oulu, Finland
- Research Unit of Clinical Neuroscience, Psychiatry, University of Oulu
- Hanna E. Ebeling
- Clinic of Child Psychiatry, Oulu University Hospital and PEDEGO Research Unit, University of Oulu, Oulu, Finland
- Vesa Korhonen
- Department of Diagnostic Radiology, Medical Research Center, Oulu University Hospital and Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Vesa J. Kiviniemi
- Department of Diagnostic Radiology, Medical Research Center, Oulu University Hospital and Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Oulu Functional NeuroImaging-lab, Medical Research Center, Oulu University Hospital, Oulu, Finland
- Soile Loukusa
- Faculty of Humanities, Research Unit of Logopedics, University of Oulu, Oulu, Finland
Collapse
|
13
|
Perniss P, Vinson D, Vigliocco G. Making Sense of the Hands and Mouth: The Role of "Secondary" Cues to Meaning in British Sign Language and English. Cogn Sci 2021; 44:e12868. [PMID: 32619055 DOI: 10.1111/cogs.12868] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Revised: 05/01/2020] [Accepted: 05/06/2020] [Indexed: 01/06/2023]
Abstract
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.
Affiliation(s)
- David Vinson
- Division of Psychology and Language Sciences, University College London
14
Momsen J, Gordon J, Wu YC, Coulson S. Event related spectral perturbations of gesture congruity: Visuospatial resources are recruited for multimodal discourse comprehension. BRAIN AND LANGUAGE 2021; 216:104916. [PMID: 33652372 PMCID: PMC11296609 DOI: 10.1016/j.bandl.2021.104916] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Revised: 11/30/2020] [Accepted: 01/08/2021] [Indexed: 06/12/2023]
Abstract
Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.
Affiliation(s)
- Jacob Momsen
- Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon
- Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu
- Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson
- Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Cognitive Science Department, UC San Diego, United States
15
Kotila A, Hyvärinen A, Mäkinen L, Leinonen E, Hurtig T, Ebeling H, Korhonen V, Kiviniemi VJ, Loukusa S. Processing of pragmatic communication in ASD: a video-based brain imaging study. Sci Rep 2020; 10:21739. [PMID: 33303942 PMCID: PMC7729953 DOI: 10.1038/s41598-020-78874-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 11/30/2020] [Indexed: 01/24/2023] Open
Abstract
Social and pragmatic difficulties in autism spectrum disorder (ASD) are widely recognized, although their underlying neural level processing is not well understood. The aim of this study was to examine the activity of the brain network components linked to social and pragmatic understanding in order to reveal whether complex socio-pragmatic events evoke differences in brain activity between the ASD and control groups. Nineteen young adults (mean age 23.6 years) with ASD and 19 controls (mean age 22.7 years) were recruited for the study. The stimulus data consisted of video clips showing complex social events that demanded processing of pragmatic communication. In the analysis, the functional magnetic resonance imaging signal responses of the selected brain network components linked to social and pragmatic information processing were compared. Although the processing of the young adults with ASD was similar to that of the control group during the majority of the social scenes, differences between the groups were found in the activity of the social brain network components when the participants were observing situations with concurrent verbal and non-verbal communication events. The results suggest that the ASD group had challenges in processing concurrent multimodal cues in complex pragmatic communication situations.
Affiliation(s)
- Aija Kotila
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, Oulu, Finland
- Aapo Hyvärinen
- Department of Computer Science, University of Helsinki, Helsinki, Finland
- Leena Mäkinen
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, Oulu, Finland
- Eeva Leinonen
- Office of the Vice Chancellor, Murdoch University, Murdoch, WA, Australia
- Tuula Hurtig
- Research Unit of Clinical Neuroscience, Psychiatry, University of Oulu, Oulu, Finland
- PEDEGO Research Unit, The Faculty of Medicine, University of Oulu, Oulu, Finland
- Department of Child Psychiatry, Faculty of Medicine, Institute of Clinical Medicine, Oulu University Hospital, Oulu, Finland
- Hanna Ebeling
- PEDEGO Research Unit, The Faculty of Medicine, University of Oulu, Oulu, Finland
- Department of Child Psychiatry, Faculty of Medicine, Institute of Clinical Medicine, Oulu University Hospital, Oulu, Finland
- Vesa Korhonen
- Department of Diagnostic Radiology, Medical Research Center (MRC), University and University Hospital of Oulu, Oulu, Finland
- Research Unit of Medical Imaging, Physics and Technology, The Faculty of Medicine, University of Oulu, Oulu, Finland
- Vesa J Kiviniemi
- Department of Diagnostic Radiology, Medical Research Center (MRC), University and University Hospital of Oulu, Oulu, Finland
- Research Unit of Medical Imaging, Physics and Technology, The Faculty of Medicine, University of Oulu, Oulu, Finland
- Soile Loukusa
- Research Unit of Logopedics, Faculty of Humanities, University of Oulu, Oulu, Finland
16
Momsen J, Gordon J, Wu YC, Coulson S. Verbal working memory and co-speech gesture processing. Brain Cogn 2020; 146:105640. [PMID: 33171343 PMCID: PMC11299644 DOI: 10.1016/j.bandc.2020.105640] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 09/21/2020] [Accepted: 10/19/2020] [Indexed: 12/15/2022]
Abstract
Multimodal discourse requires an assembly of cognitive processes that are uniquely recruited for language comprehension in social contexts. In this study, we investigated the role of verbal working memory for the online integration of speech and iconic gestures. Participants memorized and rehearsed a series of auditorily presented digits in low (one digit) or high (four digits) memory load conditions. To observe how verbal working memory load impacts online discourse comprehension, ERPs were recorded while participants watched discourse videos containing either congruent or incongruent speech-gesture combinations during the maintenance portion of the memory task. While expected speech-gesture congruity effects were found in the low memory load condition, high memory load trials elicited enhanced frontal positivities that indicated a unique interaction between online speech-gesture integration and the availability of verbal working memory resources. This work contributes to an understanding of discourse comprehension by demonstrating that language processing in a multimodal context is subject to the relationship between cognitive resource availability and the degree of controlled processing required for task performance. We suggest that verbal working memory is less important for speech-gesture integration than it is for mediating speech processing under high task demands.
Affiliation(s)
- Jacob Momsen
- Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon
- Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu
- Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson
- Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Cognitive Science Department, UC San Diego, United States
17
Wüthrich F, Viher PV, Stegmayer K, Federspiel A, Bohlhalter S, Vanbellingen T, Wiest R, Walther S. Dysbalanced Resting-State Functional Connectivity Within the Praxis Network Is Linked to Gesture Deficits in Schizophrenia. Schizophr Bull 2020; 46:905-915. [PMID: 32052844 PMCID: PMC7342100 DOI: 10.1093/schbul/sbaa008] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Patients with schizophrenia frequently present deficits in gesture production and interpretation, greatly affecting their communication skills. As these gesture deficits can be found early in the course of illness and can predict later outcomes, exploring their neural basis may lead to a better understanding of schizophrenia. While gesturing has been reported to rely on a left-lateralized network of brain regions, termed the praxis network, in healthy subjects and lesioned patients, studies in patients with schizophrenia are sparse. It is currently unclear whether within-network connectivity at rest is linked to the gesture deficit. Here, we compared the functional connectivity between regions of the praxis network at rest in 46 patients and 44 healthy controls. All participants completed a validated test of hand gesture performance before resting-state functional magnetic resonance imaging (fMRI) was acquired. Patients performed gestures more poorly than controls in all categories and domains. In patients, we also found significantly higher resting-state functional connectivity between the left precentral gyrus and the bilateral superior and inferior parietal lobules. Likewise, patients had higher connectivity from the right precentral gyrus to the left inferior and bilateral superior parietal lobule (SPL). In contrast, they exhibited lower connectivity between the bilateral superior temporal gyri (STG). Connectivity between the right precentral gyrus and left SPL, as well as connectivity between the bilateral STG, correlated with gesture performance in healthy controls. We failed to detect similar correlations in patients. We suggest that altered resting-state functional connectivity within the praxis network perturbs correct gesture planning in patients, reflecting the gesture deficit often seen in schizophrenia.
Affiliation(s)
- Florian Wüthrich
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Petra V Viher
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Katharina Stegmayer
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Andrea Federspiel
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Stephan Bohlhalter
- Neurology and Neurorehabilitation Center, Kantonsspital Luzern, Luzern, Switzerland
- Department of Clinical Research, University Hospital of Bern, Inselspital, Bern, Switzerland
- Tim Vanbellingen
- Neurology and Neurorehabilitation Center, Kantonsspital Luzern, Luzern, Switzerland
- Department of Clinical Research, University Hospital of Bern, Inselspital, Bern, Switzerland
- Roland Wiest
- Institute of Neuroradiology, University Hospital of Bern, Inselspital, Bern, Switzerland
- Sebastian Walther
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
18
Tholen MG, Trautwein FM, Böckler A, Singer T, Kanske P. Functional magnetic resonance imaging (fMRI) item analysis of empathy and theory of mind. Hum Brain Mapp 2020; 41:2611-2628. [PMID: 32115820 PMCID: PMC7294056 DOI: 10.1002/hbm.24966] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 02/05/2020] [Accepted: 02/11/2020] [Indexed: 12/21/2022] Open
Abstract
In contrast to conventional functional magnetic resonance imaging (fMRI) analysis across participants, item analysis allows generalizing the observed neural response patterns from a specific stimulus set to the entire population of stimuli. In the present study, we perform an item analysis on an fMRI paradigm (EmpaToM) that measures the neural correlates of empathy and Theory of Mind (ToM). The task includes a large stimulus set (240 emotional vs. neutral videos to probe empathic responding and 240 ToM or factual reasoning questions to probe ToM), which we tested in two large participant samples (N = 178, N = 130). Both the empathy-related network, comprising anterior insula, anterior cingulate/dorsomedial prefrontal cortex, inferior frontal gyrus, and dorsal temporoparietal junction/supramarginal gyrus (TPJ), and the ToM-related network, including ventral TPJ, superior temporal gyrus, temporal poles, and anterior and posterior midline regions, were observed across participants and items. Regression analyses confirmed that these activations are predicted by the empathy or ToM condition of the stimuli, but not by low-level features such as video length, number of words, syllables, or syntactic complexity. The item analysis also allowed for the selection of the most effective items to create optimized stimulus sets that provide the most stable and reproducible results. Finally, reproducibility was shown in the replication of all analyses in the second participant sample. The data demonstrate (a) the generalizability of empathy- and ToM-related neural activity and (b) the reproducibility of the EmpaToM task and its applicability in intervention and clinical imaging studies.
Affiliation(s)
- Matthias G Tholen
- Centre for Cognitive Neuroscience, Department of Psychology, University of Salzburg, Austria
- Anne Böckler
- Department of Psychology, Leibniz University Hannover, Hannover, Germany
- Tania Singer
- Max Planck Society, Social Neuroscience Lab, Berlin, Germany
- Philipp Kanske
- Clinical Psychology and Behavioral Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Research Group Social Stress and Family Health, Leipzig, Germany
19
Fasola A, Alario FX, Tellier M, Giusiano B, Tassinari CA, Bartolomei F, Trébuchon A. A description of verbal and gestural communication during postictal aphasia. Epilepsy Behav 2020; 102:106646. [PMID: 31759317 DOI: 10.1016/j.yebeh.2019.106646] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 09/18/2019] [Accepted: 10/05/2019] [Indexed: 11/25/2022]
Abstract
Patients suffering from drug-resistant temporal lobe epilepsy show substantial language deficits (i.e., anomia) during their seizures and in the postictal period (postictal aphasia). Verbal impairments observed during the postictal period may be studied to help localize the epileptogenic zone. These explorations have essentially been based on simple tasks focused on speech, thus disregarding the multimodal nature of verbal communication, particularly the fact that, when speakers want to communicate, they often produce gestures of various kinds. Here, we propose an innovative procedure for testing postictal language and communication abilities, including the assessment of co-speech gestures, and we provide a preliminary description of the changes induced in communication during postictal aphasia. We studied 21 seizures that induced postictal aphasia from 12 patients with drug-refractory epilepsy, including left temporal and left frontal seizures. The experimental task required patients to memorize a highly detailed picture and, briefly after, to describe what they had seen, thus eliciting a communicatively meaningful monologue. This allowed verbal communication to be compared between postictal and interictal conditions within the same individuals. Co-speech gestures were coded according to two categories: "rhythmic" gestures, thought to be produced in support of speech building, and "illustrative" gestures, thought to be produced to complement the speech content. When postictal and interictal conditions were compared, speech flow decreased along with an increase in rhythmic gesture production at the expense of illustrative gesture production. The communication patterns did not differ significantly after temporal and frontal seizures, yet they were illustrated separately, owing to the clinical importance of the distinction, along with considerations of interindividual variability.
A contrast between rhythmic and illustrative gesture production is congruent with previous literature in which rhythmic gestures have been linked to lexical retrieval processes. If confirmed in further studies, such evidence for a facilitative role of co-speech gestures in language difficulties could be put to use in the context of multimodal language therapies.
Affiliation(s)
- Alexia Fasola
- Institute of Language, Communication and the Brain (ANR-16-CONV-0002) - ILCB, Aix-Marseille Univ., France
- Marion Tellier
- Aix Marseille Univ., CNRS, LPL, Aix-en-Provence, France
- Bernard Giusiano
- Aix Marseille Univ., INSERM, INS, Inst Neurosci Syst, Marseille, France
- Fabrice Bartolomei
- Aix Marseille Univ., INSERM, INS, Inst Neurosci Syst, Marseille, France
- Agnès Trébuchon
- Aix Marseille Univ., INSERM, INS, Inst Neurosci Syst, Marseille, France
20
Gestures convey different physiological responses when performed toward and away from the body. Sci Rep 2019; 9:12862. [PMID: 31492887 PMCID: PMC6731307 DOI: 10.1038/s41598-019-49318-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2019] [Accepted: 08/08/2019] [Indexed: 11/23/2022] Open
Abstract
We assessed the sympathetic and parasympathetic activation associated with the observation of Pantomime gestures (i.e., the mime of the use of a tool) and Intransitive gestures (i.e., expressive) performed toward the body (e.g., a comb and "thinking") and away from the body (e.g., a key and "come here") in a group of healthy participants while both pupil dilation (N = 31) and heart rate variability (N = 33; HF-HRV) were recorded. Large pupil dilation was observed for both Pantomime and Intransitive gestures toward the body, whereas an increase in vagal suppression was observed for Intransitive gestures away from the body but not for those toward the body. Our results suggest that the space in which people act when performing a gesture has an impact on the physiological responses of the observer, in relation to the type of social communicative information that the gesture direction conveys, from more intimate (toward the body) to more interactive (away from the body).
21
Jouravlev O, Zheng D, Balewski Z, Le Arnz Pongos A, Levan Z, Goldin-Meadow S, Fedorenko E. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019; 132:107132. [PMID: 31276684 PMCID: PMC6708375 DOI: 10.1016/j.neuropsychologia.2019.107132] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2018] [Revised: 06/01/2019] [Accepted: 06/30/2019] [Indexed: 12/15/2022]
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low - at or slightly above the fixation baseline - when silent videos were processed (again, regardless of whether they contained gestures or grooming movements). Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. 
In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
Affiliation(s)
- Olessia Jouravlev
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Carleton University, Ottawa, ON K1S 5B6, Canada
- David Zheng
- Princeton University, Princeton, NJ, 08544, USA
- Zuzanna Balewski
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Zena Levan
- University of Chicago, Chicago, IL, 60637, USA
- Evelina Fedorenko
- Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute for Brain Research, Cambridge, MA, 02139, USA
- Massachusetts General Hospital, Boston, MA, 02114, USA
22
Murteira A, Sowman PF, Nickels L. Does TMS Disruption of the Left Primary Motor Cortex Affect Verb Retrieval Following Exposure to Pantomimed Gestures? Front Neurosci 2019; 12:920. [PMID: 30618552 PMCID: PMC6299802 DOI: 10.3389/fnins.2018.00920] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2018] [Accepted: 11/23/2018] [Indexed: 11/17/2022] Open
Abstract
Previous research suggests that meaning-laden gestures, even when produced in the absence of language (i.e., pantomimed gestures), influence lexical retrieval. Yet, little is known about the neural mechanisms that underlie this process. Based on embodied cognition theories, many studies have demonstrated motor cortex involvement in the representation of action verbs and in the understanding of actions. The present study aimed to investigate whether the motor system plays a critical role in the behavioral influence of pantomimed gestures on action naming. Continuous theta burst stimulation (cTBS) was applied over the hand area of the left primary motor cortex and over a control site (occipital cortex). An action-picture naming task followed cTBS. In the naming task, participants named action pictures that were preceded by videos of congruent pantomimed gestures, unrelated pantomimed gestures, or a control video with no movement (as a neutral, non-gestural condition). In addition to behavioral measures of performance, cTBS-induced changes in corticospinal activity were assessed. We replicated the previous finding that exposure to congruent pantomimed gestures facilitates word production compared to unrelated or neutral primes. However, we found no evidence that the left primary motor area is crucially involved in the mechanism underlying the behavioral facilitation effects of gesture on verb production. Although, at the group level, cTBS induced motor cortex suppression, at the individual level we found remarkable variability in cTBS effects on the motor cortex: cTBS induced both inhibition of corticospinal activity (with slower behavioral responses) and enhancement (with faster behavioral responses). Our findings cast doubt on assumptions that the motor cortex is causally involved in the impact of gestures on action-word processing.
Our results also highlight the importance of careful consideration of interindividual variability for the interpretation of cTBS effects.
Affiliation(s)
- Ana Murteira
- ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
- International Doctorate of Experimental Approaches to Language and Brain (IDEALAB), Macquarie University, Sydney, NSW, Australia
- Paul F Sowman
- ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
- Perception in Action Research Centre, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- Lyndsey Nickels
- ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia
23
Bracci S, Caramazza A, Peelen MV. View-invariant representation of hand postures in the human lateral occipitotemporal cortex. Neuroimage 2018; 181:446-452. [DOI: 10.1016/j.neuroimage.2018.07.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2018] [Revised: 05/28/2018] [Accepted: 07/01/2018] [Indexed: 12/14/2022] Open
24
Pritchett BL, Hoeflin C, Koldewyn K, Dechter E, Fedorenko E. High-level language processing regions are not engaged in action observation or imitation. J Neurophysiol 2018; 120:2555-2570. [PMID: 30156457 DOI: 10.1152/jn.00222.2018] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
A set of left frontal, temporal, and parietal brain regions respond robustly during language comprehension and production (e.g., Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010; Menenti L, Gierhan SM, Segaert K, Hagoort P. Psychol Sci 22: 1173-1182, 2011). These regions have been further shown to be selective for language relative to other cognitive processes, including arithmetic, aspects of executive function, and music perception (e.g., Fedorenko E, Behr MK, Kanwisher N. Proc Natl Acad Sci USA 108: 16428-16433, 2011; Monti MM, Osherson DN. Brain Res 1428: 33-42, 2012). However, one claim about overlap between language and nonlinguistic cognition remains prominent. In particular, some have argued that language processing shares computational demands with action observation and/or execution (e.g., Rizzolatti G, Arbib MA. Trends Neurosci 21: 188-194, 1998; Koechlin E, Jubault T. Neuron 50: 963-974, 2006; Tettamanti M, Weniger D. Cortex 42: 491-494, 2006). However, the evidence for these claims is indirect, based on observing activation for language and action tasks within the same broad anatomical areas (e.g., on the lateral surface of the left frontal lobe). To test whether language indeed shares machinery with action observation/execution, we examined the responses of language brain regions, defined functionally in each individual participant (Fedorenko E, Hsieh PJ, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. J Neurophysiol 104: 1177-1194, 2010) to action observation ( experiments 1, 2, and 3a) and action imitation ( experiment 3b). With the exception of the language region in the angular gyrus, all language regions, including those in the inferior frontal gyrus (within "Broca's area"), showed little or no response during action observation/imitation. 
These results add to the growing body of literature suggesting that high-level language regions are highly selective for language processing (see Fedorenko E, Varley R. Ann NY Acad Sci 1369: 132-153, 2016 for a review). NEW & NOTEWORTHY: Many have argued for overlap in the machinery used to interpret language and others' actions, either because action observation was a precursor to linguistic communication or because both require interpreting hierarchically structured stimuli. However, existing evidence is indirect, relying on group analyses or reverse inference. We examined responses to action observation in language regions defined functionally in individual participants and found no response. Thus, language comprehension and action observation recruit distinct circuits in the modern brain.
Affiliation(s)
- Brianna L Pritchett
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Caitlyn Hoeflin
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Kami Koldewyn
- School of Psychology, Bangor University, Gwynedd, United Kingdom
- Eyal Dechter
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences/McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts; Department of Psychiatry, Massachusetts General Hospital, Charlestown, Massachusetts; Department of Psychiatry, Harvard Medical School, Boston, Massachusetts
25
Wolf D, Mittelberg I, Rekittke LM, Bhavsar S, Zvyagintsev M, Haeck A, Cong F, Klasen M, Mathiak K. Interpretation of Social Interactions: Functional Imaging of Cognitive-Semiotic Categories During Naturalistic Viewing. Front Hum Neurosci 2018; 12:296. [PMID: 30154703] [PMCID: PMC6102316] [DOI: 10.3389/fnhum.2018.00296]
Abstract
Social interactions arise from patterns of communicative signs, whose perception and interpretation require a multitude of cognitive functions. The semiotic framework of Peirce's Universal Categories (UCs) laid ground for a novel cognitive-semiotic typology of social interactions. During functional magnetic resonance imaging (fMRI), 16 volunteers watched a movie narrative encompassing verbal and non-verbal social interactions. Three types of non-verbal interactions were coded ("unresolved," "non-habitual," and "habitual") based on a typology reflecting Peirce's UCs. As expected, the auditory cortex responded to verbal interactions, but non-verbal interactions modulated temporal areas as well. Conceivably, when speech was lacking, ambiguous visual information (unresolved interactions) primed auditory processing in contrast to learned behavioral patterns (habitual interactions). The latter recruited a parahippocampal-occipital network supporting conceptual processing and associative memory retrieval. Requesting semiotic contextualization, non-habitual interactions activated visuo-spatial and contextual rule-learning areas such as the temporo-parietal junction and right lateral prefrontal cortex. In summary, the cognitive-semiotic typology reflected distinct sensory and association networks underlying the interpretation of observed non-verbal social interactions.
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany
- Irene Mittelberg
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
- Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre (HumTec), RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany
- Saurabh Bhavsar
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Mikhail Zvyagintsev
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Brain Imaging Facility, Interdisciplinary Centre for Clinical Studies (IZKF), Medical Faculty, RWTH Aachen University, Aachen, Germany
- Annina Haeck
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Fengyu Cong
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Aachen, Germany
26
Perniss P. Why We Should Study Multimodal Language. Front Psychol 2018; 9:1109. [PMID: 30002643] [PMCID: PMC6032889] [DOI: 10.3389/fpsyg.2018.01109]
Affiliation(s)
- Pamela Perniss
- School of Humanities, University of Brighton, Brighton, United Kingdom
27
Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration. J Neurosci 2018; 38:1891-1900. [PMID: 29358361] [DOI: 10.1523/jneurosci.1748-17.2017]
Abstract
Language and action naturally occur together in the form of cospeech gestures, and there is now convincing evidence that listeners display a strong tendency to integrate semantic information from both domains during comprehension. A contentious question, however, has been which brain areas are causally involved in this integration process. In previous neuroimaging studies, left inferior frontal gyrus (IFG) and posterior middle temporal gyrus (pMTG) have emerged as candidate areas; however, it is currently not clear whether these areas are causally or merely epiphenomenally involved in gesture-speech integration. In the present series of experiments, we directly tested for a potential critical role of IFG and pMTG by observing the effect of disrupting activity in these areas using transcranial magnetic stimulation in a mixed gender sample of healthy human volunteers. The outcome measure was performance on a Stroop-like gesture task (Kelly et al., 2010a), which provides a behavioral index of gesture-speech integration. Our results provide clear evidence that disrupting activity in IFG and pMTG selectively impairs gesture-speech integration, suggesting that both areas are causally involved in the process. These findings are consistent with the idea that these areas play a joint role in gesture-speech integration, with IFG regulating strategic semantic access via top-down signals acting upon temporal storage areas. SIGNIFICANCE STATEMENT: Previous neuroimaging studies suggest an involvement of inferior frontal gyrus and posterior middle temporal gyrus in gesture-speech integration, but findings have been mixed and due to methodological constraints did not allow inferences of causality. By adopting a virtual lesion approach involving transcranial magnetic stimulation, the present study provides clear evidence that both areas are causally involved in combining semantic information arising from gesture and speech.
These findings support the view that, rather than being separate entities, gesture and speech are part of an integrated multimodal language system, with inferior frontal gyrus and posterior middle temporal gyrus serving as critical nodes of the cortical network underpinning this system.
28
Wolf D, Rekittke LM, Mittelberg I, Klasen M, Mathiak K. Perceived Conventionality in Co-speech Gestures Involves the Fronto-Temporal Language Network. Front Hum Neurosci 2017; 11:573. [PMID: 29249945] [PMCID: PMC5714878] [DOI: 10.3389/fnhum.2017.00573]
Abstract
Face-to-face communication is multimodal; it encompasses spoken words, facial expressions, gaze, and co-speech gestures. In contrast to linguistic symbols (e.g., spoken words or signs in sign language) relying on mostly explicit conventions, gestures vary in their degree of conventionality. Bodily signs may have a generally accepted or conventionalized meaning (e.g., a head shake) or less so (e.g., self-grooming). We hypothesized that subjective perception of conventionality in co-speech gestures relies on the classical language network, i.e., the left hemispheric inferior frontal gyrus (IFG, Broca's area) and the posterior superior temporal gyrus (pSTG, Wernicke's area), and studied 36 subjects watching video-recorded story retellings during a behavioral and a functional magnetic resonance imaging (fMRI) experiment. It is well documented that neural correlates of such naturalistic videos emerge as intersubject covariance (ISC) in fMRI even without involving a stimulus (model-free analysis). The subjects attended either to perceived conventionality or to a control condition (any hand movements or gesture-speech relations). Such tasks modulate ISC in contributing neural structures, and thus we studied ISC changes to task demands in language networks. Indeed, the conventionality task significantly increased covariance of the button press time series and neuronal synchronization in the left IFG over the comparison with other tasks. In the left IFG, synchronous activity was observed during the conventionality task only. In contrast, the left pSTG exhibited correlated activation patterns during all conditions with an increase in the conventionality task at the trend level only. Conceivably, the left IFG can be considered a core region for the processing of perceived conventionality in co-speech gestures, similar to spoken language. In general, the interpretation of conventionalized signs may rely on neural mechanisms that engage during language comprehension.
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Linn-Marlen Rekittke
- Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Irene Mittelberg
- Natural Media Lab, Human Technology Centre, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany
- Martin Klasen
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany; Center for Sign Language and Gesture (SignGes), RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen, Aachen, Germany
29
Viher PV, Stegmayer K, Kubicki M, Karmacharya S, Lyall AE, Federspiel A, Vanbellingen T, Bohlhalter S, Wiest R, Strik W, Walther S. The cortical signature of impaired gesturing: Findings from schizophrenia. NEUROIMAGE-CLINICAL 2017; 17:213-221. [PMID: 29159038] [PMCID: PMC5683189] [DOI: 10.1016/j.nicl.2017.10.017]
Abstract
Schizophrenia is characterized by deficits in gesturing, which is important for nonverbal communication. Research in healthy participants and brain-damaged patients revealed a left-lateralized fronto-parieto-temporal network underlying gesture performance. First evidence from structural imaging studies in schizophrenia corroborates these results. However, as of yet, it is unclear whether cortical thickness abnormalities contribute to impairments in gesture performance. We hypothesized that patients with deficits in gesture production show cortical thinning in 12 regions of interest (ROIs) of a gesture network relevant for gesture performance and recognition. Forty patients with schizophrenia and 41 healthy controls performed hand and finger gestures as either imitation or pantomime. Group differences in cortical thickness between patients with deficits, patients without deficits, and controls were explored using a multivariate analysis of covariance. In addition, the relationship between gesture recognition and cortical thickness was investigated. Patients with deficits in gesture production had reduced cortical thickness in eight ROIs, including the pars opercularis of the inferior frontal gyrus, the superior and inferior parietal lobes, and the superior and middle temporal gyri. Gesture recognition correlated with cortical thickness in fewer, but mainly the same, ROIs within the patient sample. In conclusion, our results show that impaired gesture production and recognition in schizophrenia are associated with cortical thinning in distinct areas of the gesture network. Impairments in gesture production and recognition in schizophrenia are related to altered brain structure. Brain alterations in schizophrenia are located in areas that are generally damaged in apraxia. Schizophrenia patients with gesture deficits show cortical thinning of several regions in the gesture network.
Deficits of gesture production and recognition are both related to a fronto-parieto-temporal gesture network.
Affiliation(s)
- Petra Verena Viher
- Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland; Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Katharina Stegmayer
- Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Marek Kubicki
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, USA; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Sarina Karmacharya
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Amanda Ellis Lyall
- Department of Psychiatry, Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Andrea Federspiel
- Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Tim Vanbellingen
- Department of Clinical Research, Inselspital, Bern, Switzerland; Neurology and Neurorehabilitation Center, Luzerner Kantonsspital, Lucerne, Switzerland; Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland
- Stephan Bohlhalter
- Neurology and Neurorehabilitation Center, Luzerner Kantonsspital, Lucerne, Switzerland
- Roland Wiest
- Support Center of Advanced Neuroimaging, Institute of Neuroradiology, University of Bern, Bern, Switzerland
- Werner Strik
- Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Sebastian Walther
- Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
30
Hassanpour MS, Eggebrecht AT, Peelle JE, Culver JP. Mapping effective connectivity within cortical networks with diffuse optical tomography. NEUROPHOTONICS 2017; 4:041402. [PMID: 28744475] [PMCID: PMC5521306] [DOI: 10.1117/1.nph.4.4.041402]
Abstract
Understanding how cortical networks interact in response to task demands is important both for providing insight into the brain's processing architecture and for managing neurological diseases and mental disorders. High-density diffuse optical tomography (HD-DOT) is a neuroimaging technique that offers the significant advantages of having a naturalistic, acoustically controllable environment and being compatible with metal implants, neither of which is possible with functional magnetic resonance imaging. We used HD-DOT to study the effective connectivity and assess the modulatory effects of speech intelligibility and syntactic complexity on functional connections within the cortical speech network. To accomplish this, we extend the use of a generalized psychophysiological interaction (PPI) analysis framework. In particular, we apply PPI methods to event-related HD-DOT recordings of cortical oxyhemoglobin activity during auditory sentence processing. We evaluate multiple approaches for selecting cortical regions of interest and for modeling interactions among these regions. Our results show that using subject-based regions has minimal effect on group-level connectivity maps. We also demonstrate that incorporating an interaction model based on estimated neural activity results in significantly stronger effective connectivity. Taken together, our findings support the use of HD-DOT with PPI methods for noninvasively studying task-related modulations of functional connectivity.
Affiliation(s)
- Mahlega S. Hassanpour
- Washington University in St. Louis, Department of Physics, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Radiology, St. Louis, Missouri, United States
- Address all correspondence to: Mahlega S. Hassanpour
- Adam T. Eggebrecht
- Washington University in St. Louis, Department of Radiology, St. Louis, Missouri, United States
- Jonathan E. Peelle
- Washington University in St. Louis, Department of Otolaryngology, St. Louis, Missouri, United States
- Joseph P. Culver
- Washington University in St. Louis, Department of Physics, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Radiology, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
31
Weisberg J, Hubbard AL, Emmorey K. Multimodal integration of spontaneously produced representational co-speech gestures: an fMRI study. LANGUAGE, COGNITION AND NEUROSCIENCE 2016; 32:158-174. [PMID: 29130054] [PMCID: PMC5675577] [DOI: 10.1080/23273798.2016.1245426]
Abstract
To examine whether more ecologically valid co-speech gesture stimuli elicit brain responses consistent with those found by studies that relied on scripted stimuli, we presented participants with spontaneously produced, meaningful co-speech gesture during fMRI scanning (n = 28). Speech presented with gesture (versus either presented alone) elicited heightened activity in bilateral posterior superior temporal, premotor, and inferior frontal regions. Within left temporal and premotor regions, but not inferior frontal regions, we identified small clusters with superadditive responses, suggesting that these discrete regions support both sensory and semantic integration. In contrast, surrounding areas and the inferior frontal gyrus may support either sensory or semantic integration. Reduced activation for speech with gesture in language-related regions indicates allocation of fewer neural resources when meaningful gestures accompany speech. Sign language experience did not affect co-speech gesture activation. Overall, our results indicate that scripted stimuli have minimal confounding influences; however, they may miss subtle superadditive effects.
Affiliation(s)
- Jill Weisberg
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Amy Lynn Hubbard
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, 6495 Alvarado Rd., Suite 200, San Diego, CA 92120, USA
32
Sources of mu activity and their functional connectivity in perceiving complexities in reciprocal social interactive motion: An exploratory study using the 'Namaste' task. Asian J Psychiatr 2016; 22:6-14. [PMID: 27520887] [DOI: 10.1016/j.ajp.2016.03.003]
Abstract
Cognitive processes underlying reciprocal social interactions are understood by the mechanism of embodiment, which is closely related to the mirror neuron system. Electroencephalographic (EEG) mu activity is a neural marker of the mirror neuron system. This study investigated mu activity, the localization of its sources, and their functional connectivity, induced while watching reciprocal social interactive motion across various degrees of complexity. Eighteen healthy participants underwent high-resolution EEG recording using 256 channels while they watched a specifically designed, culture-specific video task that showed two persons interacting socially using body gestures. Task complexity was determined by (1) whether there was an identical gestural response or a non-identical one; (2) whether the participant watched two persons interacting or was virtually involved in the interaction. Source localization and functional connectivity analysis was conducted for mu activity across various tasks. We also correlated mu activity and functional connectivity measures with serum BDNF. We found that spectral densities in various brain sources of mu activity and their increased functional connectivity distinguished identical and non-identical reciprocal expression observations, while mu suppression alone did not discriminate various degrees of complexity. These findings might have important implications in the understanding of mechanisms underlying mirror neuron dysfunction in various psychiatric disorders.
33
García AM, Ibáñez A. A touch with words: Dynamic synergies between manual actions and language. Neurosci Biobehav Rev 2016; 68:59-95. [PMID: 27189784] [DOI: 10.1016/j.neubiorev.2016.04.022]
Abstract
Manual actions are a hallmark of humanness. Their underlying neural circuitry gives rise to species-specific skills and interacts with language processes. In particular, multiple studies show that hand-related expressions - verbal units evoking manual activity - variously affect concurrent manual actions, yielding apparently controversial results (interference, facilitation, or null effects) in varied time windows. Through a systematic review of 108 experiments, we show that such effects are driven by several factors, such as the level of verbal processing, action complexity, and the time-lag between linguistic and motor processes. We reconcile key empirical patterns by introducing the Hand-Action-Network Dynamic Language Embodiment (HANDLE) model, an integrative framework based on neural coupling dynamics and predictive-coding principles. To conclude, we assess HANDLE against the backdrop of other action-cognition theories, illustrate its potential applications to understand high-level deficits in motor disorders, and discuss key challenges for further development. In sum, our work aligns with the 'pragmatic turn', moving away from passive and static representationalist perspectives to a more dynamic, enactive, and embodied conceptualization of cognitive processes.
Affiliation(s)
- Adolfo M García
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Cognitive and Translational Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Faculty of Elementary and Special Education (FEEyE), National University of Cuyo (UNCuyo), Mendoza, Argentina
- Agustín Ibáñez
- Laboratory of Experimental Psychology and Neuroscience (LPEN), Institute of Cognitive and Translational Neuroscience (INCyT), INECO Foundation, Favaloro University, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Adolfo Ibáñez University, Santiago de Chile, Chile; Centre of Excellence in Cognition and its Disorders, Australian Research Council (ACR), Sydney, Australia
34
Bremner P, Leonards U. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech. Front Psychol 2016; 7:183. [PMID: 26925010] [PMCID: PMC4756113] [DOI: 10.3389/fpsyg.2016.00183]
Abstract
Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.
Affiliation(s)
- Paul Bremner
- Bristol Robotics Laboratory, University of the West of England, Bristol, UK
- Ute Leonards
- School of Experimental Psychology, University of Bristol, Bristol, UK