1
Holmer E, Rönnberg J, Asutay E, Tirado C, Ekberg M. Facial mimicry interference reduces working memory accuracy for facial emotion expressions. PLoS One 2024; 19:e0306113. PMID: 38924006; PMCID: PMC11207140; DOI: 10.1371/journal.pone.0306113.
Abstract
Facial mimicry, the tendency to imitate facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry by using established behavioral procedures, and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that the supporting effect possibly is reduced when the task becomes more cognitively challenging.
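To make the scoring of the emotion-expression n-back paradigm described above concrete, here is a minimal sketch of a generic n-back accuracy scorer. The function name, emotion labels, and response coding are illustrative assumptions, not the authors' actual task implementation.

```python
# Illustrative sketch only: a generic accuracy scorer for an n-back task like
# the one described above. Labels and response coding are assumptions.

def score_nback(stimuli, responses, n):
    """Return the proportion of correctly judged trials for an n-back task.

    stimuli   : list of stimulus labels (e.g., emotion categories)
    responses : list of booleans, True if the participant signalled a match
    n         : n-back level (1 or 2 in the study above)
    """
    correct = 0
    scored = 0
    for i in range(n, len(stimuli)):          # the first n trials cannot be scored
        is_target = stimuli[i] == stimuli[i - n]
        correct += int(responses[i] == is_target)
        scored += 1
    return correct / scored if scored else float("nan")


# Hypothetical 1-back example: only the third trial repeats the previous one,
# so a "match" response there (and "no match" elsewhere) is fully correct.
trials = ["happy", "angry", "angry", "sad", "happy"]
answers = [False, False, True, False, False]
print(score_nback(trials, answers, n=1))      # -> 1.0
```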
Affiliation(s)
- Emil Holmer
  - Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
  - Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Jerker Rönnberg
  - Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
  - Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Erkin Asutay
  - Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
  - JEDI Lab, Linköping University, Linköping, Sweden
- Carlos Tirado
  - Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Mattias Ekberg
  - Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
2
Schvartz-Leyzac KC, Giordani B, Pfingst BE. Association of Aging and Cognition With Complex Speech Understanding in Cochlear-Implanted Adults: Use of a Modified National Institutes of Health (NIH) Toolbox Cognitive Assessment. JAMA Otolaryngol Head Neck Surg 2023; 149:239-246. PMID: 36701145; PMCID: PMC9880868; DOI: 10.1001/jamaoto.2022.4806.
Abstract
Importance: The association between cognitive function and outcomes in cochlear implant (CI) users is not completely understood, partly because some cognitive tests are confounded by auditory status. It is important to determine appropriate cognitive tests to use in a cohort of CI recipients.
Objective: To provide proof-of-concept for using an adapted version of the National Institutes of Health (NIH) Toolbox Cognition Battery in a cohort of patients with CIs and to explore how hearing in noise with a CI is affected by cognitive status using the adapted test.
Design, Setting, and Participants: In this prognostic study, participants listened to sentences presented in a speech-shaped background noise. Cognitive tests consisted of 7 subtests of the NIH Toolbox Cognition Battery that were adapted for hearing-impaired individuals by including written instructions and visual stimuli. Participants were prospectively recruited from and evaluated at a tertiary medical center. All participants had at least 6 months' experience with their CI.
Main Outcomes and Measures: The main outcomes were performance on the adapted cognitive test and a speech recognition in noise task.
Results: Participants were 20 adult perilingually or postlingually deafened CI users (50% male; median [range] age, 66 [26-80] years). Performance on a sentence recognition in noise task was negatively associated with the chronological age of the listener (R2 = 0.29; β = 0.16; SE = 0.06; t = 2.63; 95% CI, 0.03-0.27). Testing using the adapted version of the NIH Toolbox Cognition Battery revealed that a test of processing speed was also associated with performance, using a standardized score that accounted for contributions of other demographic factors (R2 = 0.28; 95% CI, -0.42 to -0.05).
Conclusions and Relevance: In this prognostic study, older CI users showed poorer performance on a sentence-in-noise test compared with younger users. This poorer performance was correlated with a cognitive deficit in processing speed when cognitive function was assessed using a test battery adapted for participants with hearing loss. These results provide initial proof of concept for using a standardized and adapted cognitive test battery in CI recipients.
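As an illustration of the regression statistics reported in the Results (R2, β, SE, t, 95% CI), here is a minimal ordinary-least-squares sketch on simulated data. The variable names, the assumed slope, and the noise level are assumptions; only the definitions of the reported quantities are shown.

```python
# Minimal OLS sketch on simulated data; names and values are illustrative,
# not the study's data.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(26, 80, size=20)                    # 20 hypothetical CI users
score = 0.16 * age + rng.normal(0, 5, size=20)        # assumed age effect

X = np.column_stack([np.ones_like(age), age])         # intercept + age
beta, *_ = np.linalg.lstsq(X, score, rcond=None)      # OLS estimates
resid = score - X @ beta
dof = len(age) - X.shape[1]
cov = np.linalg.inv(X.T @ X) * (resid @ resid / dof)  # coefficient covariance
se = np.sqrt(np.diag(cov))
t_age = beta[1] / se[1]
r2 = 1 - (resid @ resid) / np.sum((score - score.mean()) ** 2)
lo, hi = beta[1] - 1.96 * se[1], beta[1] + 1.96 * se[1]   # large-sample 95% CI

print(f"R2={r2:.2f}  beta={beta[1]:.2f}  SE={se[1]:.2f}  t={t_age:.2f}  "
      f"95% CI=({lo:.2f}, {hi:.2f})")
```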
Affiliation(s)
- Kara C. Schvartz-Leyzac
  - Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
  - Hearing Rehabilitation Center, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
  - Medical University of South Carolina, Charleston
- Bruno Giordani
  - Department of Psychiatry & Michigan Alzheimer’s Disease Center, University of Michigan Health Systems, Ann Arbor
- Bryan E. Pfingst
  - Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan Health Systems, Ann Arbor
3
The linguistic constraints of precision of verbal working memory. Mem Cognit 2022; 50:1464-1485. DOI: 10.3758/s13421-022-01283-5.
4
Ashori M. Working Memory-Based Cognitive Rehabilitation: Spoken Language of Deaf and Hard-of-Hearing Children. J Deaf Stud Deaf Educ 2022; 27:234-244. PMID: 35543013; DOI: 10.1093/deafed/enac007.
Abstract
This research examined the effect of the Working Memory-based Cognitive Rehabilitation (WMCR) intervention on the spoken language development of deaf and hard-of-hearing (DHH) children. In this clinical trial, 28 DHH children aged between 5 and 6 years were selected by random sampling. The participants were randomly assigned to experimental and control groups. The experimental group took part in the WMCR intervention, which comprised 11 sessions. All participants were assessed pre- and post-intervention. Data were collected with the Newsha Development Scale and analyzed through MANCOVA. The results revealed a significant difference in receptive and expressive language scores between the experimental group, which received the WMCR intervention, and the control group. The receptive and expressive language skills of the experimental group improved significantly after the intervention. The WMCR intervention is therefore an effective method for improving the spoken language skills of DHH children. These findings have critical implications for teachers, parents, and therapists in supporting young DHH children to develop their language skills.
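The analysis above used a MANCOVA on pre- and post-test language scores. As a simplified stand-in, the sketch below runs a univariate ANCOVA on one hypothetical outcome (post-test receptive language adjusted for the pre-test) with statsmodels; the data frame, column names, and scores are invented for illustration.

```python
# Simplified stand-in for the reported MANCOVA: a univariate ANCOVA on one
# hypothetical outcome. Data frame, column names, and values are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["WMCR"] * 4 + ["control"] * 4,
    "receptive_pre":  [40, 42, 39, 41, 40, 43, 38, 41],
    "receptive_post": [48, 50, 47, 49, 41, 44, 39, 42],
})

# The C(group) term tests the intervention effect after adjusting for pre-test.
model = smf.ols("receptive_post ~ receptive_pre + C(group)", data=df).fit()
print(model.params)
print(model.pvalues)
```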
Affiliation(s)
- Mohammad Ashori
  - Associate Professor, Department of Psychology and Education of People with Special Needs, Faculty of Education and Psychology, University of Isfahan, Isfahan, Iran
5
Riedl L, Nagels A, Sammer G, Choudhury M, Nonnenmann A, Sütterlin A, Feise C, Haslach M, Bitsch F, Straube B. Multimodal speech-gesture training in patients with schizophrenia spectrum disorder: Effects on quality of life and neural processing. Schizophr Res 2022; 246:112-125. PMID: 35759877; DOI: 10.1016/j.schres.2022.06.009.
Abstract
Dysfunctional social communication is one of the most stable characteristics in patients with schizophrenia spectrum disorder (SSD) and severely affects quality of life. Interpreting abstract speech and integrating nonverbal information are particularly affected. Considering the difficulty of treating communication dysfunctions with usual interventions, we investigated the possibility of applying a multimodal speech-gesture (MSG) training. In the MSG training, we offered 29 patients with SSD eight sessions (60 min each) that included perceptive and expressive tasks as well as meta-learning elements and transfer exercises. In a within-group crossover design, patients were randomized to a TAU-first (treatment as usual first, then MSG training) group (N = 20) or an MSG-first (MSG training first, then TAU only) group (N = 9), and were compared to healthy controls (N = 17). Outcomes were quality of life and related changes in the neural processing of abstract speech-gesture information, which were measured before and after training through standardized psychological questionnaires and functional magnetic resonance imaging, respectively. Pre-training, patients showed reduced quality of life compared to controls but improved significantly during the training. Strikingly, this improvement was correlated with neural activation changes in the middle temporal gyrus for the processing of abstract multimodal content. Improvement during training, self-report measures, and ratings by relatives confirmed the MSG-related changes. Together, these findings provide the first promising results for a novel multimodal speech-gesture training for patients with schizophrenia. We were able to link training-induced changes in speech-gesture processing to changes in quality of life, demonstrating the relevance of intact communication skills and gesture processing for well-being.
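The key correlational step above, relating pre-to-post improvement in quality of life to pre-to-post change in a neural measure, can be sketched as a change-score correlation. All values and variable names below are fabricated for illustration and are not the authors' data or analysis pipeline.

```python
# Change-change correlation sketch on fabricated numbers (not the study data).
import numpy as np
from scipy.stats import pearsonr

qol_pre  = np.array([52, 48, 60, 55, 47, 58, 50, 53], dtype=float)
qol_post = np.array([58, 55, 66, 57, 50, 65, 54, 60], dtype=float)
mtg_pre  = np.array([0.21, 0.18, 0.30, 0.25, 0.17, 0.28, 0.20, 0.24])  # hypothetical activation
mtg_post = np.array([0.29, 0.26, 0.38, 0.27, 0.19, 0.37, 0.25, 0.33])

# Correlate improvement in quality of life with change in activation.
r, p = pearsonr(qol_post - qol_pre, mtg_post - mtg_pre)
print(f"r = {r:.2f}, p = {p:.3f}")
```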
Affiliation(s)
- Lydia Riedl
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
  - Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg and Justus Liebig University, Giessen, Germany
- Arne Nagels
  - Department of English and Linguistics, Johannes-Gutenberg-University, Mainz, Germany
- Gebhard Sammer
  - Department of Psychiatry and Psychotherapy, Justus-Liebig-University, Giessen, Germany
- Momoko Choudhury
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Annika Nonnenmann
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Anne Sütterlin
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Chiara Feise
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Maxi Haslach
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Florian Bitsch
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
- Benjamin Straube
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University, Marburg, Germany
  - Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg and Justus Liebig University, Giessen, Germany
6
Heled E, Ohayon M. Working Memory for Faces among Individuals with Congenital Deafness. J Am Acad Audiol 2022; 33:342-348. PMID: 36446592; DOI: 10.1055/s-0042-1754369.
Abstract
Background: Studies examining face processing among individuals with congenital deafness show inconsistent results that are often accounted for by sign language skill. However, working memory for faces, as an aspect of face processing, has not yet been examined in congenital deafness.
Purpose: To explore working memory for faces among individuals with congenital deafness who are skilled in sign language.
Research Design: A quasi-experimental study of individuals with congenital deafness and a control group.
Study Sample: Sixteen individuals with congenital deafness who are skilled in sign language and 18 participants with intact hearing, matched for age and education.
Intervention: The participants performed two conditions of the N-back test in ascending difficulty (i.e., 1-back and 2-back).
Data Collection and Analysis: Levene's and Shapiro-Wilk tests were used to assess group homoscedasticity and normality, respectively. A two-way repeated measures analysis of variance was applied to compare the groups' response time and accuracy on the N-back test, and Pearson correlations were computed between response time and accuracy, and sign language skill duration.
Results: The congenital deafness group performed better than controls in response time but not in accuracy. However, an interaction effect showed that, for response time but not accuracy, this group advantage was significant in the 1-back condition but not in the 2-back condition. Further, there was a marginal effect in response time but a significant effect in accuracy showing that the 2-back condition was performed worse than the 1-back condition. No significant correlation was found between response time or accuracy and sign language skill duration.
Conclusion: The face processing advantage associated with congenital deafness is dependent on cognitive load, but sign language duration does not affect this trend. In addition, response time and accuracy are not equally sensitive to performance differences in the N-back test.
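A minimal sketch of the assumption checks named in the Data Collection and Analysis section (Levene's and Shapiro-Wilk tests) on a fabricated long-format data set. The column names, values, and the mixed-ANOVA suggestion in the final comment are assumptions, not the authors' materials.

```python
# Illustrative assumption checks on a fabricated 2 (group) x 2 (load) data set.
import pandas as pd
from scipy.stats import levene, shapiro

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8],
    "group":   ["deaf"] * 8 + ["hearing"] * 8,
    "load":    ["1-back", "2-back"] * 8,
    "rt_ms":   [712, 905, 680, 930, 650, 915, 700, 940,
                760, 950, 745, 980, 770, 990, 735, 960],
})

# Homoscedasticity across groups (1-back response times) and overall normality.
rt_deaf = df.query("group == 'deaf' and load == '1-back'")["rt_ms"]
rt_hearing = df.query("group == 'hearing' and load == '1-back'")["rt_ms"]
print(levene(rt_deaf, rt_hearing))
print(shapiro(df["rt_ms"]))

# A 2 (group, between) x 2 (load, within) mixed ANOVA on rt_ms would follow,
# e.g. with pingouin.mixed_anova, before computing the Pearson correlations.
```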
Affiliation(s)
- Eyal Heled
  - Department of Psychology, Ariel University, Ariel, Israel
  - Department of Neurological Rehabilitation, Sheba Medical Center, Ramat-Gan, Israel
- Maayon Ohayon
  - Department of Psychology, Ariel University, Ariel, Israel
7
Radošević T, Malaia EA, Milković M. Predictive Processing in Sign Languages: A Systematic Review. Front Psychol 2022; 13:805792. PMID: 35496220; PMCID: PMC9047358; DOI: 10.3389/fpsyg.2022.805792.
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCO host) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review and forward citations to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors with expertise in sign language processing and a variety of research methods reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed the motor simulation proposals, neural basis of PP, as well as the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one’s sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
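To illustrate the screening logic of a PRISMA-style review such as the one above, here is a toy sketch that keeps only records meeting all five inclusion criteria. The record fields, example titles, and criterion names are invented; this is an illustration of the selection step, not the authors' procedure, which was performed by the reviewers themselves.

```python
# Toy sketch of criterion-based screening; record fields and titles are invented.
records = [
    {"title": "Prediction in ASL sentence processing", "original_work": True,
     "peer_reviewed": True, "language": "English",
     "topic_pp_or_entrainment": True, "sign_language": True},
    {"title": "Review of spoken-word prediction", "original_work": False,
     "peer_reviewed": True, "language": "English",
     "topic_pp_or_entrainment": True, "sign_language": False},
]

CRITERIA = ("original_work", "peer_reviewed", "topic_pp_or_entrainment", "sign_language")

def eligible(record):
    # A record is included only if it is in English and meets every other criterion.
    return record["language"] == "English" and all(record[c] for c in CRITERIA)

included = [r["title"] for r in records if eligible(r)]
print(included)   # only the first record survives screening
```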
Affiliation(s)
- Tomislav Radošević
  - Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia
  - Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, United States
- Marina Milković
  - Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
8
Heled E, Ohayon M, Oshri O. Working memory in intact modalities among individuals with sensory deprivation. Heliyon 2022; 8:e09558. PMID: 35706957; PMCID: PMC9189883; DOI: 10.1016/j.heliyon.2022.e09558.
9
Andin J, Holmer E, Schönström K, Rudner M. Working Memory for Signs with Poor Visual Resolution: fMRI Evidence of Reorganization of Auditory Cortex in Deaf Signers. Cereb Cortex 2021; 31:3165-3176. PMID: 33625498; PMCID: PMC8196262; DOI: 10.1093/cercor/bhaa400.
Abstract
Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual–verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.
Affiliation(s)
- Josefine Andin
  - Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden
  - Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden
- Emil Holmer
  - Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden
  - Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden
  - Center for Medical Image Science and Visualization, Linköping, Sweden
- Mary Rudner
  - Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden
  - Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden
  - Center for Medical Image Science and Visualization, Linköping, Sweden
10
Momsen J, Gordon J, Wu YC, Coulson S. Event related spectral perturbations of gesture congruity: Visuospatial resources are recruited for multimodal discourse comprehension. Brain Lang 2021; 216:104916. PMID: 33652372; PMCID: PMC11296609; DOI: 10.1016/j.bandl.2021.104916.
Abstract
Here we examine the role of visuospatial working memory (WM) during the comprehension of multimodal discourse with co-speech iconic gestures. EEG was recorded as healthy adults encoded either a sequence of one (low load) or four (high load) dot locations on a grid and rehearsed them until a free recall response was collected later in the trial. During the rehearsal period of the WM task, participants observed videos of a speaker describing objects in which half of the trials included semantically related co-speech gestures (congruent), and the other half included semantically unrelated gestures (incongruent). Discourse processing was indexed by oscillatory EEG activity in the alpha and beta bands during the videos. Across all participants, effects of speech and gesture incongruity were more evident in low load trials than in high load trials. Effects were also modulated by individual differences in visuospatial WM capacity. These data suggest visuospatial WM resources are recruited in the comprehension of multimodal discourse.
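Alpha- and beta-band activity of the kind analyzed above can be illustrated with a simplified band-power estimate on a synthetic signal. The paper used event-related spectral perturbations (a time-frequency analysis); the sketch below only shows the band definitions, and the sampling rate and signal composition are assumptions.

```python
# Simplified band-power sketch on a synthetic "EEG" signal; not the ERSP pipeline.
import numpy as np
from scipy.signal import welch

fs = 250                                        # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * 10 * t)               # 10 Hz "alpha" component
       + 0.5 * np.sin(2 * np.pi * 20 * t)       # 20 Hz "beta" component
       + 0.1 * rng.normal(size=t.size))         # background noise

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # power spectral density

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()                     # mean PSD within the band

print("alpha (8-12 Hz):", band_power(8, 12))
print("beta (13-30 Hz):", band_power(13, 30))
```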
Affiliation(s)
- Jacob Momsen
  - Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
- Jared Gordon
  - Cognitive Science Department, UC San Diego, United States
- Ying Choon Wu
  - Swartz Center for Computational Neuroscience, UC San Diego, United States
- Seana Coulson
  - Joint Doctoral Program Language and Communicative Disorders, San Diego State University and UC San Diego, United States
  - Cognitive Science Department, UC San Diego, United States
11
Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. J Speech Lang Hear Res 2021; 64:359-370. PMID: 33439747; DOI: 10.1044/2020_jslhr-20-00007.
Abstract
Purpose: The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. However, if there is a match between the output of rapid automatic multimodal binding of phonology and SLTM/ELTM representations, language processing continues rapidly and implicitly.
Method and Results: In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This is discussed further in the review.
Conclusions: Given the literature on ELTM decline as a precursor of dementia and the fact that the risk for Alzheimer's disease increases substantially over time due to hearing loss, it is possible that lowered ELTM due to hearing loss and disuse is part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Affiliation(s)
- Jerker Rönnberg
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
12
Riedl L, Nagels A, Sammer G, Straube B. A Multimodal Speech-Gesture Training Intervention for Patients With Schizophrenia and Its Neural Underpinnings - the Study Protocol of a Randomized Controlled Pilot Trial. Front Psychiatry 2020; 11:110. PMID: 32210849; PMCID: PMC7068208; DOI: 10.3389/fpsyt.2020.00110.
Abstract
Dysfunctional social communication is one of the most stable characteristics in patients with schizophrenia and also affects quality of life. Interpreting abstract speech and integrating nonverbal modalities are particularly affected. Considering the impact of communication on social life, and the failure of usual treatment to address communication dysfunctions, we will investigate the possibility of improving verbal and non-verbal communication in schizophrenia by applying a multimodal speech-gesture (MSG) training. Here we describe the newly developed MSG training program and the study design for its first clinical investigation. The intervention contains perceptive rating tasks (match/mismatch of sentence and gesture), memory tasks (n-back tasks), and imitation and productive tasks (e.g., SG fluency, similar to verbal fluency but with words accompanied by gestures). In addition, every session offers information about gesture as a meta-learning element, as well as homework to support transfer to everyday life. The MSG training intervention comprises eight sessions (60 min each). The first pilot study is currently being conducted as a single-center, randomized controlled trial of the speech-gesture intervention versus a wait-list control, with a follow-up. Outcomes are measured through pre-post fMRI and standardized psychological questionnaires, comparing two subject groups (30 patients with schizophrenia and 30 healthy controls). Patients and healthy controls are randomized into two intervention groups (20 in the wait-training group and 10 in the training-follow-up group). This study design will allow us to evaluate the effect of the MSG training intervention at behavioral and neural levels.
Clinical trial registration: DRKS.de, identifier DRKS00015118.
Affiliation(s)
- Lydia Riedl
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Arne Nagels
  - Department of English and Linguistics, Johannes-Gutenberg-University, Mainz, Germany
- Gebhard Sammer
  - Department of Psychiatry and Psychotherapy, Justus-Liebig-University, Gießen, Germany
- Benjamin Straube
  - Translational Neuroimaging Lab, Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
13
Rudner M, Danielsson H, Lyxell B, Lunner T, Rönnberg J. Visual Rhyme Judgment in Adults With Mild-to-Severe Hearing Loss. Front Psychol 2019; 10:1149. PMID: 31191388; PMCID: PMC6546845; DOI: 10.3389/fpsyg.2019.01149.
Abstract
Adults with poorer peripheral hearing have slower phonological processing speed measured using visual rhyme tasks, and it has been suggested that this is due to fading of phonological representations stored in long-term memory. Representations of both vowels and consonants are likely to be important for determining whether or not two printed words rhyme. However, it is not known whether the relation between phonological processing speed and hearing loss is specific to the lower frequency ranges which characterize vowels or higher frequency ranges that characterize consonants. We tested the visual rhyme ability of 212 adults with hearing loss. As in previous studies, we found that rhyme judgments were slower and less accurate when there was a mismatch between phonological and orthographic information. A substantial portion of the variance in the speed of making correct rhyme judgment decisions was explained by lexical access speed. Reading span, a measure of working memory, explained further variance in match but not mismatch conditions, but no additional variance was explained by auditory variables. This pattern of findings suggests possible reliance on a lexico-semantic word-matching strategy for solving the rhyme judgment task. Future work should investigate the relation between adoption of a lexico-semantic strategy during phonological processing tasks and hearing aid outcome.
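The variance-partitioning logic described above (lexical access speed entered first, reading span added second) can be sketched as a two-step hierarchical regression. The data frame, variable names, and values below are fabricated for illustration only.

```python
# Two-step hierarchical regression sketch on fabricated data (not the study data).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rhyme_rt":      [820, 760, 905, 840, 700, 880, 790, 860, 745, 815],
    "lexical_speed": [610, 560, 700, 640, 520, 690, 600, 660, 555, 620],
    "reading_span":  [28, 34, 22, 26, 38, 21, 30, 24, 36, 29],
})

step1 = smf.ols("rhyme_rt ~ lexical_speed", data=df).fit()                 # lexical access only
step2 = smf.ols("rhyme_rt ~ lexical_speed + reading_span", data=df).fit()  # add working memory

print(f"Step 1 R2 = {step1.rsquared:.2f}")
print(f"Step 2 R2 = {step2.rsquared:.2f} "
      f"(incremental R2 for reading span = {step2.rsquared - step1.rsquared:.2f})")
```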
Affiliation(s)
- Mary Rudner
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Henrik Danielsson
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lyxell
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
  - Department of Special Needs Education, University of Oslo, Oslo, Norway
- Thomas Lunner
  - Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Jerker Rönnberg
  - Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
14
Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247-261. DOI: 10.1080/14992027.2018.1551631.
Affiliation(s)
- Jerker Rönnberg
  - Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer
  - Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner
  - Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden