1. Clough S, Padilla VG, Brown-Schmidt S, Duff MC. Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia 2023;189:108665. [PMID: 37619936] [PMCID: PMC10592037] [DOI: 10.1016/j.neuropsychologia.2023.108665]
Abstract
PURPOSE Real-world communication is situated in rich multimodal contexts containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying "He searched for a new recipe" while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury (TBI) across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture, and whether information from gesture persists across delays. METHODS 60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 minutes later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., "He searched for a new recipe"), a Gesture Match (e.g., "He searched for a new recipe online"), or Other (e.g., "He looked for a new recipe"). We also examined whether participants produced representative gestures themselves when retelling these details. RESULTS Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture, and to produce representative gestures themselves, one week later compared to immediately after hearing the story. CONCLUSION We demonstrated that speech-gesture integration in narrative retellings is intact after TBI. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
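As a concrete illustration of the three-way coding scheme, the sketch below applies a toy keyword-matching rule to retold details. This is only an approximation for illustration; the abstract does not describe how coding was operationalised, and the phrase lists here are hypothetical, not the study's stimuli.

```python
# Toy illustration of the Speech Match / Gesture Match / Other coding scheme.
# Phrase lists are hypothetical examples, not the study's actual stimuli.

SPEECH_PHRASES = {"searched for a new recipe"}            # wording given in speech
GESTURE_PHRASES = {"online", "typed", "on the computer"}  # info conveyed only by gesture

def code_retelling(utterance: str) -> str:
    """Classify one retold key detail as Gesture Match, Speech Match, or Other."""
    text = utterance.lower()
    # Gesture Match takes precedence: the retelling contains gesture-only information.
    if any(phrase in text for phrase in GESTURE_PHRASES):
        return "Gesture Match"
    # Speech Match: the retelling reproduces the spoken wording without gesture info.
    if any(phrase in text for phrase in SPEECH_PHRASES):
        return "Speech Match"
    return "Other"

print(code_retelling("He searched for a new recipe online"))  # Gesture Match
print(code_retelling("He searched for a new recipe"))         # Speech Match
print(code_retelling("He looked for a new recipe"))           # Other
```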
Affiliation(s)
- Sharice Clough: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Victoria-Grace Padilla: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Sarah Brown-Schmidt: Department of Psychology and Human Development, Vanderbilt University, United States
- Melissa C Duff: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
2. Tsai MJ. Dyadic Conversation between Mandarin-Chinese-Speaking Healthy Older Adults: From Analyses of Conversation Turns and Speaking Roles. Behav Sci (Basel) 2023;13:134. [PMID: 36829363] [PMCID: PMC9952709] [DOI: 10.3390/bs13020134]
Abstract
Older adults' daily conversations with other older adults enable them to connect to their surrounding communities and improve their friendships. However, typical aging processes and fluctuations in family caregiving may change how they converse. The purpose of this study was to explore the quantitative contributions of conversation turns (CTs) and speaking roles (SRs) in Mandarin-Chinese-speaking conversation dyads between mutually familiar healthy older adults (HOAs). A total of 20 HOAs aged 65 or over were recruited. Each dyad conversed for ten minutes once a week for five weeks, five sessions per dyad, for a total of 50 sessions. The frequency and percentage of the coded CTs and SRs contributed by each HOA were individually tallied and calculated. Quantitatively symmetrical contributions of CTs and SRs occurred in Mandarin-Chinese-speaking conversation dyads between mutually familiar HOAs. Although typical aging processes might change conversations, both HOAs in each Mandarin-Chinese-speaking dyad served as active interlocutors, taking CTs and SRs to co-construct the processes and content of their dyadic conversation. Sufficient knowledge of conversational co-construction might give them more supportive environments in which to connect to surrounding communities and improve their friendships.
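The symmetry claim rests on simple per-speaker tallies. The snippet below shows the arithmetic on an invented turn sequence; the data structure and numbers are illustrative, not taken from the study.

```python
from collections import Counter

# Hypothetical coded turns for one 10-minute dyadic session:
# each entry records which partner took that conversation turn.
turns = ["A", "B", "A", "A", "B", "B", "A", "B", "B", "A"]

counts = Counter(turns)
total = sum(counts.values())
for speaker, n in sorted(counts.items()):
    print(f"Speaker {speaker}: {n} turns ({100 * n / total:.0f}%)")
# A roughly 50/50 split would reflect the symmetrical contributions reported above.
```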
Affiliation(s)
- Meng-Ju Tsai: Department of Speech-Language Pathology and Audiology, Chung Shan Medical University, Taichung City 402, Taiwan; Speech and Language Therapy Room, Chung Shan Medical University Hospital, Taichung City 402, Taiwan
3. van Nispen K, Sekine K, van der Meulen I, Preisig BC. Gesture in the eye of the beholder: An eye-tracking study on factors determining the attention for gestures produced by people with aphasia. Neuropsychologia 2022;174:108315. [DOI: 10.1016/j.neuropsychologia.2022.108315]
4. What is Functional Communication? A Theoretical Framework for Real-World Communication Applied to Aphasia Rehabilitation. Neuropsychol Rev 2022;32:937-973. [PMID: 35076868] [PMCID: PMC9630202] [DOI: 10.1007/s11065-021-09531-2]
Abstract
Aphasia is an impairment of language caused by acquired brain damage, such as stroke or traumatic brain injury, that affects a person's ability to communicate effectively. The aim of rehabilitation in aphasia is to improve everyday communication, enhancing an individual's ability to function in their day-to-day life. For that reason, a thorough understanding of naturalistic communication and its underlying mechanisms is imperative. The field of aphasiology currently lacks an agreed, comprehensive, theoretically founded definition of communication; instead, multiple disparate interpretations of functional communication are used. We argue that this makes it nearly impossible to validly and reliably assess a person's communicative performance, to target this behaviour through therapy, and to measure improvements post-therapy. In this article we propose a structured, theoretical approach to defining the concept of functional communication. We argue for a view of communication as “situated language use”, borrowed from empirical psycholinguistic studies with non-brain-damaged adults. This framework defines language use as (1) interactive, (2) multimodal, and (3) contextual. Existing research on each component of the framework, from non-brain-damaged adults and people with aphasia, is reviewed, and the consequences of adopting this approach to assessment and therapy for aphasia rehabilitation are discussed. The aim of this article is to encourage a more systematic, comprehensive approach to the study and treatment of situated language use in aphasia.
5. Macuch Silva V, Holler J, Ozyurek A, Roberts SG. Multimodality and the origin of a novel communication system in face-to-face interaction. R Soc Open Sci 2020;7:182056. [PMID: 32218922] [PMCID: PMC7029942] [DOI: 10.1098/rsos.182056]
Abstract
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared with vocalization, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
Affiliation(s)
- Judith Holler: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Asli Ozyurek: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands; Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Seán G. Roberts: Department of Archaeology and Anthropology (excd.lab), University of Bristol, Bristol, UK
6. Preisig BC, Eggenberger N, Cazzoli D, Nyffeler T, Gutbrod K, Annoni JM, Meichtry JR, Nef T, Müri RM. Multimodal Communication in Aphasia: Perception and Production of Co-speech Gestures During Face-to-Face Conversation. Front Hum Neurosci 2018;12:200. [PMID: 29962942] [PMCID: PMC6010555] [DOI: 10.3389/fnhum.2018.00200]
Abstract
The role of nonverbal communication in patients with post-stroke language impairment (aphasia) is not yet fully understood. This study investigated how aphasic patients perceive and produce co-speech gestures during face-to-face interaction, and whether distinct brain lesions would predict the frequency of spontaneous co-speech gesturing. For this purpose, we recorded samples of conversations in patients with aphasia and healthy participants. Gesture perception was assessed by means of a head-mounted eye-tracking system, and the produced co-speech gestures were coded according to a linguistic classification system. The main results are that meaning-laden gestures (e.g., iconic gestures representing object shapes) are more likely to attract visual attention than meaningless hand movements, and that patients with aphasia are more likely to fixate co-speech gestures overall than healthy participants. This implies that patients with aphasia may benefit from the multimodal information provided by co-speech gestures. On the level of co-speech gesture production, we found that patients with damage to the anterior part of the arcuate fasciculus showed a higher frequency of meaning-laden gestures. This area lies in close vicinity to the premotor cortex and is considered to be important for speech production. This may suggest that the use of meaning-laden gestures depends on the integrity of patients’ speech production abilities.
Affiliation(s)
- Basil C Preisig: Perception and Eye Movement Laboratory, Department of Neurology and Clinical Research, University of Bern Inselspital, Bern, Switzerland; Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Noëmi Eggenberger: Perception and Eye Movement Laboratory, Department of Neurology and Clinical Research, University of Bern Inselspital, Bern, Switzerland
- Dario Cazzoli: Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Thomas Nyffeler: Perception and Eye Movement Laboratory, Department of Neurology and Clinical Research, University of Bern Inselspital, Bern, Switzerland; Center of Neurology and Neurorehabilitation, Luzerner Kantonsspital, Luzern, Switzerland
- Klemens Gutbrod: University Neurorehabilitation Clinics, Department of Neurology, University of Bern Inselspital, Bern, Switzerland
- Jean-Marie Annoni: Neurology Unit, Laboratory for Cognitive and Neurological Sciences, Department of Medicine, Faculty of Science, University of Fribourg, Fribourg, Switzerland
- Jurka R Meichtry: Perception and Eye Movement Laboratory, Department of Neurology and Clinical Research, University of Bern Inselspital, Bern, Switzerland; University Neurorehabilitation Clinics, Department of Neurology, University of Bern Inselspital, Bern, Switzerland
- Tobias Nef: Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- René M Müri: Perception and Eye Movement Laboratory, Department of Neurology and Clinical Research, University of Bern Inselspital, Bern, Switzerland; Gerontechnology and Rehabilitation Group, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; University Neurorehabilitation Clinics, Department of Neurology, University of Bern Inselspital, Bern, Switzerland
7. Cocks N, Byrne S, Pritchard M, Morgan G, Dipper L. Integration of speech and gesture in aphasia. Int J Lang Commun Disord 2018;53:584-591. [PMID: 29411476] [DOI: 10.1111/1460-6984.12372]
Abstract
BACKGROUND Information from speech and gesture is often integrated to comprehend a message. This integration process requires the appropriate allocation of cognitive resources to both the gesture and speech modalities. People with aphasia are likely to find integration of gesture and speech difficult, due to a reduction in cognitive resources, a difficulty with resource allocation, or a combination of the two. Despite this difficulty being likely, empirical evidence describing it is limited: it was found in a single case study by Cocks et al. in 2009 and is replicated here with a greater number of participants. AIMS To determine whether individuals with aphasia have difficulties understanding messages in which they have to integrate speech and gesture. METHODS & PROCEDURES Thirty-one participants with aphasia (PWA) and 30 control participants watched videos of an actor communicating a message in three different conditions: verbal only, gesture only, and verbal and gesture combined. The message related to an action in which the name of the action (e.g., 'eat') was provided verbally and the manner of the action (e.g., hands positioned as though eating a burger) was provided gesturally. Participants then selected the picture that best matched the message from a choice of four pictures representing a gesture match only (G match), a verbal match only (V match), an integrated verbal-gesture match (Target), and an unrelated foil (UR). To quantify the gain that participants obtained from integrating gesture and speech, a measure of multimodal gain (MMG) was calculated. OUTCOMES & RESULTS The PWA were less able to integrate gesture and speech than the control participants and had significantly lower MMG scores. When the PWA had difficulty integrating, they more frequently selected the verbal match. CONCLUSIONS & IMPLICATIONS The findings suggest that people with aphasia can have difficulty integrating speech and gesture to derive meaning. Therefore, when encouraging communication partners to use gesture alongside language when communicating with people with aphasia, education regarding the types of gestures that would facilitate understanding is recommended.
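The abstract does not give the formula for multimodal gain (MMG). Purely as an assumption for illustration, the sketch below computes one common form of gain score, combined-condition accuracy minus the better of the two unimodal accuracies; the paper's exact definition may differ.

```python
def multimodal_gain(acc_combined: float, acc_verbal: float, acc_gesture: float) -> float:
    """Assumed MMG definition: combined-condition accuracy minus the best
    unimodal accuracy. The paper's exact formula may differ."""
    return acc_combined - max(acc_verbal, acc_gesture)

# Hypothetical proportions correct in the three conditions:
gain = multimodal_gain(acc_combined=0.90, acc_verbal=0.70, acc_gesture=0.60)
print(f"MMG = {gain:.2f}")  # MMG = 0.20: accuracy gained by integrating modalities
```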
Affiliation(s)
- Naomi Cocks: School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, WA, Australia
- Suzanne Byrne: Health Service Executive, Dublin North East, Dublin, Ireland
- Madeleine Pritchard: Division of Language and Communication Science, City, University of London, UK
- Gary Morgan: Division of Language and Communication Science, City, University of London, UK
- Lucy Dipper: Division of Language and Communication Science, City, University of London, UK
8. Wortman-Jutt S, Edwards DJ. Transcranial Direct Current Stimulation in Poststroke Aphasia Recovery. Stroke 2017;48:820-826. [PMID: 28174328] [DOI: 10.1161/strokeaha.116.015626]
Affiliation(s)
- Susan Wortman-Jutt: Burke Rehabilitation Hospital, White Plains, NY
- Dylan J Edwards: Neuromodulation and Human Motor Control Laboratory, Burke Medical Research Institute, White Plains, NY; Department of Neurology, Weill-Cornell Medical College, New York, NY; School of Medical and Health Sciences, Edith Cowan University, Western Australia; Beth-Israel Deaconess Medical Center, Harvard Medical School, Boston, MA
Collapse
|
9. Preisig BC, Eggenberger N, Zito G, Vanbellingen T, Schumacher R, Hopfner S, Gutbrod K, Nyffeler T, Cazzoli D, Annoni JM, Bohlhalter S, Müri RM. Eye Gaze Behavior at Turn Transition: How Aphasic Patients Process Speakers' Turns during Video Observation. J Cogn Neurosci 2016;28:1613-1624. [PMID: 27243612] [DOI: 10.1162/jocn_a_00983]
Abstract
The human turn-taking system regulates the smooth and precise exchange of speaking turns during face-to-face interaction. Recent studies have investigated the processing of ongoing turns during conversation by measuring the eye movements of noninvolved observers. The findings suggest that humans shift their gaze toward the next speaker in anticipation, before the start of the next turn. Moreover, there is evidence that the ability to detect turn transitions in time relies mainly on the lexico-syntactic content of the conversation. Consequently, patients with aphasia, who often experience deficits in both semantic and syntactic processing, might have difficulty detecting turn transitions and shifting their gaze in time. To test this assumption, we presented video vignettes of natural conversations to aphasic patients and healthy controls while their eye movements were measured. The frequency and latency of event-related gaze shifts, with respect to the end of the current turn in the videos, were compared between the two groups. Our results suggest that, compared with healthy controls, aphasic patients are less likely to shift their gaze at turn transitions but do not show significantly increased gaze shift latencies. In healthy controls, but not in aphasic patients, the probability of shifting gaze at a turn transition increased when the current turn in the video had higher lexico-syntactic complexity. Furthermore, the results from voxel-based lesion symptom mapping indicate that the association between lexico-syntactic complexity and gaze shift latency in aphasic patients is predicted by brain lesions located in the posterior branch of the left arcuate fasciculus. Higher lexico-syntactic processing demands seem to reduce gaze shift probability in aphasic patients. This finding may represent missed opportunities for patients to place their contributions during everyday conversation.
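The two observer measures described here, whether a gaze shift occurred at a turn transition and how early or late it came, reduce to timestamp arithmetic. The sketch below shows one plausible derivation; the field names and timings are invented for illustration, not the study's data format.

```python
from statistics import mean

# Hypothetical trials: end time of the current turn (seconds into the video)
# and the observer's first gaze shift toward the next speaker (None = no shift).
trials = [
    {"turn_end": 3.20, "gaze_shift": 3.05},
    {"turn_end": 5.80, "gaze_shift": 6.10},
    {"turn_end": 4.50, "gaze_shift": None},
]

shifted = [t for t in trials if t["gaze_shift"] is not None]
probability = len(shifted) / len(trials)
# Negative latency = anticipatory shift, before the current turn actually ended.
latencies = [t["gaze_shift"] - t["turn_end"] for t in shifted]

print(f"gaze-shift probability: {probability:.2f}")
print(f"mean latency (s): {mean(latencies):+.2f}")
```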
Affiliation(s)
- Simone Hopfner: University Hospital Inselspital Bern; University of Bern
- René M Müri: University Hospital Inselspital Bern; University of Bern
10. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study. PLoS One 2016;11:e0146583. [PMID: 26735917] [PMCID: PMC4703302] [DOI: 10.1371/journal.pone.0146583]
Abstract
BACKGROUND Co-speech gestures are omnipresent and are a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using analysis of visual exploration behavior, the present study investigated the influence of congruence between speech and co-speech gestures on comprehension, measured as accuracy in a decision task. METHOD Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. RESULTS In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated the face significantly less and tended to fixate the gesturing hands more than controls did. CONCLUSION Co-speech gestures play an important role for aphasic patients, as they modulate comprehension. Incongruent gestures cause significant interference and impair patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.