1. Nirme J, Gulz A, Haake M, Gullberg M. Early or synchronized gestures facilitate speech recall – a study based on motion capture data. Front Psychol 2024; 15:1345906. PMID: 38596333; PMCID: PMC11002957; DOI: 10.3389/fpsyg.2024.1345906.
Abstract
Introduction: Temporal co-ordination between speech and gestures has been thoroughly studied in natural production. In most cases gesture strokes precede or coincide with the stressed syllable in words that they are semantically associated with. Methods: To understand whether processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing or eliminating individual gestures on memory for words in an experimental study in which 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. The strength of recall was operationalized as the inclusion of the target word in the free recall. Results: Both eliminated and delayed gesture strokes resulted in reduced recall rates compared to synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and stressed syllables in target words, the greater the negative effect on recall. Discussion: These results indicate that speech-gesture synchrony affects memory for speech, and that temporal patterns that are common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion capture-based 3D-animated speakers to create an experimental paradigm for the study of speech-gesture comprehension.
Affiliations
- Jens Nirme: Lund University Cognitive Science, Lund, Sweden
- Agneta Gulz: Lund University Cognitive Science, Lund, Sweden
- Marianne Gullberg: Centre for Languages and Literature and Lund University Humanities Lab, Lund University, Lund, Sweden
2. Clough S, Padilla VG, Brown-Schmidt S, Duff MC. Intact speech-gesture integration in narrative recall by adults with moderate-severe traumatic brain injury. Neuropsychologia 2023; 189:108665. PMID: 37619936; PMCID: PMC10592037; DOI: 10.1016/j.neuropsychologia.2023.108665.
Abstract
Purpose: Real-world communication is situated in rich multimodal contexts, containing speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying "He searched for a new recipe" while making a typing gesture). We examine the narrative retellings of participants with and without moderate-severe traumatic brain injury (TBI) across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and whether information from gesture persists across delays. Methods: 60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 min later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., "He searched for a new recipe"), a Gesture Match (e.g., "He searched for a new recipe online"), or Other (e.g., "He looked for a new recipe"). We also examined whether participants produced representative gestures themselves when retelling these details. Results: Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture and to produce representative gestures themselves one week later compared to immediately after hearing the story. Conclusion: We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.
Affiliations
- Sharice Clough: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Victoria-Grace Padilla: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Sarah Brown-Schmidt: Department of Psychology and Human Development, Vanderbilt University, United States
- Melissa C Duff: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
3. Asalıoğlu EN, Göksun T. The role of hand gestures in emotion communication: Do type and size of gestures matter? Psychol Res 2023; 87:1880-1898. PMID: 36436110; DOI: 10.1007/s00426-022-01774-9.
Abstract
We communicate emotions in a multimodal way, yet non-verbal emotion communication is a relatively understudied area of research. In three experiments, we investigated the role of gesture characteristics (e.g., type, size in space) in individuals' processing of emotional content. In Experiment 1, participants were asked to rate the emotional intensity of emotional narratives in videoclips containing either iconic or beat gestures. Participants in the iconic gesture condition rated the emotional intensity higher than participants in the beat gesture condition. In Experiment 2, the size of gestures and its interaction with gesture type were investigated in a within-subjects design. Participants again rated the emotional intensity of emotional narratives in the videoclips. Although individuals overall rated narrow gestures as more emotionally intense than wider gestures, there was no effect of gesture type and no interaction between gesture size and type. Experiment 3 was conducted to check whether the findings of Experiment 2 were due to viewing gestures in all videoclips. We compared the gesture and no-gesture (i.e., speech only) conditions and showed that there was no difference between them in emotional ratings. However, we could not replicate the gesture size findings of Experiment 2. Overall, these findings indicate the importance of examining gesture's role in emotional contexts and show that gesture characteristics such as size can be considered in nonverbal communication.
Affiliations
- Esma Nur Asalıoğlu: Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Tilbe Göksun: Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
4. Arbona E, Seeber KG, Gullberg M. The role of manual gestures in second language comprehension: a simultaneous interpreting experiment. Front Psychol 2023; 14:1188628. PMID: 37441333; PMCID: PMC10333536; DOI: 10.3389/fpsyg.2023.1188628.
Abstract
Manual gestures and speech form a single integrated system during native language comprehension. However, it remains unclear whether this holds for second language (L2) comprehension, and more specifically for simultaneous interpreting (SI), which involves comprehension in one language and simultaneous production in another. In a combined mismatch and priming paradigm, we presented Swedish speakers fluent in L2 English with multimodal stimuli in which speech was congruent or incongruent with a gesture. A picture prime was displayed before the stimuli. Participants had to decide whether the video was related to the prime, focusing either on the auditory or on the visual information. Participants performed the task either during passive viewing or during SI into their L1 Swedish (order counterbalanced). Incongruent stimuli yielded longer reaction times than congruent stimuli, during both viewing and interpreting. Visual and audio targets were processed equally easily in both activities. However, in both activities incongruent speech was more disruptive for gesture processing than incongruent gesture was for speech processing. Thus, the data only partly support the expected mutual and obligatory interaction of gesture and speech in L2 comprehension. Interestingly, there were no differences between activities, suggesting that the language comprehension component in SI shares features with other (L2) comprehension tasks.
Affiliations
- Eléonore Arbona: Faculty of Translation and Interpreting, University of Geneva, Geneva, Switzerland
- Kilian G. Seeber: Faculty of Translation and Interpreting, University of Geneva, Geneva, Switzerland
- Marianne Gullberg: Centre for Languages and Literature and Lund University Humanities Lab, Lund University, Lund, Sweden
5. Gestures and pauses to help thought: hands, voice, and silence in the tourist guide's speech. Cogn Process 2023; 24:25-41. PMID: 36495353; DOI: 10.1007/s10339-022-01116-y.
Abstract
In the body of research on the relationship between gesture and speech, some models propose that they form an integrated system while others attribute to gestures a compensatory role in communication. This study addresses the gesture-speech relationship by taking disfluency phenomena as a case study. As part of a project aimed at designing virtual agents to be employed in museums, we analysed the communicative behavior of tourist guides. Results reveal that gesturing is more frequent during speech than during pauses. Moreover, when comparing the types of gestures with the types of pauses they co-occur with, non-communicative gestures (idles and manipulators) turn out to be more frequent during pauses, whereas communicatively meaningful gestures more often co-occur with speech. We discuss these findings as relevant for a theoretical model viewing speech and gesture as an integrated system.
6. Clough S, Hilverman C, Brown-Schmidt S, Duff MC. Evidence of Audience Design in Amnesia: Adaptation in Gesture but Not Speech. Brain Sci 2022; 12:1082. PMID: 36009145; PMCID: PMC9405987; DOI: 10.3390/brainsci12081082.
Abstract
Speakers design communication for their audience, providing more information in both speech and gesture when their listener is naïve to the topic. We test whether the hippocampal declarative memory system contributes to multimodal audience design. The hippocampus, while traditionally linked to episodic and relational memory, has also been linked to the ability to imagine the mental states of others and use language flexibly. We examined the speech and gesture use of four patients with hippocampal amnesia when describing how to complete everyday tasks (e.g., how to tie a shoe) to an imagined child listener and an adult listener. Although patients with amnesia did not increase their total number of words and instructional steps for the child listener, they did produce representational gestures at significantly higher rates for the imagined child compared to the adult listener. They also gestured at similar frequencies to neurotypical peers, suggesting that hand gesture can be a meaningful communicative resource, even in the case of severe declarative memory impairment. We discuss the contributions of multiple memory systems to multimodal audience design and the potential of gesture to act as a window into the social cognitive processes of individuals with neurologic disorders.
Affiliations
- Sharice Clough: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Caitlin Hilverman: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Qntfy Corporation, Arlington, VA 22209, USA
- Sarah Brown-Schmidt: Department of Psychology and Human Development, Vanderbilt University, Nashville, TN 37235, USA
- Melissa C. Duff: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
7. van Nispen K, Sekine K, van der Meulen I, Preisig BC. Gesture in the eye of the beholder: An eye-tracking study on factors determining the attention for gestures produced by people with aphasia. Neuropsychologia 2022; 174:108315. DOI: 10.1016/j.neuropsychologia.2022.108315.
8. Krason A, Fenton R, Varley R, Vigliocco G. The role of iconic gestures and mouth movements in face-to-face communication. Psychon Bull Rev 2022; 29:600-612. PMID: 34671936; PMCID: PMC9038814; DOI: 10.3758/s13423-021-02009-5.
Abstract
Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word we also measured the informativeness of the mouth movements from a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent) suggesting an enhancement when both cues are present relative to just one. We also observed (a trend) that more informative mouth movements speeded up word recognition across clarity conditions, but only when the gestures were absent. We conclude that listeners use and dynamically weight the informativeness of gestures and mouth movements available during face-to-face communication.
Affiliations
- Anna Krason: Division of Psychology and Language Science, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Rebecca Fenton: Division of Psychology and Language Science, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Rosemary Varley: Division of Psychology and Language Science, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Gabriella Vigliocco: Division of Psychology and Language Science, University College London, 26 Bedford Way, London, WC1H 0AP, UK
9. Kandana Arachchige KG, Blekic W, Simoes Loureiro I, Lefebvre L. Covert Attention to Gestures Is Sufficient for Information Uptake. Front Psychol 2021; 12:776867. PMID: 34917002; PMCID: PMC8669744; DOI: 10.3389/fpsyg.2021.776867.
Abstract
Numerous studies have explored the benefit of iconic gestures in speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating the gesture is required for information extraction. Four types of gestures (i.e., semantically and syntactically incongruent iconic gestures, meaningless configurations, and congruent iconic gestures) were presented in a sentence context in three different listening conditions (i.e., clear, partly degraded or fully degraded speech). Using eye tracking technology, participants' gaze was recorded while they watched video clips, after which they were invited to answer simple comprehension questions. Results first showed that different types of gestures attract attention differently and that the more the speech was degraded, the less attention participants paid to gestures. Furthermore, semantically incongruent gestures appeared to particularly impair comprehension despite not being fixated, while congruent gestures appeared to improve comprehension despite also not being fixated. These results suggest that covert attention is sufficient to convey information that will be processed by the listener.
Affiliations
- Wivine Blekic: Department of Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
- Laurent Lefebvre: Department of Cognitive Psychology and Neuropsychology, University of Mons, Mons, Belgium
10. Drijvers L, Jensen O, Spaak E. Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information. Hum Brain Mapp 2021; 42:1138-1152. PMID: 33206441; PMCID: PMC7856646; DOI: 10.1002/hbm.25282.
Abstract
During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual (61/68 Hz) tagging frequencies. We furthermore observed a peak at the intermodulation frequency of the auditory and visually tagged signals (fvisual - fauditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration, and was strongest in left inferior frontal gyrus and left temporal regions; areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof-of-principle of the use of RIFT to study the integration of audiovisual stimuli, in relation to, for instance, semantic context.
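Aside: the intermodulation logic described above can be illustrated with a small numerical sketch. Only the tagging frequencies (61 Hz and 68 Hz) and the 7 Hz difference come from the abstract; the signal construction, sampling rate, and the numpy-based spectrum below are a hypothetical toy example, not the authors' MEG analysis pipeline. The point is simply that a multiplicative (nonlinear) combination of two tagged signals produces power at the difference frequency, whereas a purely additive (linear) combination does not.

```python
import numpy as np

fs = 1000                         # sampling rate in Hz (arbitrary for this toy example)
t = np.arange(0, 10, 1 / fs)      # 10 s of simulated signal

aud = np.sin(2 * np.pi * 61 * t)  # auditory tag at 61 Hz
vis = np.sin(2 * np.pi * 68 * t)  # visual tag at 68 Hz

# A linear system only sums its inputs; a nonlinear (integrative) stage also
# contains product terms, which create intermodulation components.
linear = aud + vis
nonlinear = aud + vis + 0.5 * aud * vis

def power_at(signal, freq):
    """Spectral power of `signal` at `freq` (Hz), via a plain FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

for label, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    # Only the nonlinear mixture shows a peak at 68 - 61 = 7 Hz.
    print(f"{label}: power at 7 Hz = {power_at(sig, 7.0):.1f}")
```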
Affiliations
- Linda Drijvers: Donders Institute for Brain, Cognition, and Behaviour, Centre for Cognition, Montessorilaan 3, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Ole Jensen: School of Psychology, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
- Eelke Spaak: Donders Institute for Brain, Cognition, and Behaviour, Centre for Cognitive Neuroimaging, Kapittelweg 29, Radboud University, Nijmegen, The Netherlands
11. Hinnell J, Parrill F. Gesture Influences Resolution of Ambiguous Statements of Neutral and Moral Preferences. Front Psychol 2020; 11:587129. PMID: 33362652; PMCID: PMC7758198; DOI: 10.3389/fpsyg.2020.587129.
Abstract
When faced with an ambiguous pronoun, comprehenders use both multimodal cues (e.g., gestures) and linguistic cues to identify the antecedent. While research has shown that gestures facilitate language comprehension, improve reference tracking, and influence the interpretation of ambiguous pronouns, literature on reference resolution suggests that a wide set of linguistic constraints influences the successful resolution of ambiguous pronouns and that linguistic cues are more powerful than some multimodal cues. To address the outstanding question of the importance of gesture as a cue in reference resolution relative to cues in the speech signal, we have previously investigated the comprehension of contrastive gestures that indexed abstract referents – in this case expressions of personal preference – and found that such gestures did facilitate the resolution of ambiguous statements of preference. In this study, we extend this work to investigate whether the effect of gesture on resolution is diminished when the gesture indexes a statement that is less likely to be interpreted as the correct referent. Participants watched videos in which a speaker contrasted two ideas that were either neutral (e.g., whether to take the train to a ballgame or drive) or moral (e.g., human cloning is (un)acceptable). A gesture to the left or right side co-occurred with speech expressing each position. In gesture-disambiguating trials, an ambiguous phrase (e.g., I agree with that, where that is ambiguous) was accompanied by a gesture to one side or the other. In gesture non-disambiguating trials, no third gesture occurred with the ambiguous phrase. Participants were more likely to choose the idea accompanied by gesture as the stimulus speaker’s preference. We found no effect of scenario type. Regardless of whether the linguistic cue expressed a view that was morally charged or neutral, observers used gesture to understand the speaker’s opinion. This finding contributes to our understanding of the strength and range of cues, both linguistic and multimodal, that listeners use to resolve ambiguous references.
Affiliations
- Jennifer Hinnell: Department of English Language and Literatures, The University of British Columbia, Vancouver, BC, Canada
- Fey Parrill: Department of Cognitive Science, Case Western Reserve University, Cleveland, OH, United States
12. Özer D, Göksun T. Gesture Use and Processing: A Review on Individual Differences in Cognitive Resources. Front Psychol 2020; 11:573555. PMID: 33250817; PMCID: PMC7674851; DOI: 10.3389/fpsyg.2020.573555.
Abstract
Speakers use spontaneous hand gestures as they speak and think. These gestures serve many functions for speakers who produce them as well as for listeners who observe them. To date, studies in the gesture literature have mostly focused on group comparisons or on external sources of variation to examine when people use, process, and benefit from using and observing gestures. However, there are also internal sources of variation in gesture use and processing. People differ in how frequently they use gestures, how salient their gestures are, for what purposes they produce gestures, and how much they benefit from using and seeing gestures during comprehension and learning, depending on their cognitive dispositions. This review addresses, from a functionalist perspective, how individual differences in cognitive skills relate to how people employ gestures in production and comprehension across different ages (from infancy through adulthood to healthy aging). We conclude that speakers and listeners can use gestures as a compensation tool during communication and thinking, one that interacts with individuals' cognitive dispositions.
Affiliations
- Demet Özer: Department of Psychology, Koç University, Istanbul, Turkey
13. Sparrow K, Lind C, van Steenbrugge W. Gesture, communication, and adult acquired hearing loss. J Commun Disord 2020; 87:106030. PMID: 32707420; DOI: 10.1016/j.jcomdis.2020.106030.
Abstract
Nonverbal communication, specifically hand and arm movements (commonly known as gesture), has long been recognized and explored as a significant element of human interaction, as well as a potential compensatory behavior for individuals with communication difficulties. The use of gesture as a compensatory communication method in expressive and receptive human communication disorders has been the subject of much investigation. Yet within the context of adult acquired hearing loss, gesture has received limited research attention, and much remains unknown about patterns of nonverbal behavior in conversations in which hearing loss is a factor. This paper presents key elements of the background of gesture studies and of theories of gesture function and production, followed by a review of research focused on adults with hearing loss and the role of gesture and gaze in rehabilitation. The current examination of co-speech gesture as a visual resource in everyday interactions involving adults with acquired hearing loss suggests the need to develop an evidence base that can inform enhancements and changes in the way rehabilitation services are conducted.
Affiliations
- Karen Sparrow: Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia
- Christopher Lind: Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia
- Willem van Steenbrugge: Speech Pathology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia
14. Vulchanova M, Vulchanov V, Fritz I, Milburn EA. Language and perception: Introduction to the Special Issue "Speakers and Listeners in the Visual World". J Cult Cogn Sci 2019. DOI: 10.1007/s41809-019-00047-z.
Abstract
Language and perception are two central cognitive systems. Until relatively recently, however, the interaction between them has been examined only partially and not from an over-arching theoretical perspective. Yet it has become clear that linguistic and perceptual interactions are essential to understanding both typical and atypical human behaviour. In this editorial, we examine the link between language and perception across three domains. First, we present a brief review of work investigating the importance of perceptual features, particularly shape bias, when learning names for novel objects, a critical skill acquired during language development. Second, we describe the Visual World Paradigm, an experimental method uniquely suited to investigate the language-perception relationship. Studies using the Visual World Paradigm demonstrate that the relationship between linguistic and perceptual information during processing is both intricate and bi-directional: linguistic cues guide interpretation of visual scenes, while perceptual information shapes interpretation of linguistic input. Finally, we turn to a discussion of co-speech gesture, focusing on iconic gestures which depict aspects of the visual world (e.g., motion, shape). The relationship between language and these semantically-meaningful gestures is likewise complex and bi-directional. However, more research is needed to illuminate the exact circumstances under which iconic gestures shape language production and comprehension. In conclusion, although strong evidence exists supporting a critical relationship between linguistic and perceptual systems, the exact levels at which these two systems interact, the time-course of the interaction, and what is driving the interaction remain largely open questions in need of future research.
15. Drijvers L, Vaitonytė J, Özyürek A. Degree of Language Experience Modulates Visual Attention to Visible Speech and Iconic Gestures During Clear and Degraded Speech Comprehension. Cogn Sci 2019; 43:e12789. PMID: 31621126; PMCID: PMC6790953; DOI: 10.1111/cogs.12789.
Abstract
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non-native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye-tracking to investigate whether and how native and highly proficient non-native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6-band noise-vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued-recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non-native listeners mostly gazed at the face during comprehension, but non-native listeners gazed more often at gestures than native listeners. However, only native but not non-native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non-native listeners might gaze at gesture more as it might be more challenging for non-native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non-native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non-native listeners.
Affiliations
- Linda Drijvers: Donders Institute for Brain, Cognition, and Behaviour, Radboud University
- Julija Vaitonytė: Department of Cognitive and Artificial Intelligence (School of Humanities and Digital Sciences), Tilburg University
- Asli Özyürek: Donders Institute for Brain, Cognition, and Behaviour, Radboud University; Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
16. Debreslioska S, van de Weijer J, Gullberg M. Addressees Are Sensitive to the Presence of Gesture When Tracking a Single Referent in Discourse. Front Psychol 2019; 10:1775. PMID: 31456709; PMCID: PMC6700288; DOI: 10.3389/fpsyg.2019.01775.
Abstract
Production studies show that anaphoric reference is bimodal. Speakers can introduce a referent in speech by also using a localizing gesture, assigning a specific locus in space to it. Referring back to that referent, speakers then often accompany a spoken anaphor with a localizing anaphoric gesture (i.e., indicating the same locus). Speakers thus create visual anaphoricity in parallel to the anaphoric process in speech. In the current perception study, we examine whether addressees are sensitive to localizing anaphoric gestures and specifically to the (mis)match between recurrent use of space and spoken anaphora. The results of two reaction time experiments show that, when a single referent is gesturally tracked, addressees are sensitive to the presence of localizing gestures, but not to their spatial congruence. Addressees thus seem to integrate gestural information when processing bimodal anaphora, but their use of locational information in gestures is not obligatory in every discourse context.
Affiliations
- Sandra Debreslioska (corresponding author): Centre for Languages and Literature, Lund University, Lund, Sweden
- Joost van de Weijer: Centre for Languages and Literature, Lund University, Lund, Sweden; Lund University Humanities Lab, Lund University, Lund, Sweden
- Marianne Gullberg: Centre for Languages and Literature, Lund University, Lund, Sweden; Lund University Humanities Lab, Lund University, Lund, Sweden
17. Scott H, Batten JP, Kuhn G. Why are you looking at me? It's because I'm talking, but mostly because I'm staring or not doing much. Atten Percept Psychophys 2019; 81:109-118. PMID: 30353500; PMCID: PMC6315010; DOI: 10.3758/s13414-018-1588-6.
Abstract
Our attention is particularly drawn toward faces, especially the eyes, and there is much debate over the factors that modulate this social attentional orienting. Most previous research has presented faces in isolation, and we tried to address this shortcoming by measuring people's eye movements whilst they observe more naturalistic and varied social interactions. Participants' eye movements were monitored whilst they watched three different types of social interactions (monologue, manual activity, active attentional misdirection), which were either accompanied by the corresponding audio as speech or by silence. Our results showed that (1) participants spent more time looking at the face when the person was giving a monologue than when he/she was carrying out manual activities, and in the latter case they spent more time fixating on the person's hands; (2) hearing speech significantly increased the amount of time participants spent looking at the face (although this effect was relatively small), and this was not accounted for by any increase in mouth-oriented gaze; (3) participants spent significantly more time fixating on the face when direct eye contact was established, and this drive to establish eye contact was significantly stronger during the manual activities than during the monologue. These results highlight people's strategic top-down control over when they attend to faces and the eyes, and support the view that we use our eyes to signal non-verbal information.
Affiliations
- Hannah Scott: Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK
- Jonathan P Batten: Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Gustav Kuhn: Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK
18.
19.
Abstract
Are the cues that speakers produce when lying the same cues that listeners attend to when attempting to detect deceit? We used a two-person interactive game to explore the production and perception of speech and nonverbal cues to lying. In each game turn, participants viewed pairs of images, with the location of some treasure indicated to the speaker but not to the listener. The speaker described the location of the treasure, with the objective of misleading the listener about its true location; the listener attempted to locate the treasure, based on their judgement of the speaker’s veracity. In line with previous comprehension research, listeners’ responses suggest that they attend primarily to behaviours associated with increased mental difficulty, perhaps because lying, under a cognitive hypothesis, is thought to cause an increased cognitive load. Moreover, a mouse-tracking analysis suggests that these judgements are made quickly, while the speakers’ utterances are still unfolding. However, there is a surprising mismatch between listeners and speakers: When producing false statements, speakers are less likely to produce the cues that listeners associate with lying. This production pattern is in keeping with an attempted control hypothesis, whereby liars may take into account listeners’ expectations and correspondingly manipulate their behaviour to avoid detection.
20. Toward the markerless and automatic analysis of kinematic features: A toolkit for gesture and movement research. Behav Res Methods 2018; 51:769-777. PMID: 30143970; PMCID: PMC6478643; DOI: 10.3758/s13428-018-1086-8.
Abstract
Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
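To make the kind of computation such a protocol performs concrete, here is a minimal sketch of deriving velocity from tracked 3D joint positions and thresholding it into strokes and holds. It is not the authors' released toolkit: the function name, the hold threshold, the assumption that the y-axis is vertical, and the fabricated trajectory are all illustrative choices.

```python
import numpy as np

def segment_strokes_and_holds(positions, fps=30.0, hold_threshold=0.15):
    """Label each frame of one tracked joint as 'stroke' or 'hold' (toy sketch).

    positions: (n_frames, 3) array of x, y, z coordinates in metres, e.g. one
    Kinect hand joint. hold_threshold is a hypothetical speed cutoff (m/s)
    below which the hand is treated as holding still.
    """
    displacement = np.diff(positions, axis=0)            # frame-to-frame displacement
    speed = np.linalg.norm(displacement, axis=1) * fps   # metres per second

    labels = np.where(speed < hold_threshold, "hold", "stroke")
    features = {
        "peak_velocity": float(speed.max()),
        "mean_velocity": float(speed.mean()),
        "max_height": float(positions[:, 1].max()),      # assumes y is the vertical axis
        "n_strokes": int(np.sum((labels[1:] == "stroke") & (labels[:-1] == "hold"))
                         + (labels[0] == "stroke")),      # count hold-to-stroke transitions
    }
    return labels, features

# Usage with a fabricated random-walk trajectory (90 frames of one hand joint).
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(scale=[0.02, 0.02, 0.0], size=(90, 3)), axis=0)
labels, feats = segment_strokes_and_holds(trajectory)
print(feats)
```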
21. Wakefield E, Novack MA, Congdon EL, Franconeri S, Goldin-Meadow S. Gesture helps learners learn, but not merely by guiding their visual attention. Dev Sci 2018; 21:e12664. PMID: 29663574; DOI: 10.1111/desc.12664.
Abstract
Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
Affiliations
- Elizabeth Wakefield: Department of Psychology, University of Chicago, Chicago, IL, USA; Department of Psychology, Loyola University, Chicago, IL, USA
- Miriam A Novack: Department of Psychology, University of Chicago, Chicago, IL, USA; Department of Psychology, Northwestern University, Evanston, IL, USA
- Eliza L Congdon: Department of Psychology, University of Chicago, Chicago, IL, USA; Department of Psychology, Bucknell University, Lewisburg, PA, USA
- Steven Franconeri: Department of Psychology, Northwestern University, Evanston, IL, USA
22. Wakefield EM, Novack MA, Goldin-Meadow S. Unpacking the Ontogeny of Gesture Understanding: How Movement Becomes Meaningful Across Development. Child Dev 2017; 89:e245-e260. PMID: 28504410; DOI: 10.1111/cdev.12817.
Abstract
Gestures, hand movements that accompany speech, affect children's learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.
23. Holler J, Bavelas J. Chapter 10. Multi-modal communication of common ground. Gesture Studies 2017. DOI: 10.1075/gs.7.11hol.
24. Larsen T. Nurses' instruction of patients in the use of INR-monitors for self-management of cardio-vascular conditions: Missed instructional opportunities. Patient Educ Couns 2017; 100:673-681. PMID: 27839890; DOI: 10.1016/j.pec.2016.10.001.
Abstract
Objective: To explore the effectiveness of a patient education programme for chronic disease self-management in terms of whether (a) patients are taught to perform the medical procedure and (b) nurses have evidence of patients' proficiency when they start self-management. Methods: Patients were followed through an education programme for oral anticoagulation therapy, involving the use of INR-monitors. Training sessions were video-recorded and analyzed using Conversation Analysis. 55 instructional opportunities were identified, and the relationship between the instructional response and patients' subsequent (un)successful demonstration of the procedure was traced. Results: Patient errors provide the most frequent type of instructional opportunity, but not all are addressed; a significant number are allowed to pass uncorrected. Consequently, patients are not given the opportunity to learn. In the majority of cases where instructional opportunities are missed, patients subsequently do not demonstrate a correct understanding of the procedure. Conclusion: Patients are allowed to start self-management although nurses do not have evidence that they are capable of performing all aspects of the medical procedure correctly. Practice implications: Effective practice suggests that nurses take measures to minimize the number of missed instructional opportunities and ensure that errors are pursued until patients demonstrate proficiency in all aspects of the procedure.
Affiliations
- Tine Larsen: Department of Design and Communication, University of Southern Denmark, Universitetsparken 1, DK-6000 Kolding, Denmark
25. Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study. PLoS One 2016; 11:e0146583. PMID: 26735917; PMCID: PMC4703302; DOI: 10.1371/journal.pone.0146583.
Abstract
Background: Co-speech gestures are omnipresent and are a crucial element of human interaction, facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension, in terms of accuracy in a decision task. Method: Twenty aphasic patients and 30 healthy controls watched videos in which speech was combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated the face significantly less and tended to fixate the gesturing hands more compared to controls. Conclusion: Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
26.
27. Pyers JE, Perniss P, Emmorey K. Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective. Spat Cogn Comput 2015; 15:143-169. PMID: 26981027; PMCID: PMC4788639; DOI: 10.1080/13875868.2014.1003933.
Abstract
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Affiliations
- Jennie E Pyers: Wellesley College, Psychology Department, Wellesley, MA 02481, USA
- Pamela Perniss: University of Brighton, School of Humanities, Checkland Building, BN1 9PH Brighton, UK
- Karen Emmorey: San Diego State University, Laboratory for Language and Cognitive Neuroscience, 6495 Alvarado Road, Suite 200, San Diego, CA 92120
28. Vanbellingen T, Schumacher R, Eggenberger N, Hopfner S, Cazzoli D, Preisig BC, Bertschi M, Nyffeler T, Gutbrod K, Bassetti CL, Bohlhalter S, Müri RM. Different visual exploration of tool-related gestures in left hemisphere brain damaged patients is associated with poor gestural imitation. Neuropsychologia 2015; 71:158-64. PMID: 25841335; DOI: 10.1016/j.neuropsychologia.2015.04.001.
Abstract
According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated the ROIs comprising the face and the gesturing hand significantly less during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis.
Affiliations
- Tim Vanbellingen: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland; Neurology and Neurorehabilitation Center, Luzerner Kantonsspital, Switzerland
- Rahel Schumacher: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland; Division of Cognitive and Restorative Neurology, Department of Neurology, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Noëmi Eggenberger: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland
- Simone Hopfner: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland
- Dario Cazzoli: Nuffield Department of Clinical Neurosciences, Clinical Neurology, University of Oxford, Oxford, United Kingdom
- Basil C Preisig: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland
- Manuel Bertschi: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland
- Thomas Nyffeler: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland; Neurology and Neurorehabilitation Center, Luzerner Kantonsspital, Switzerland
- Klemens Gutbrod: Division of Cognitive and Restorative Neurology, Department of Neurology, Inselspital, Bern University Hospital, and University of Bern, Switzerland
- Claudio L Bassetti: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland
- Stephan Bohlhalter: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland; Neurology and Neurorehabilitation Center, Luzerner Kantonsspital, Switzerland
- René M Müri: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, Inselspital, University Hospital Bern, Switzerland; Division of Cognitive and Restorative Neurology, Department of Neurology, Inselspital, Bern University Hospital, and University of Bern, Switzerland; Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
29
|
Handmade Memories: The Robustness of the Gestural Misinformation Effect in Children’s Eyewitness Interviews. JOURNAL OF NONVERBAL BEHAVIOR 2015. [DOI: 10.1007/s10919-015-0210-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
30
|
Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations. Cortex 2014; 64:157-68. [PMID: 25461716 DOI: 10.1016/j.cortex.2014.10.013] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2014] [Revised: 07/07/2014] [Accepted: 10/20/2014] [Indexed: 01/09/2023]
Abstract
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. We discuss whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
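As a rough illustration of the aggregation described above (not the authors' code), the sketch below tabulates cumulative and mean fixation duration for each combination of group, co-speech gesture, gaze direction, and ROI; all column names and values are hypothetical.
```python
# Illustrative sketch: per-condition fixation summaries with a pandas groupby.
import pandas as pd

fixations = pd.DataFrame({
    "group":       ["aphasic", "aphasic", "control", "control", "control"],
    "gesture":     ["present", "absent",  "present", "present", "absent"],
    "gaze":        ["speaker", "listener", "speaker", "speaker", "listener"],
    "roi":         ["face",    "hands",   "face",    "body",    "face"],
    "duration_ms": [420.0,     180.0,     510.0,     90.0,      300.0],
})

# Cumulative and mean fixation duration per factor combination
summary = (fixations
           .groupby(["group", "gesture", "gaze", "roi"])["duration_ms"]
           .agg(cumulative="sum", mean="mean")
           .reset_index())
print(summary)
```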
Collapse
|
31
|
Rowbotham S, Lloyd DM, Holler J, Wearden A. Externalizing the private experience of pain: a role for co-speech gestures in pain communication? HEALTH COMMUNICATION 2014; 30:70-80. [PMID: 24483213 DOI: 10.1080/10410236.2013.836070] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced "co-speech hand gestures" may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication.
Collapse
|
32
|
Gurney DJ, Pine KJ, Wiseman R. The gestural misinformation effect: skewing eyewitness testimony through gesture. AMERICAN JOURNAL OF PSYCHOLOGY 2013; 126:301-14. [PMID: 24027944 DOI: 10.5406/amerjpsyc.126.3.0301] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The susceptibility of eyewitnesses to verbal suggestion has been well documented, although little attention has been paid to the role of nonverbal communication in misinformation. Three experiments are reported; in each, participants watched footage of a crime scene before being questioned about what they had observed. In Experiments 1 and 2, an on-screen interviewer accompanied identically worded questions with gestures that either conveyed accurate information about the scene or conveyed false, misleading information. The misleading gestures significantly influenced recall, and participants' responses were consistent with the gestured information. In Experiment 3, a live interview was conducted, and the gestural misinformation effect was found to be robust; participants were influenced by misleading gestures performed by the interviewer during questioning. These findings provide compelling evidence for the gestural misinformation effect, whereby subtle hand gestures can implant information and distort the testimony of eyewitnesses. The practical and legal implications of these findings are discussed.
Collapse
Affiliation(s)
- Daniel J Gurney
- School of Psychology, University of Hertfordshire, Hatfield, UK.
| | | | | |
Collapse
|
33
|
Ahlsén E, Schwarz A. Features of aphasic gesturing--an exploratory study of features in gestures produced by persons with and without aphasia. CLINICAL LINGUISTICS & PHONETICS 2013; 27:823-836. [PMID: 23889213 DOI: 10.3109/02699206.2013.813077] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
The purpose of this study was to see how features of gestures produced by persons with aphasia (PWA) are affected and to relate the findings to possible underlying factors. Spontaneous gestures were studied in two contexts: (i) associated with the production of nouns and verbs and (ii) in relation to word-finding or production difficulties. The method involved assembling two datasets of co-speech gestures, produced by PWA and by persons without aphasia, and coding the gestures for a number of features of expression and content. Features that were affected in the aphasia dataset were gaze, head movements, hand use, and semantic features. The results point to possibly converging explanations: generally lower semantic complexity as a direct effect of the aphasia, and greater cognitive effort and/or a greater dependence on one-handed gestures leading more indirectly to increased gaze aversion, more head shakes, and lower gestural complexity in PWA.
Collapse
Affiliation(s)
- Elisabeth Ahlsén
- SCCIIL Interdisciplinary Center and Division of Communication and Cognition, Department of Applied Information Technology, University of Gothenburg , Göteborg , Sweden
| | | |
Collapse
|
34
|
Mol L, Krahmer E, van de Sandt-Koenderman M. Gesturing by speakers with aphasia: how does it compare? JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2013; 56:1224-1236. [PMID: 23275428 DOI: 10.1044/1092-4388(2012/11-0159)] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
PURPOSE To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. METHOD The informativeness of gesture was assessed in 3 forced-choice studies, in which raters judged the topic of the speaker's message in video clips of 13 speakers with moderate aphasia and 12 speakers with severe aphasia, who were performing a communication test (the Scenario Test). Both groups were compared and contrasted with 17 control participants, who either were or were not allowed to communicate verbally. In addition, the representation techniques used in gesture were analyzed. RESULTS Gestures produced by speakers with more severe aphasia were less informative than those by speakers with moderate aphasia, yet they were not necessarily uninformative. Speakers with more severe aphasia also tended to use fewer representation techniques (mostly relying on outlining gestures) in co-speech gesture than control participants, who were asked to use gesture instead of speech. It is important to note that limb apraxia may be a mediating factor here. CONCLUSIONS These results suggest that in aphasia, gesture tends to degrade with verbal language. This may imply that the processes underlying verbal language and co-speech gesture production, although partly separate, are closely linked.
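A minimal sketch of how forced-choice informativeness might be scored, assuming each clip is rated by several judges; the topics, groups, and values below are hypothetical and not drawn from the Scenario Test materials.
```python
# Illustrative sketch (not the authors' analysis): informativeness as the proportion
# of raters whose forced choice matches the speaker's intended topic.
def informativeness(choices, intended_topic):
    """Proportion of raters who selected the intended topic."""
    return sum(choice == intended_topic for choice in choices) / len(choices)

raters_moderate = ["shopping", "shopping", "cooking", "shopping"]  # clip, moderate aphasia
raters_severe   = ["cooking",  "shopping", "cooking", "cooking"]   # clip, severe aphasia

print(informativeness(raters_moderate, "shopping"))  # 0.75
print(informativeness(raters_severe,   "shopping"))  # 0.25
```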
Collapse
Affiliation(s)
- Lisette Mol
- Tilburg Center for Cognition and Communication, Tilburg University, the Netherlands.
| | | | | |
Collapse
|
35
|
Rowbotham S, Holler J, Lloyd D, Wearden A. How Do We Communicate About Pain? A Systematic Analysis of the Semantic Contribution of Co-speech Gestures in Pain-focused Conversations. JOURNAL OF NONVERBAL BEHAVIOR 2011. [DOI: 10.1007/s10919-011-0122-5] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
36
|
Abstract
According to the SIMS model, mimicry and simulation contribute to perceivers' understanding of smiles. We argue that similar mechanisms are involved in comprehending the hand gestures that people produce when speaking. Viewing gestures may elicit overt mimicry, or may evoke corresponding simulations in the minds of addressees. These real or simulated actions contribute to addressees' comprehension of speakers' gestures.
Collapse
|