1
Asalıoğlu EN, Göksun T. The role of hand gestures in emotion communication: Do type and size of gestures matter? Psychological Research 2023; 87:1880-1898. PMID: 36436110. DOI: 10.1007/s00426-022-01774-9.
Abstract
We communicate emotions multimodally, yet non-verbal emotion communication is a relatively understudied area of research. In three experiments, we investigated the role of gesture characteristics (e.g., type, size in space) in individuals' processing of emotional content. In Experiment 1, participants rated the emotional intensity of emotional narratives presented in video clips with either iconic or beat gestures. Participants in the iconic gesture condition rated the emotional intensity higher than participants in the beat gesture condition. In Experiment 2, the size of gestures and its interaction with gesture type were investigated in a within-subjects design. Participants again rated the emotional intensity of emotional narratives in video clips. Although individuals overall rated narrow gestures as more emotionally intense than wider gestures, no effects of gesture type, or of the interaction between gesture size and type, were found. Experiment 3 was conducted to check whether the findings of Experiment 2 were due to viewing gestures in all video clips. We compared gesture and no-gesture (i.e., speech-only) conditions and found no difference between them in emotional intensity ratings. However, we could not replicate the gesture-size findings of Experiment 2. Overall, these findings indicate the importance of examining the role of gestures in emotional contexts and suggest that gesture characteristics such as size should be considered in non-verbal communication research.
Affiliation(s)
- Esma Nur Asalıoğlu
- Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Tilbe Göksun
- Department of Psychology, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
2
Royka A, Chen A, Aboody R, Huanca T, Jara-Ettinger J. People infer communicative action through an expectation for efficient communication. Nat Commun 2022; 13:4160. PMID: 35851397. PMCID: PMC9293910. DOI: 10.1038/s41467-022-31716-3.
Abstract
Humans often communicate using body movements like winks, waves, and nods. However, it is unclear how we identify when someone's physical actions are communicative. Given people's propensity to interpret each other's behavior as aimed at producing changes in the world, we hypothesize that people expect communicative actions to efficiently reveal that they lack an external goal. Using computational models of goal inference, we predict that movements that are unlikely to be produced when acting on the world (in particular, repetitive movements) ought to be seen as communicative. We find support for our account across a variety of paradigms, including graded acceptability tasks, forced-choice tasks, indirect prompts, and open-ended explanation tasks, in both market-integrated and non-market-integrated communities. Our work shows that the recognition of communicative action is grounded in an inferential process that stems from fundamental computations shared across different forms of action interpretation. Humans can quickly infer when someone's body movements are meant to be communicative; here, the authors show that this capacity is underpinned by an expectation that communicative actions will efficiently reveal that they lack an external goal.
Affiliation(s)
- Amanda Royka
- Department of Psychology, Yale University, New Haven, CT, USA
- Annie Chen
- Department of Computer Science, Yale University, New Haven, CT, USA
- Rosie Aboody
- Department of Psychology, Yale University, New Haven, CT, USA
- Tomas Huanca
- Centro Boliviano de Desarrollo Socio-Integral, La Paz, Bolivia
- Julian Jara-Ettinger
- Department of Psychology, Yale University, New Haven, CT, USA
- Department of Computer Science, Yale University, New Haven, CT, USA
- Wu Tsai Institute, Yale University, New Haven, CT, USA
3
Trujillo JP, Levinson SC, Holler J. A multi-scale investigation of the human communication system's response to visual disruption. Royal Society Open Science 2022; 9:211489. PMID: 35425638. PMCID: PMC9006025. DOI: 10.1098/rsos.211489.
Abstract
In human communication, when speech is disrupted, the visual channel (e.g., manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, one that bears significantly on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduced visibility during casual conversational interaction and measured the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity, and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multimodal communicative behaviours are flexibly adapted at multiple scales of measurement and call into question the notion that gesture plays an inferior role to speech.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Stephen C. Levinson
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
4
Trujillo JP, Özyürek A, Kan CC, Sheftel-Simanova I, Bekkering H. Differences in the production and perception of communicative kinematics in autism. Autism Res 2021; 14:2640-2653. PMID: 34536063. PMCID: PMC9292179. DOI: 10.1002/aur.2611.
Abstract
In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) whether autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine whether autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they process repetition and complexity, measured as the number of perceived submovements, differently than neurotypical individuals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals. LAY SUMMARY: Hand gestures are an important part of how we communicate, and the way that we move when gesturing can influence how easy a gesture is to understand. We studied how autistic and typical individuals produce and recognize hand gestures, and how this relates to movement characteristics. We found that autistic individuals moved differently when gesturing compared to typical individuals. In addition, while autistic individuals were not worse at recognizing gestures, they differed from typical individuals in how they interpreted certain movement characteristics.
Affiliation(s)
- James P. Trujillo
- Donders Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Asli Özyürek
- Donders Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Cornelis C. Kan
- Department of Psychiatry, Radboud University Medical Centre, Nijmegen, The Netherlands
- Irina Sheftel-Simanova
- One Planet Research Centre, Radboud University Medical Centre, Nijmegen, The Netherlands
- Harold Bekkering
- Donders Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
5
Strachan JWA, Curioni A, Constable MD, Knoblich G, Charbonneau M. Evaluating the relative contributions of copying and reconstruction processes in cultural transmission episodes. PLoS One 2021; 16:e0256901. PMID: 34529662. PMCID: PMC8445411. DOI: 10.1371/journal.pone.0256901.
Abstract
The ability to transmit information between individuals through social learning is a foundational component of cultural evolution. However, how this transmission occurs is still debated. On the one hand, the copying account draws parallels with biological mechanisms of genetic inheritance, arguing that learners copy what they observe and that novel variations arise through random copying errors. On the other hand, the reconstruction account claims that, rather than directly copying behaviour, learners reconstruct the information they believe to be most relevant on the basis of pragmatic inference and environmental and contextual cues. Distinguishing these two accounts empirically is difficult with data from typical transmission chain studies because the predictions they generate frequently overlap. In this study we present a methodological approach that generates different predictions for these accounts by manipulating the task context between model and learner in a transmission episode. We then report an empirical proof of concept that applies this approach. The results show that, when a model introduces context-dependent embedded signals to their actions that are not intended to be transmitted, it is possible to empirically distinguish between the competing predictions of these two accounts. Our approach can therefore serve to elucidate the underlying cognitive mechanisms at play in cultural transmission and can make important contributions to the debate between preservative and reconstructive schools of thought.
Affiliation(s)
- James W. A. Strachan
- Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Cognitive Science, Central European University, Vienna, Austria
- Arianna Curioni
- Department of Cognitive Science, Central European University, Vienna, Austria
- Merryn D. Constable
- Department of Cognitive Science, Central European University, Vienna, Austria
- Northumbria University, Newcastle upon Tyne, United Kingdom
- Günther Knoblich
- Department of Cognitive Science, Central European University, Vienna, Austria
- Mathieu Charbonneau
- Department of Cognitive Science, Central European University, Vienna, Austria
6
Trujillo J, Özyürek A, Holler J, Drijvers L. Speakers exhibit a multimodal Lombard effect in noise. Sci Rep 2021; 11:16721. PMID: 34408178. PMCID: PMC8373897. DOI: 10.1038/s41598-021-95791-0.
Abstract
In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
Affiliation(s)
- James Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Asli Özyürek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
7
Trujillo JP, Holler J. The Kinematics of Social Action: Visual Signals Provide Cues for What Interlocutors Do in Conversation. Brain Sci 2021; 11:996. PMID: 34439615. PMCID: PMC8393665. DOI: 10.3390/brainsci11080996.
Abstract
During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action that the utterance is performing (i.e., what the utterance is doing: requesting information, offering, evaluating, checking mutual understanding, etc.). The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head, and hands differ between some of these social action categories, based on a 900 ms time window that captures movements starting slightly before or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and open new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
8
Pouw W, Dingemanse M, Motamedi Y, Özyürek A. A Systematic Investigation of Gesture Kinematics in Evolving Manual Languages in the Lab. Cogn Sci 2021; 45:e13014. PMID: 34288069. PMCID: PMC8365719. DOI: 10.1111/cogs.13014.
Abstract
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content to continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detected systematicity of gesture form at the level of gesture kinematic interrelations, which scales directly with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously approachable only through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations, as isolated chains of participants gradually diverged from other chains over iterations. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen
- Mark Dingemanse
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Centre for Language Studies, Radboud University Nijmegen
- Aslı Özyürek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen
- Centre for Language Studies, Radboud University Nijmegen
9
He Y, Luell S, Muralikrishnan R, Straube B, Nagels A. Gesture's body orientation modulates the N400 for visual sentences primed by gestures. Hum Brain Mapp 2020; 41:4901-4911. PMID: 32808721. PMCID: PMC7643362. DOI: 10.1002/hbm.25166.
Abstract
Body orientation of gesture conveys social-communicative intention, and may thus influence how gestures are perceived and comprehended together with auditory speech during face-to-face communication. To date, despite the emergence of neuroscientific literature on the role of body orientation in hand-action perception, few studies have directly investigated the role of body orientation in the interaction between gesture and language. To address this research question, we carried out an electroencephalography (EEG) experiment presenting participants (n = 21) with 5 s videos of frontal and lateral communicative hand gestures (e.g., raising a hand), followed by visually presented sentences that were either congruent or incongruent with the gesture (e.g., "the mountain is high/low…"). Participants performed a semantic probe task, judging whether a target word was related or unrelated to the gesture-sentence event. EEG results suggest that, during the perception phase of hand gestures, while both frontal and lateral gestures elicited a power decrease in both the alpha (8-12 Hz) and beta (16-24 Hz) bands, lateral gestures elicited a reduced power decrease in the beta band relative to frontal gestures, source-located to the medial prefrontal cortex. For sentence comprehension, at the critical word whose meaning was congruent or incongruent with the gesture prime, frontal gestures elicited an N400 effect for gesture-sentence incongruency. More importantly, this incongruency effect was significantly reduced for lateral gestures. These findings suggest that body orientation plays an important role in gesture perception, and that its inferred social-communicative intention may influence gesture-language interaction at the semantic level.
Affiliation(s)
- Yifei He
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Svenja Luell
- Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany
- R. Muralikrishnan
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Benjamin Straube
- Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Arne Nagels
- Department of General Linguistics, Johannes Gutenberg University Mainz, Mainz, Germany