1
Kelly SD, Ngo Tran QA. Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. [PMID: 37115518] [DOI: 10.1111/tops.12657]
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role, a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research that have varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
Affiliation(s)
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Center for Language and Brain, Colgate University, 13 Oak Dr., Hamilton, NY, 13346, United States
- Quang-Anh Ngo Tran
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405, United States
2
Wei Y, Jia L, Gao F, Wang J. Visual-Auditory Integration and High-Variability Speech Can Facilitate Mandarin Chinese Tone Identification. J Speech Lang Hear Res 2022; 65:4096-4111. [PMID: 36279876] [DOI: 10.1044/2022_jslhr-21-00691]
Abstract
PURPOSE Previous studies have demonstrated that tone identification can be facilitated when auditory tones are integrated with visual information that depicts the pitch contours of the auditory tones (hereafter, visual effect). This study investigates this visual effect in combined visual-auditory integration with high- and low-variability speech and examines whether one's prior tonal-language learning experience shapes the strength of this visual effect. METHOD Thirty Mandarin-naïve listeners, 25 Mandarin second language learners, and 30 native Mandarin listeners participated in a tone identification task in which participants judged whether an auditory tone was rising or falling in pitch. Moving arrows depicted the pitch contours of the auditory tones. A priming paradigm was used with the target auditory tones primed by four multimodal conditions: no stimuli (A-V-), visual-only stimuli (A-V+), auditory-only stimuli (A+V-), and both auditory and visual stimuli (A+V+). RESULTS For Mandarin-naïve listeners, the visual effect on accuracy under cross-modal integration (A+V+ vs. A+V-) was superior to that under the unimodal approach (A-V+ vs. A-V-), as evidenced by a higher d prime for A+V+ than for A+V-. However, this was not the case for response time. Additionally, the visual effect on accuracy and response time under the unimodal approach occurred only for high-variability speech, not for low-variability speech. Across the three groups of listeners, we found that the less tonal-language learning experience one had, the stronger the visual effect. CONCLUSION Our study revealed the visual-auditory advantage and disadvantage of the visual effect and the joint contribution of visual-auditory integration and high-variability speech to facilitating tone perception via the process of speech symbolization and categorization. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357729.
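The abstract reports effects via d prime but does not define it. As a minimal sketch, assuming the standard equal-variance Gaussian signal-detection definition (z-transformed hit rate minus z-transformed false-alarm rate; the function name and example rates are illustrative, not from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d': z(hit rate) - z(false-alarm rate).

    Rates must lie strictly between 0 and 1 (apply a correction such as
    the log-linear rule before calling if a rate is exactly 0 or 1).
    """
    z = NormalDist().inv_cdf  # inverse CDF of the standard normal
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates: 90% hits, 20% false alarms
print(round(d_prime(0.90, 0.20), 3))  # ≈ 2.123
```

A higher d′ for the A+V+ condition than for A+V- thus reflects greater sensitivity, independent of response bias.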
Affiliation(s)
- Yanjun Wei
- Center for Cognitive Science of Language, Beijing Language and Culture University, China
- Lin Jia
- Beijing Chinese Language and Culture College, China
- Fei Gao
- Faculty of Arts and Humanities, University of Macau, China
- Centre for Cognitive and Brain Sciences, University of Macau, China
- Jianqin Wang
- Center for Cognitive Science of Language, Beijing Language and Culture University, China
3
Li M, Chen X, Zhu J, Chen F. Audiovisual Mandarin Lexical Tone Perception in Quiet and Noisy Contexts: The Influence of Visual Cues and Speech Rate. J Speech Lang Hear Res 2022; 65:4385-4403. [PMID: 36269618] [DOI: 10.1044/2022_jslhr-22-00024]
Abstract
PURPOSE Drawing on the theory of embodied cognition, which proposes tight interactions between perception, motor processes, and cognition, this study tested the hypothesis that speech rate-altered Mandarin lexical tone perception in quiet and noisy environments could be affected by bodily dynamic cross-modal information. METHOD Fifty-three adult listeners completed a Mandarin tone perception task with 720 tone stimuli in auditory-only (AO), auditory-facial (AF), and auditory-facial-plus-gestural (AFG) modalities, at fast, normal, and slow speech rates under quiet and noisy conditions. In the AF and AFG modalities, both congruent and incongruent audiovisual information were designed and presented. Generalized linear mixed-effects models were constructed to analyze the accuracy of tone perception across conditions. RESULTS In Mandarin tone perception, the magnitude of enhancement from AF and AFG cues across the three speech rates was significantly higher than that from the AO cue in the adverse context of noise, yet additional metaphoric gestures did not differ significantly from facial information alone. Furthermore, auditory tone perception at the fast speech rate was significantly better than at the normal speech rate when inputs were incongruent between the auditory and visual channels in quiet. CONCLUSIONS This study provided compelling evidence that integrated audiovisual information plays a vital role not only in improving lexical tone perception in noise but also in modulating the effects of speech rate on Mandarin tone perception in quiet for native listeners. Our findings, which support the theory of embodied cognition, carry implications for speech and hearing rehabilitation in both young and older clinical populations.
Affiliation(s)
- Manhong Li
- School of Foreign Languages, Hunan University, Changsha, China
- School of Foreign Languages, Hunan First Normal University, Changsha, China
- Xiaoxiang Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Jiaqiang Zhu
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
4
Yao R, Guan CQ, Smolen ER, MacWhinney B, Meng W, Morett LM. Gesture-Speech Integration in Typical and Atypical Adolescent Readers. Front Psychol 2022; 13:890962. [PMID: 35719574] [PMCID: PMC9204151] [DOI: 10.3389/fpsyg.2022.890962]
Abstract
This study investigated gesture-speech integration (GSI) among adolescents who are deaf or hard of hearing (DHH) and those with typical hearing. Thirty-eight adolescents (19 with hearing loss) performed a Stroop-like task in which they watched 120 short video clips of gestures and actions twice in random order. Participants were asked to press one button if the visual content of the speaker's movements was related to a written word and another button if it was unrelated, while accuracy rates and response times were recorded. We found stronger GSI effects among DHH participants than among hearing participants: both the semantic congruency effect and the gender congruency effect were significantly larger in DHH participants than in hearing participants. Results of this study shed light on GSI among DHH individuals and suggest future avenues for research examining the impact of gesture on language processing and communication in this population.
Affiliation(s)
- Ru Yao
- China National Institute of Education Sciences, Beijing, China
- Connie Qun Guan
- School of Foreign Studies, Beijing Language and Culture University, Beijing, China
- Elaine R. Smolen
- Teachers College, Columbia University, New York, NY, United States
- Brian MacWhinney
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Wanjin Meng
- Department of Moral, Psychological and Special Education, China National Institute of Education Sciences, Beijing, China
- Laura M. Morett
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, University of Alabama, Tuscaloosa, AL, United States
5
Morett LM, Feiler JB, Getz LM. Elucidating the influences of embodiment and conceptual metaphor on lexical and non-speech tone learning. Cognition 2022; 222:105014. [DOI: 10.1016/j.cognition.2022.105014]
6
Holler J, Drijvers L, Rafiee A, Majid A. Embodied Space-pitch Associations are Shaped by Language. Cogn Sci 2022; 46:e13083. [PMID: 35188682] [DOI: 10.1111/cogs.13083]
Abstract
Height-pitch associations are claimed to be universal and independent of language, but this claim remains controversial. The present study sheds new light on this debate with a multimodal analysis of individual sound and melody descriptions obtained in an interactive communication paradigm with speakers of Dutch and Farsi. The findings reveal that, in contrast to Dutch speakers, Farsi speakers do not use a height-pitch metaphor consistently in speech. Both Dutch and Farsi speakers' co-speech gestures did reveal a mapping of higher pitches to higher space and lower pitches to lower space, and this gesture space-pitch mapping tended to co-occur with corresponding spatial words (high-low). However, this mapping was much weaker in Farsi speakers than Dutch speakers. This suggests that cross-linguistic differences shape the conceptualization of pitch and further calls into question the universality of height-pitch associations.
Affiliation(s)
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Language & Cognition and Neurobiology of Language Departments, Max Planck Institute for Psycholinguistics
- Linda Drijvers
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Language & Cognition and Neurobiology of Language Departments, Max Planck Institute for Psycholinguistics
- Asifa Majid
- Centre for Language Studies, Radboud University
- Department of Psychology, University of York
7
Billot-Vasquez K, Lian Z, Hirata Y, Kelly SD. Emblem Gestures Improve Perception and Evaluation of Non-native Speech. Front Psychol 2020; 11:574418. [PMID: 33071912] [PMCID: PMC7536367] [DOI: 10.3389/fpsyg.2020.574418]
Abstract
Traditionally, much of the attention on the communicative effects of non-native accent has focused on the accent itself rather than how it functions within a more natural context. The present study explores how the bodily context of co-speech emblematic gestures affects perceptual and social evaluation of non-native accent. In two experiments in two different languages, Mandarin and Japanese, we filmed learners performing a short utterance in three different within-subjects conditions: speech alone, culturally familiar gesture, and culturally unfamiliar gesture. Native Mandarin participants watched videos of foreign-accented Mandarin speakers (Experiment 1), and native Japanese participants watched videos of foreign-accented Japanese speakers (Experiment 2). Following each video, native language participants were asked a set of questions targeting speech perception and social impressions of the learners. Results from both experiments demonstrate that familiar—and occasionally unfamiliar—emblems facilitated speech perception and enhanced social evaluations compared to the speech alone baseline. The variability in our findings suggests that gesture may serve varied functions in the perception and evaluation of non-native accent.
Affiliation(s)
- Kiana Billot-Vasquez
- Department of Psychological and Brain Sciences, Colgate University, Hamilton, NY, United States
- Center for Language and Brain, Hamilton, NY, United States
- Zhongwen Lian
- Center for Language and Brain, Hamilton, NY, United States
- Linguistics Program, Colgate University, Hamilton, NY, United States
- Yukari Hirata
- Center for Language and Brain, Hamilton, NY, United States
- Linguistics Program, Colgate University, Hamilton, NY, United States
- Department of East Asian Languages, Colgate University, Hamilton, NY, United States
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Colgate University, Hamilton, NY, United States
- Center for Language and Brain, Hamilton, NY, United States
- Linguistics Program, Colgate University, Hamilton, NY, United States