1
Bujok R, Meyer AS, Bosker HR. Audiovisual Perception of Lexical Stress: Beat Gestures and Articulatory Cues. Lang Speech 2024:238309241258162. [PMID: 38877720] [DOI: 10.1177/00238309241258162]
Abstract
Human communication is inherently multimodal. Listeners use not only auditory speech but also visual cues to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech, such as lexical stress. In two experiments, we investigated the influence of different visual cues (facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. In audiovisual conditions, however, we found no effect of visual articulatory cues. In contrast, the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
Affiliation(s)
- Ronny Bujok
- Max Planck Institute for Psycholinguistics, The Netherlands
- International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, The Netherlands
- Hans Rutger Bosker
- Max Planck Institute for Psycholinguistics, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, The Netherlands
2
Weissman B, Cohn N, Tanner D. The electrophysiology of lexical prediction of emoji and text. Neuropsychologia 2024; 198:108881. [PMID: 38579906] [DOI: 10.1016/j.neuropsychologia.2024.108881]
Abstract
As emoji often appear naturally alongside text in utterances, they provide a way to study how prediction unfolds in multimodal sentences in direct comparison to unimodal sentences. In this experiment, participants (N = 40) read sentences in which the sentence-final noun appeared in either word form or emoji form, a between-subjects manipulation. The experiment featured both high constraint sentences and low constraint sentences to examine how the lexical processing of emoji interacts with prediction processes in sentence comprehension. Two well-established ERP components linked to lexical processing and prediction - the N400 and the Late Frontal Positivity - are investigated for sentence-final words and emoji to assess whether, to what extent, and in what linguistic contexts emoji are processed like words. Results indicate that the expected effects, namely an N400 effect to an implausible lexical item compared to a plausible one and an LFP effect to an unexpected lexical item compared to an expected one, emerged for both words and emoji. This paper discusses the similarities and differences between the stimulus types and constraint conditions, contextualized within theories of linguistic prediction, ERP components, and a multimodal lexicon.
Affiliation(s)
- Benjamin Weissman
- Department of Cognitive Science, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180, USA; Department of Linguistics, University of Illinois at Urbana-Champaign, 707 S Mathews Ave, Urbana, IL 61801, USA
- Neil Cohn
- Department of Communication and Cognition, Tilburg University, PO Box 90153, 5000 LE Tilburg, the Netherlands
- Darren Tanner
- Department of Linguistics, University of Illinois at Urbana-Champaign, 707 S Mathews Ave, Urbana, IL 61801, USA; AI For Good Lab, Microsoft, 1 Microsoft Way, Redmond, WA, USA
3
Nirme J, Gulz A, Haake M, Gullberg M. Early or synchronized gestures facilitate speech recall: a study based on motion capture data. Front Psychol 2024; 15:1345906. [PMID: 38596333] [PMCID: PMC11002957] [DOI: 10.3389/fpsyg.2024.1345906]
Abstract
Introduction
Temporal coordination between speech and gestures has been thoroughly studied in natural production. In most cases, gesture strokes precede or coincide with the stressed syllable of the words they are semantically associated with.
Methods
To understand whether processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing, or eliminating individual gestures on memory for words. In the experiment, 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position, synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. Recall was operationalized as the inclusion of the target word in the retelling.
Results
Both eliminated and delayed gesture strokes resulted in reduced recall rates compared with synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and the stressed syllables of target words, the greater the negative effect on recall.
Discussion
These results indicate that speech-gesture synchrony affects memory for speech, and that the temporal patterns common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion-capture-based 3D-animated speakers to create an experimental paradigm for the study of speech-gesture comprehension.
Affiliation(s)
- Jens Nirme
- Lund University Cognitive Science, Lund, Sweden
- Agneta Gulz
- Lund University Cognitive Science, Lund, Sweden
- Marianne Gullberg
- Centre for Languages and Literature and Lund University Humanities Lab, Lund University, Lund, Sweden
4
Türk O, Calhoun S. Phrasal Synchronization of Gesture With Prosody and Information Structure. Lang Speech 2023:238309231185308. [PMID: 37522627] [DOI: 10.1177/00238309231185308]
Abstract
This study investigates the synchronization of manual gestures with prosody and information structure using Turkish natural speech data. Prosody has long been linked to gesture as a key driver of gesture-speech synchronization, and gesture has a hierarchical phrasal structure similar to that of prosody. At the lowest level, gesture has been shown to be synchronized with prosody (e.g., apexes with pitch accents). However, less is known about higher levels, and even less about timing relationships with information structure, though information structure is signaled by prosody and linked to gesture. The present study analyzed phrase synchronization in 3 hours of narrations in Turkish annotated for gesture, prosody, and information structure (topics and foci). The analysis of 515 gesture phrases showed no one-to-one synchronization with intermediate phrases, but their onsets and offsets were synchronized. Moreover, information structural units, topics and foci, were closely synchronized with gesture-phrase-medial stroke + post-hold combinations (i.e., apical areas). In addition, iconic and metaphoric gestures were more likely to be paired with foci, and deictics with topics. Overall, the results confirm synchronization of gesture and prosody at the phrasal level and provide evidence that gesture shows a direct sensitivity to information structure. These findings show that speech and gesture production are more closely connected than existing production models assume.
Affiliation(s)
- Sasha Calhoun
- Te Herenga Waka - Victoria University of Wellington, New Zealand
5
Aydin C, Göksun T, Otenen E, Tanis SB, Şentürk YD. The role of gestures in autobiographical memory. PLoS One 2023; 18:e0281748. [PMID: 36827254] [PMCID: PMC9955584] [DOI: 10.1371/journal.pone.0281748]
Abstract
Speakers employ co-speech gestures when thinking and speaking; however, gesture's role in autobiographical episodic representations is not known. Based on the gesture-for-conceptualization framework, we propose that gestures, particularly representational ones, support episodic event representations by activating existing episodic elements and causing new ones to be formed in the autobiographical recollections. These gestures may also undertake information-chunking roles to allow for further processing during remembering, such as a sense of recollective experience. Participants (N = 41) verbally narrated three events (a past autobiographical, a future autobiographical, and a non-autobiographical event) and then rated their phenomenological characteristics. We found that, even though gesture use was not different across the three event conditions, representational gestures were positively associated with the episodic event details as well as their recollective quality within the past autobiographical event narratives. These associations were not observed in future event narrations. These findings suggest that gestures are potentially instrumental in the retrieval of details in autobiographical memories.
Affiliation(s)
- Cagla Aydin
- Department of Psychology, Sabancı University, Istanbul, Turkey
- Tilbe Göksun
- Department of Psychology, Koç University, Istanbul, Turkey
- Ege Otenen
- Department of Psychology, Sabancı University, Istanbul, Turkey
6
Perceiving Assertiveness and Anger from Gesturing Speed in Different Contexts. J Nonverbal Behav 2022. [DOI: 10.1007/s10919-022-00418-1]
7
Ma S, Jin G. The relationship between different types of co-speech gestures and L2 speech performance. Front Psychol 2022; 13:941114. [PMID: 36051215] [PMCID: PMC9424915] [DOI: 10.3389/fpsyg.2022.941114]
Abstract
Co-speech gestures are closely connected to speech, but little attention has been paid to the associations between gesture and L2 speech performance. This study explored the associations between four types of co-speech gestures (iconics, metaphorics, deictics, and beats) and the meaning, form, and discourse dimensions of L2 speech performance. Gesture and speech data were collected by asking 61 lower-intermediate English learners whose first language is Chinese to retell a cartoon clip. Results showed that all four types of co-speech gestures were positively associated with meaning- and discourse-related L2 speech measures but not with form-related measures, with the exception of a positive association between metaphorics and the percentage of error-free clauses. The findings suggest that co-speech gestures may have a tighter connection with meaning construction in producing L2 speech.
Affiliation(s)
- Sai Ma
- Department of English Education, College of Foreign Languages, Capital Normal University, Beijing, China
- Guangsa Jin
- Department of Linguistics, School of International Studies, University of International Business and Economics, Beijing, China
- Correspondence: Guangsa Jin
8
Perniss P, Vinson D, Vigliocco G. Making Sense of the Hands and Mouth: The Role of "Secondary" Cues to Meaning in British Sign Language and English. Cogn Sci 2021; 44:e12868. [PMID: 32619055] [DOI: 10.1111/cogs.12868]
Abstract
Successful face-to-face communication involves multiple channels, notably hand gestures in addition to speech for spoken language, and mouth patterns in addition to manual signs for sign language. In four experiments, we assess the extent to which comprehenders of British Sign Language (BSL) and English rely, respectively, on cues from the hands and the mouth in accessing meaning. We created congruent and incongruent combinations of BSL manual signs and mouthings and English speech and gesture by video manipulation and asked participants to carry out a picture-matching task. When participants were instructed to pay attention only to the primary channel, incongruent "secondary" cues still affected performance, showing that these are reliably used for comprehension. When both cues were relevant, the languages diverged: Hand gestures continued to be used in English, but mouth movements did not in BSL. Moreover, non-fluent speakers and signers varied in the use of these cues: Gestures were found to be more important for non-native than native speakers; mouth movements were found to be less important for non-fluent signers. We discuss the results in terms of the information provided by different communicative channels, which combine to provide meaningful information.
Affiliation(s)
- David Vinson
- Division of Psychology and Language Sciences, University College London
9
Zhang Y, Frassinelli D, Tuomainen J, Skipper JI, Vigliocco G. More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension. Proc Biol Sci 2021; 288:20210500. [PMID: 34284631] [PMCID: PMC8292779] [DOI: 10.1098/rspb.2021.0500]
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet this multimodal context is usually stripped away in experiments, as dominant paradigms focus on linguistic processing only. In two studies we presented video-clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by the informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. Thus, the results show that multimodal cues are integral to comprehension; our theories must therefore move beyond the limited focus on speech and linguistic processing.
Affiliation(s)
- Ye Zhang
- Experimental Psychology, University College London, London, UK
- Diego Frassinelli
- Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen
- Experimental Psychology, Speech, Hearing and Phonetic Sciences, University College London, London, UK
10
Vilà-Giménez I, Dowling N, Demir-Lira ÖE, Prieto P, Goldin-Meadow S. The Predictive Value of Non-Referential Beat Gestures: Early Use in Parent-Child Interactions Predicts Narrative Abilities at 5 Years of Age. Child Dev 2021; 92:2335-2355. [PMID: 34018614] [DOI: 10.1111/cdev.13583]
Abstract
A longitudinal study with 45 children (Hispanic, 13%; non-Hispanic, 87%) investigated whether the early production of non-referential beat and flip gestures, as opposed to referential iconic gestures, in parent-child naturalistic interactions from 14 to 58 months old predicts narrative abilities at age 5. Results revealed that only non-referential beats significantly (p < .01) predicted later narrative productions. The pragmatic functions of the children's speech that accompany these gestures were also analyzed in a representative sample of 18 parent-child dyads, revealing that beats were typically associated with biased assertions or questions. These findings show that the early use of beats predicts narrative abilities later in development, and suggest that this relation is likely due to the pragmatic-structuring function that beats reflect in early discourse.
Affiliation(s)
- Ö Ece Demir-Lira
- University of Iowa, DeLTA Center and Iowa Neuroscience Institute
- Pilar Prieto
- Institució Catalana de Recerca i Estudis Avançats (ICREA) and Universitat Pompeu Fabra
11
Vilà-Giménez I, Prieto P. The Value of Non-Referential Gestures: A Systematic Review of Their Cognitive and Linguistic Effects in Children's Language Development. Children (Basel) 2021; 8:148. [PMID: 33671119] [PMCID: PMC7922730] [DOI: 10.3390/children8020148]
Abstract
Speakers produce both referential gestures, which depict properties of a referent, and non-referential gestures, which lack semantic content. While many studies have demonstrated the cognitive and linguistic benefits of referential gestures, as well as their precursor and predictive role in both typically developing (TD) and non-TD children, less is known about non-referential gestures in cognitive and complex linguistic domains, such as narrative development. This paper is a systematic review and narrative synthesis of the research assessing the effects of non-referential gestures in such domains. A search of the literature turned up 11 studies, collectively involving 898 TD children aged 2 to 8 years. Although the studies yielded contradictory evidence, pointing to the need for further investigation, the results of the six studies in which experimental tasks and materials were pragmatically based revealed that non-referential gestures not only enhance information recall and narrative comprehension but also act as predictors of, and causal mechanisms for, narrative performance. This suggests that their bootstrapping role in language development stems from their important discourse-pragmatic functions, which help frame discourse. These findings should be of particular interest to teachers, and future studies could extend this work to non-TD children.
Affiliation(s)
- Ingrid Vilà-Giménez
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, 08018 Barcelona, Spain;
- Department of Subject-Specific Education, Universitat de Girona, 17004 Girona, Spain
- Pilar Prieto
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, 08018 Barcelona, Spain;
- Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona, Spain
12
Morett LM, Landi N, Irwin J, McPartland JC. N400 amplitude, latency, and variability reflect temporal integration of beat gesture and pitch accent during language processing. Brain Res 2020; 1747:147059. [PMID: 32818527] [PMCID: PMC7493208] [DOI: 10.1016/j.brainres.2020.147059]
Abstract
This study examines how across-trial (average) and trial-by-trial (variability in) amplitude and latency of the N400 event-related potential (ERP) reflect temporal integration of pitch accent and beat gesture. Thirty native English speakers viewed videos of a talker producing sentences with beat gesture co-occurring with a pitch accented focus word (synchronous), beat gesture co-occurring with the onset of a subsequent non-focused word (asynchronous), or the absence of beat gesture (no beat). Across trials, increased amplitude and earlier latency were observed when beat gesture was temporally asynchronous with pitch accenting than when it was temporally synchronous with pitch accenting or absent. Moreover, temporal asynchrony of beat gesture relative to pitch accent increased trial-by-trial variability of N400 amplitude and latency and influenced the relationship between across-trial and trial-by-trial N400 latency. These results indicate that across-trial and trial-by-trial amplitude and latency of the N400 ERP reflect temporal integration of beat gesture and pitch accent during language comprehension, supporting extension of the integrated systems hypothesis of gesture-speech processing and neural noise theories to focus processing in typical adult populations.
Affiliation(s)
- Nicole Landi
- Haskins Laboratories, University of Connecticut, United States
- Julia Irwin
- Haskins Laboratories, Southern Connecticut State University, United States
13
Rohrer PL, Delais-Roussarie E, Prieto P. Beat Gestures for Comprehension and Recall: Differential Effects of Language Learners and Native Listeners. Front Psychol 2020; 11:575929. [PMID: 33192882] [PMCID: PMC7605175] [DOI: 10.3389/fpsyg.2020.575929]
Abstract
Previous work has shown how native listeners benefit from observing iconic gestures during speech comprehension tasks of both degraded and non-degraded speech. By contrast, effects of the use of gestures in non-native listener populations are less clear and studies have mostly involved iconic gestures. The current study aims to complement these findings by testing the potential beneficial effects of beat gestures (non-referential gestures which are often used for information- and discourse marking) on language recall and discourse comprehension using a narrative-drawing task carried out by native and non-native listeners. Using a within-subject design, 51 French intermediate learners of English participated in a narrative-drawing task. Each participant was assigned 8 videos to watch, where a native speaker describes the events of a short comic strip. Videos were presented in random order, in four conditions: in Native listening conditions with frequent, naturally-modeled beat gestures, in Native listening conditions without any gesture, in Non-native listening conditions with frequent, naturally-modeled beat gestures, and in Non-native listening conditions without any gesture. Participants watched each video twice and then immediately recreated the comic strip through their own drawings. Participants' drawings were then evaluated for discourse comprehension (via their ability to convey the main goals of the narrative through their drawings) and recall (via the number of gesturally-marked elements in the narration that were included in their drawings). Results showed that for native listeners, beat gestures had no significant effect on either recall or comprehension. In non-native speech, however, beat gestures led to significantly lower comprehension and recall scores. 
These results suggest that frequent, naturally-modeled beat gestures in longer discourses may increase cognitive load for language learners, resulting in negative effects on both memory and language understanding. These findings add to the growing body of literature that suggests that gesture benefits are not a "one-size-fits-all" solution, but rather may be contingent on factors such as language proficiency and gesture rate, particularly in that whenever beat gestures are repeatedly used in discourse, they inherently lose their saliency as markers of important information.
Affiliation(s)
- Patrick Louis Rohrer
- Université de Nantes, UMR 6310, Laboratoire de Linguistique de Nantes (LLING), Nantes, France
- Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain
- Pilar Prieto
- Grup d’Estudis de Prosòdia, Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
14
Vilà-Giménez I, Prieto P. Encouraging kids to beat: Children's beat gesture production boosts their narrative performance. Dev Sci 2020; 23:e12967. [DOI: 10.1111/desc.12967]
Affiliation(s)
- Ingrid Vilà-Giménez
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Catalonia, Spain
- Pilar Prieto
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Catalonia, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Catalonia, Spain
15
The facilitative effect of gestures on the neural processing of semantic complexity in a continuous narrative. Neuroimage 2019; 195:38-47. [DOI: 10.1016/j.neuroimage.2019.03.054]
16
Llanes-Coromina J, Vilà-Giménez I, Kushch O, Borràs-Comes J, Prieto P. Beat gestures help preschoolers recall and comprehend discourse information. J Exp Child Psychol 2018; 172:168-188. [DOI: 10.1016/j.jecp.2018.02.004]
17
Drijvers L, Özyürek A. Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain Lang 2018; 177-178:7-17. [PMID: 29421272] [DOI: 10.1016/j.bandl.2018.01.003]
Abstract
Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
Affiliation(s)
- Linda Drijvers
- Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands.
- Asli Özyürek
- Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands