1
Titus A, Peeters D. Multilingualism at the Market: A Pre-registered Immersive Virtual Reality Study of Bilingual Language Switching. J Cogn 2024; 7:35. PMID: 38638461; PMCID: PMC11025569; DOI: 10.5334/joc.359.
Abstract
Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals' capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) speak to a life-size addressee rather than only into a microphone, iii) convey a message that is relevant to that addressee rather than communicatively irrelevant, and iv) do so in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants acted as stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the reactive and proactive inhibitory mechanisms supporting bilingual language control may be different in the everyday life of a bilingual than in the (traditional) psycholinguistic laboratory.
Affiliation(s)
- Alex Titus
- Radboud University, Centre for Language Studies, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, the Netherlands
2
Raghavan R, Raviv L, Peeters D. What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition 2023; 240:105581. PMID: 37573692; DOI: 10.1016/j.cognition.2023.105581.
Abstract
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
Affiliation(s)
- Renuka Raghavan
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, The Netherlands
- Limor Raviv
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Social, Cognitive and Affective Neuroscience (cSCAN), University of Glasgow, United Kingdom
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, The Netherlands.
3
Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210094. PMID: 35876208; PMCID: PMC9310176; DOI: 10.1098/rstb.2021.0094.
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may already have been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human "interaction engine": comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler
- Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands
- Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
4
Liu(刘) R, Bögels S, Bird G, Medendorp WP, Toni I. Hierarchical Integration of Communicative and Spatial Perspective-Taking Demands in Sensorimotor Control of Referential Pointing. Cogn Sci 2022; 46:e13084. PMID: 35066907; PMCID: PMC9287027; DOI: 10.1111/cogs.13084.
Abstract
Although often regarded as a simple communicative behavior, referential pointing is cognitively complex because it invites a communicator to consider an addressee's knowledge. Although we know referential pointing is affected by addressees' physical location, it remains unclear whether and how communicators' inferences about addressees' mental representation of the interaction space influence sensorimotor control of referential pointing. The communicative perspective-taking task requires a communicator to point at one out of multiple referents either to instruct an addressee which one should be selected (communicative, COM) or to predict which one the addressee will select (non-communicative, NCOM), based on either which referents can be seen (Level-1 perspective-taking, PT1) or how the referents were perceived (Level-2 perspective-taking, PT2) by the addressee. Communicators took longer to initiate the movements in PT2 than in PT1 trials, and they held their pointing fingers for longer at the referent in COM than in NCOM trials. The novel findings of this study pertain to trajectory control of the pointing movements. Increasing both communicative and perspective-taking demands led to longer pointing trajectories, with an under-additive interaction between those two experimental factors. This finding suggests that participants generate communicative behaviors that are as informative as required rather than overly exaggerated displays, by integrating communicative and perspective-taking information hierarchically during sensorimotor control. This observation has consequences for models of human communication. It implies that the format of communicative and perspective-taking knowledge needs to be commensurate with the movement dynamics controlled by the sensorimotor system.
Affiliation(s)
- Rui(睿) Liu(刘)
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Geoffrey Bird
- Department of Experimental Psychology, University of Oxford
- Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology & Neuroscience, King's College London
- Ivan Toni
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
5
Abstract
Language allows us to efficiently communicate about the things in the world around us. Seemingly simple words like this and that are a cornerstone of our capability to refer, as they contribute to guiding the attention of our addressee to the specific entity we are talking about. Such demonstratives are acquired early in life, ubiquitous in everyday talk, often closely tied to our gestural communicative abilities, and present in all spoken languages of the world. Based on a review of recent experimental work, here we introduce a new conceptual framework of demonstrative reference. In the context of this framework, we argue that several physical, psychological, and referent-intrinsic factors dynamically interact to influence whether a speaker will use one demonstrative form (e.g., this) or another (e.g., that) in a given setting. However, the relative influence of these factors themselves is argued to be a function of the cultural language setting at hand, the theory-of-mind capacities of the speaker, and the affordances of the specific context in which the speech event takes place. It is demonstrated that the framework has the potential to reconcile findings in the literature that previously seemed irreconcilable. We show that the framework may to a large extent generalize to instances of endophoric reference (e.g., anaphora) and speculate that it may also describe the specific form and kinematics a speaker's pointing gesture takes. Testable predictions and novel research questions derived from the framework are presented and discussed.
Affiliation(s)
- David Peeters
- Department of Communication and Cognition, TiCC, Tilburg University, P.O. Box 90153, NL-5000 LE, Tilburg, The Netherlands.
- Emiel Krahmer
- Department of Communication and Cognition, TiCC, Tilburg University, P.O. Box 90153, NL-5000 LE, Tilburg, The Netherlands
- Alfons Maes
- Department of Communication and Cognition, TiCC, Tilburg University, P.O. Box 90153, NL-5000 LE, Tilburg, The Netherlands
6
Mesh K, Cruz E, van de Weijer J, Burenhult N, Gullberg M. Effects of Scale on Multimodal Deixis: Evidence From Quiahije Chatino. Front Psychol 2021; 11:584231. PMID: 33510669; PMCID: PMC7835423; DOI: 10.3389/fpsyg.2020.584231.
Abstract
As humans interact in the world, they often orient one another's attention to objects through the use of spoken demonstrative expressions and head and/or hand movements to point to the objects. Although indicating behaviors have frequently been studied in lab settings, we know surprisingly little about how demonstratives and pointing are used to coordinate attention in large-scale space and in natural contexts. This study investigates how speakers of Quiahije Chatino, an indigenous language of Mexico, use demonstratives and pointing to give directions to named places in large-scale space across multiple scales (local activity, district, state). The results show that the use and coordination of demonstratives and pointing change as the scale of search space for the target grows. At larger scales, demonstratives and pointing are more likely to occur together, and the two signals appear to manage different aspects of the search for the target: demonstratives orient attention primarily to the gesturing body, while pointing provides cues for narrowing the search space. These findings underscore the distinct contributions of speech and gesture to the linguistic composite, while illustrating the dynamic nature of their interplay. Abstracts in Spanish and Quiahije Chatino are provided as appendices. Se incluyen como apéndices resúmenes en español y en el chatino de San Juan Quiahije. SonG ktyiC reC inH, ngyaqC skaE ktyiC noE ndaH sonB naF ngaJ noI ngyaqC loE ktyiC reC, ngyaqC ranF chaqE xlyaK qoE chaqF jnyaJ noA ndywiqA renqA KchinA KyqyaC.
Affiliation(s)
- Kate Mesh
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Emiliana Cruz
- Department of Anthropology, Centro de Investigaciones y Estudios Superiores en Antropología Social (CIESAS-CDMX), Mexico City, Mexico
- Niclas Burenhult
- Lund University Humanities Lab, Lund University, Lund, Sweden; Centre for Languages and Literature, Lund University, Lund, Sweden
- Marianne Gullberg
- Lund University Humanities Lab, Lund University, Lund, Sweden; Centre for Languages and Literature, Lund University, Lund, Sweden
7
Abstract
Beat gestures, spontaneously produced biphasic movements of the hand, are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence which vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
Affiliation(s)
- Hans Rutger Bosker
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- David Peeters
- Department of Communication and Cognition, TiCC, Tilburg University, Tilburg, The Netherlands
8
Sparrow K, Lind C, van Steenbrugge W. Gesture, communication, and adult acquired hearing loss. J Commun Disord 2020; 87:106030. PMID: 32707420; DOI: 10.1016/j.jcomdis.2020.106030.
Abstract
Nonverbal communication, specifically hand and arm movements (commonly known as gesture), has long been recognized and explored as a significant element in human interaction as well as a potential compensatory behavior for individuals with communication difficulties. The use of gesture as a compensatory communication method in expressive and receptive human communication disorders has been the subject of much investigation. Yet within the context of adult acquired hearing loss, gesture has received limited research attention, and much remains unknown about patterns of nonverbal behaviors in conversations in which hearing loss is a factor. This paper presents key elements of the background of gesture studies and the theories of gesture function and production, followed by a review of research focused on adults with hearing loss and the role of gesture and gaze in rehabilitation. The current examination of co-speech gesture as a visual resource in everyday interactions involving adults with acquired hearing loss suggests the need for an evidence base to inform enhancements and changes in how rehabilitation services are delivered.
Affiliation(s)
- Karen Sparrow
- Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
- Christopher Lind
- Audiology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
- Willem van Steenbrugge
- Speech Pathology, College of Nursing & Health Sciences, Flinders University, GPO Box 2100, Adelaide, 5001, South Australia, Australia.
9
Vesper C, Sevdalis V. Informing, Coordinating, and Performing: A Perspective on Functions of Sensorimotor Communication. Front Hum Neurosci 2020; 14:168. PMID: 32528263; PMCID: PMC7264104; DOI: 10.3389/fnhum.2020.00168.
Abstract
Sensorimotor communication is a form of communication instantiated through body movements that are guided by both instrumental, goal-directed intentions and communicative, social intentions. Depending on the social interaction context, sensorimotor communication can serve different functions. This article aims to disentangle three of these functions: (a) an informing function of body movements, to highlight action intentions for an observer; (b) a coordinating function of body movements, to facilitate real-time action prediction in joint action; and (c) a performing function of body movements, to elicit emotional or aesthetic experiences in an audience. We provide examples of research addressing these different functions as well as some influencing factors, relating to individual differences, task characteristics, and situational demands. The article concludes by discussing the benefits of a closer dialog between separate lines of research on sensorimotor communication across different social contexts.
Affiliation(s)
- Cordula Vesper
- Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Vassilis Sevdalis
- Department of Public Health, Sport Science, Aarhus University, Aarhus, Denmark
10
Macuch Silva V, Holler J, Ozyurek A, Roberts SG. Multimodality and the origin of a novel communication system in face-to-face interaction. R Soc Open Sci 2020; 7:182056. PMID: 32218922; PMCID: PMC7029942; DOI: 10.1098/rsos.182056.
Abstract
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared with vocalization, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
Affiliation(s)
| | - Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Asli Ozyurek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Seán G. Roberts
- Department of Archaeology and Anthropology (excd.lab), University of Bristol, Bristol, UK
11
Winner T, Selen L, Murillo Oosterwijk A, Verhagen L, Medendorp WP, van Rooij I, Toni I. Recipient Design in Communicative Pointing. Cogn Sci 2019; 43:e12733. PMID: 31087589; PMCID: PMC6594194; DOI: 10.1111/cogs.12733.
Abstract
A long-standing debate in the study of human communication centers on the degree to which communicators tune their communicative signals (e.g., speech, gestures) for specific addressees, as opposed to taking a neutral or egocentric perspective. This tuning, called recipient design, is known to occur under special conditions (e.g., when errors in communication need to be corrected), but several researchers have argued that it is not an intrinsic feature of human communication, because that would be computationally too demanding. In this study, we contribute to this debate by studying a simple communicative behavior, communicative pointing, under conditions of successful (error-free) communication. Using an information-theoretic measure, called legibility, we present evidence of recipient design in communicative pointing. The legibility effect is present early in the movement, suggesting that it is an intrinsic part of the communicative plan. Moreover, it is reliable only from the viewpoint of the addressee, suggesting that the motor plan is tuned to the addressee. These findings suggest that recipient design is an intrinsic feature of human communication.
Affiliation(s)
- Tobias Winner
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Luc Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Anke Murillo Oosterwijk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University; Erasmus Research Institute of Management, Erasmus University Rotterdam
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Iris van Rooij
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Ivan Toni
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
12
Trujillo JP, Simanova I, Bekkering H, Özyürek A. Communicative intent modulates production and comprehension of actions and gestures: A Kinect study. Cognition 2018; 180:38-51. PMID: 29981967; DOI: 10.1016/j.cognition.2018.04.003.
Abstract
Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension. We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye gaze. With these data, we focused on two issues: first, whether and how actions and gestures are systematically modulated when performed in a communicative context; and second, whether observers exploit such kinematic information to classify an act as communicative. Our study showed that during production the communicative context modulates space-time dimensions of kinematics and elicits an increase in addressee-directed eye gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye gaze was unavailable. Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee-directed eye gaze and kinematic modulations within the wider context of human communication and learning.
Affiliation(s)
- James P Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands; Centre for Language Studies, Radboud University Nijmegen, The Netherlands
- Irina Simanova
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Harold Bekkering
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
13
Communicative knowledge pervasively influences sensorimotor computations. Sci Rep 2017; 7:4268. PMID: 28655870; PMCID: PMC5487354; DOI: 10.1038/s41598-017-04442-w.
Abstract
Referential pointing is a characteristically human behavior, which involves moving a finger through space to direct an addressee towards a desired mental state. Planning this type of action requires an interface between sensorimotor and conceptual abilities. A simple interface could supplement spatially-guided motor routines with communicative-ostensive cues. For instance, a pointing finger held still for an extended period of time could aid the addressee’s understanding, without altering the movement’s trajectory. A more complex interface would entail communicative knowledge penetrating the sensorimotor system and directly affecting pointing trajectories. We compare these two possibilities using motion analyses of referential pointing during multi-agent interactions. We observed that communicators produced ostensive cues that were sensitive to the communicative context. Crucially, we also observed pervasive adaptations to the pointing trajectories: they were tailored to the communicative context and to partner-specific information. These findings indicate that human referential pointing is planned and controlled on the basis of partner-specific knowledge, over and above the tagging of motor routines with ostensive cues.
14
Holler J, Bavelas J. Chapter 10. Multi-modal communication of common ground. Gesture Studies 2017. DOI: 10.1075/gs.7.11hol.
16
Gunter TC, Weinbrenner JED. When to Take a Gesture Seriously: On How We Use and Prioritize Communicative Cues. J Cogn Neurosci 2017; 29:1355-1367. PMID: 28358659; DOI: 10.1162/jocn_a_01125.
Abstract
When people talk, their speech is often accompanied by gestures. Although it is known that co-speech gestures can influence face-to-face communication, it is currently unclear to what extent they are actively used and under which premises they are prioritized to facilitate communication. We investigated these open questions in two experiments that varied how pointing gestures disambiguate the utterances of an interlocutor. Participants, whose event-related brain responses were measured, watched a video in which an actress was interviewed about, for instance, classical literature (e.g., Goethe and Shakespeare). While responding, the actress pointed systematically to the left side to refer to, for example, Goethe, or to the right to refer to Shakespeare. Her final statement was ambiguous and was combined with a pointing gesture. The P600 pattern found in Experiment 1 revealed that, when pointing was unreliable, gestures were only monitored for their cue validity and not used for reference tracking related to the ambiguity. However, when pointing was a valid cue (Experiment 2), it was used for reference tracking, as indicated by a reduced N400 for pointing. In summary, these findings suggest that a general prioritization mechanism is in use that constantly monitors and evaluates the use of communicative cues against communicative priors on the basis of accumulated error information.
Affiliation(s)
- Thomas C Gunter
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
17
Peeters D, Snijders TM, Hagoort P, Özyürek A. Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues. Neuropsychologia 2017; 95:21-29. DOI: 10.1016/j.neuropsychologia.2016.12.004.
18
Herbort O, Kunde W. How to point and to interpret pointing gestures? Instructions can reduce pointer-observer misunderstandings. PSYCHOLOGICAL RESEARCH 2016; 82:395-406. [PMID: 27832377 DOI: 10.1007/s00426-016-0824-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Accepted: 11/01/2016] [Indexed: 10/20/2022]
Abstract
In everyday communication, people often point. However, a pointing act is often misinterpreted as indicating a spatial referent position different from the one intended by the pointer. It has been suggested that this happens because pointers put the tip of the index finger close to the line joining the eye to the referent, whereas the person interpreting the pointing act extrapolates the vector defined by the arm and index finger. As this line crosses the eye-referent line, it suggests a different referent position than the one that was meant. In this paper, we test this hypothesis by manipulating the geometry underlying the production and interpretation of pointing gestures. In Experiment 1, we compared naïve pointer-observer dyads with dyads in which the discrepancy between the vectors defining the production and interpretation of pointing acts had been reduced. As predicted, this reduced pointer-observer misunderstandings compared to the naïve control group. In Experiment 2, we tested whether pointers elevate their arms more steeply than necessary to orient them toward the referent, because they visually steer their index fingertips onto the referents in their visual field. Misunderstandings between pointers and observers were smaller when pointers pointed without visual feedback. In sum, the results support the hypothesis that misunderstandings between (naïve) pointers and observers result from different spatial rules describing the production and interpretation of pointing gestures. Furthermore, we suggest that instructions that reduce the discrepancy between these spatial rules can improve communication through pointing gestures.
Affiliation(s)
- Oliver Herbort
- Department of Psychology, Julius-Maximilians-Universität Würzburg, Röntgenring 11, 97070, Würzburg, Germany.
- Wilfried Kunde
- Department of Psychology, Julius-Maximilians-Universität Würzburg, Röntgenring 11, 97070, Würzburg, Germany
19
McGillion M, Herbert JS, Pine J, Vihman M, dePaolis R, Keren-Portnoy T, Matthews D. What Paves the Way to Conventional Language? The Predictive Value of Babble, Pointing, and Socioeconomic Status. Child Dev 2016; 88:156-166. [DOI: 10.1111/cdev.12671] [Citation(s) in RCA: 89] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Julian Pine
- University of Liverpool
- ESRC International Centre for Language and Communicative Development (LuCiD)
20
Peeters D, Özyürek A. This and That Revisited: A Social and Multimodal Approach to Spatial Demonstratives. Front Psychol 2016; 7:222. [PMID: 26909066 PMCID: PMC4754391 DOI: 10.3389/fpsyg.2016.00222] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2015] [Accepted: 02/03/2016] [Indexed: 11/17/2022] Open
Affiliation(s)
- David Peeters
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Aslı Özyürek
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Centre for Language Studies, Radboud University, Nijmegen, the Netherlands