1
Ter Bekke M, Levinson SC, van Otterdijk L, Kühn M, Holler J. Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition 2024; 248:105806. [PMID: 38749291] [DOI: 10.1016/j.cognition.2024.105806]
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
Affiliation(s)
- Marlijn Ter Bekke
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Lina van Otterdijk
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Michelle Kühn
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
2
Eleuteri V, Bates L, Rendle-Worthington J, Hobaiter C, Stoeger A. Multimodal communication and audience directedness in the greeting behaviour of semi-captive African savannah elephants. Commun Biol 2024; 7:472. [PMID: 38724671] [PMCID: PMC11082179] [DOI: 10.1038/s42003-024-06133-5]
Abstract
Many species communicate by combining signals into multimodal combinations. Elephants live in multi-level societies where individuals regularly separate and reunite. Upon reunion, elephants often engage in elaborate greeting rituals, where they use vocalisations and body acts produced with different body parts and of various sensory modalities (e.g., audible, tactile). However, whether these body acts represent communicative gestures and whether elephants combine vocalisations and gestures during greeting is still unknown. Here we use separation-reunion events to explore the greeting behaviour of semi-captive elephants (Loxodonta africana). We investigate whether elephants direct silent-visual, audible, and tactile gestures at their audience based on the audience's state of visual attention, and how they combine these gestures with vocalisations during greeting. We show that elephants select gesture modality appropriately according to their audience's visual attention, suggesting evidence of first-order intentional communicative use. We further show that elephants integrate vocalisations and gestures into different combinations and orders. The most frequent combination consists of rumble vocalisations with ear-flapping gestures, used most often between females. By showing that a species evolutionarily distant from our own primate lineage is sensitive to its audience's visual attention when gesturing and combines gestures with vocalisations, our study advances our understanding of the emergence of first-order intentionality and multimodal communication across taxa.
Affiliation(s)
- Vesta Eleuteri
- Department of Behavioral & Cognitive Biology, University of Vienna, Vienna, Austria.
- Lucy Bates
- Department of Psychology, University of Portsmouth, Portsmouth, UK
- Catherine Hobaiter
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Angela Stoeger
- Department of Behavioral & Cognitive Biology, University of Vienna, Vienna, Austria.
- Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria.
3
Qirko H. Pace setting as an adaptive precursor of rhythmic musicality. Ann N Y Acad Sci 2024; 1533:5-15. [PMID: 38412090] [DOI: 10.1111/nyas.15120]
Abstract
Human musicality (the capacity to make and appreciate music) is difficult to explain in evolutionary terms, though many theories attempt to do so. This paper focuses on musicality's potential adaptive precursors, particularly as related to rhythm. It suggests that pace setting for walking and running long distances over extended time periods (endurance locomotion, EL) is a good candidate for an adaptive building block of rhythmic musicality. The argument is as follows: (1) over time, our hominin lineage developed a host of adaptations for efficient EL; (2) the ability to set and maintain a regular pace was a crucial adaptation in the service of EL, providing proximate rewards for successful execution; (3) maintaining a pace in EL occasioned hearing, feeling, and attending to regular rhythmic patterns; (4) these rhythmic patterns, as well as proximate rewards for maintaining them, became disassociated from locomotion and entrained in new proto-musical contexts. Support for the model and possibilities for generating predictions to test it are discussed.
Affiliation(s)
- Hector Qirko
- Department of Sociology and Anthropology, College of Charleston, Charleston, South Carolina, USA
4
van Boekholt B, Wilkinson R, Pika S. Bodies at play: the role of intercorporeality and bodily affordances in coordinating social play in chimpanzees in the wild. Front Psychol 2024; 14:1206497. [PMID: 38292528] [PMCID: PMC10826840] [DOI: 10.3389/fpsyg.2023.1206497]
Abstract
The comparative approach is a crucial method to gain a better understanding of the behavior of living human and nonhuman animals to then draw informed inferences about the behavior of extinct ancestors. One focus has been on disentangling the puzzle of language evolution. Traditionally, studies have predominantly focused on intentionally produced signals in communicative interactions. However, in collaborative and highly dynamic interactions such as play, underlying intentionality is difficult to assess and often interactions are negotiated via body movements rather than signals. This "lack" of signals has led to this dynamic context being widely ignored in comparative studies. The aim of this paper is threefold: First, we will show how comparative research into communication can benefit from taking the intentionality-agnostic standpoint used in conversation analysis. Second, we will introduce the concepts of 'intercorporeality' and 'bodily affordance', and show how they can be applied to the analysis of communicative interactions of nonhuman animals. Third, we will use these concepts to investigate how chimpanzees (Pan troglodytes) initiate, end, and maintain 'contact social play'. Our results showed that bodily affordances are able to capture elements of interactions that more traditional approaches failed to describe. Participants made use of bodily affordances to achieve coordinated engagement in contact social play. Additionally, these interactions could display a sequential organization by which one 'move' by a chimpanzee was responded to with an aligning 'move', which allowed for the co-construction of the activity underway. Overall, the present approach innovates on three fronts: First, it allows for the analysis of interactions that are often ignored because they do not fulfil criteria of intentionality, and/or consist purely of body movements. Second, adopting concepts from research on human interaction enables a better comparison of communicative interactions in other animal species without too narrow a focus on intentional signaling only. Third, adopting a stance from interaction research that highlights how practical action can also be communicative, our results show that chimpanzees can communicate through their embodied actions as well as through signaling. With this first step, we hope to inspire new research into dynamic day-to-day interactions involving both "traditional" signals and embodied actions, which, in turn, can provide insights into evolutionary precursors of human language.
Affiliation(s)
- Bas van Boekholt
- Comparative BioCognition, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Ray Wilkinson
- Division of Human Communication Sciences, School of Allied Health Professions, Nursing and Midwifery, University of Sheffield, Sheffield, United Kingdom
- Simone Pika
- Comparative BioCognition, Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
5
Nota N, Trujillo JP, Holler J. Conversational Eyebrow Frowns Facilitate Question Identification: An Online Study Using Virtual Avatars. Cogn Sci 2023; 47:e13392. [PMID: 38058215] [DOI: 10.1111/cogs.13392]
Abstract
Conversation is a time-pressured environment. Recognizing a social action (the "speech act," such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers' intentions.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen
- Max Planck Institute for Psycholinguistics, Nijmegen
6
Bi X, Cui H, Ma Y. Hyperscanning Studies on Interbrain Synchrony and Child Development: A Narrative Review. Neuroscience 2023; 530:38-45. [PMID: 37657749] [DOI: 10.1016/j.neuroscience.2023.08.035]
Abstract
Social interactions between parents and children are closely linked with children's development, and interbrain synchrony has been shown to be a neural marker of social interaction. However, to truly capture the essence of social interactions through interbrain synchrony, it is necessary to consider the parental and child brains simultaneously and to adequately record neurological signals during parent-child interactions in interactive tasks. In the current review, we cover three main topics. First, we discuss the correlation between parent-child interbrain synchrony and the development of cognitive (e.g., emotion regulation, attention, and learning) and behavioral abilities (e.g., cooperation, problem-solving) in children. Second, we examine the different neural mechanisms of interbrain synchrony in mother-child and father-child interactions, aiming to highlight the separate roles of the mother and father in child development. Last, we integrate four methods to enhance interbrain synchrony: communication patterns, nonverbal behavior, music, and multichannel stimulation. A significant correlation exists between parent-child interbrain synchrony and the development of children's cognitive and behavioral abilities. This summary may be useful for expanding researchers' and practitioners' understanding of the ways in which parenting and the parent-child relationship shape children's cognitive and behavioral abilities.
Affiliation(s)
- Xiaoyan Bi
- School of Education, Guangzhou University, Guangzhou, China; Institution of Science, Chinese Academy of Sciences, Beijing, China
- Hongbo Cui
- School of Education, Guangzhou University, Guangzhou, China
- Yankun Ma
- School of Education, Guangzhou University, Guangzhou, China.
7
Verga L, Kotz SA, Ravignani A. The evolution of social timing. Phys Life Rev 2023; 46:131-151. [PMID: 37419011] [DOI: 10.1016/j.plrev.2023.06.006]
Abstract
Sociality and timing are tightly interrelated in human interaction, as seen in turn-taking or synchronised dance movements. They are also evident in the communicative acts of other species, which may be pleasurable but are also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies less fruitful than they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given its integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
Affiliation(s)
- Laura Verga
- Comparative Bioacoustic Group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Andrea Ravignani
- Comparative Bioacoustic Group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
8
Nota N, Trujillo JP, Holler J. Specific facial signals associate with categories of social actions conveyed through questions. PLoS One 2023; 18:e0288104. [PMID: 37467253] [DOI: 10.1371/journal.pone.0288104]
Abstract
The early recognition of fundamental social actions, like questions, is crucial for understanding the speaker's intended message and planning a timely response in conversation. Questions themselves may express more than one social action category (e.g., an information request "What time is it?", an invitation "Will you come to my party?" or a criticism "Are you crazy?"). Although human language use occurs predominantly in a multimodal context, prior research on social actions has mainly focused on the verbal modality. This study breaks new ground by investigating how conversational facial signals may map onto the expression of different types of social actions conveyed through questions. The distribution, timing, and temporal organization of facial signals across social actions were analysed in a rich corpus of naturalistic, dyadic face-to-face Dutch conversations. These social actions were: Information Requests, Understanding Checks, Self-Directed questions, Stance or Sentiment questions, Other-Initiated Repairs, Active Participation questions, questions for Structuring, Initiating or Maintaining Conversation, and Plans and Actions questions. This is the first study to reveal differences in the distribution and timing of facial signals across different types of social actions. The findings raise the possibility that facial signals may facilitate social action recognition during language processing in multimodal face-to-face interaction.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- James P Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
9
Levinson SC. Gesture, spatial cognition and the evolution of language. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210481. [PMID: 36871589] [PMCID: PMC9985965] [DOI: 10.1098/rstb.2021.0481]
Abstract
Human communication displays a striking contrast between the diversity of languages and the universality of the principles underlying their use in conversation. Despite the importance of this interactional base, it is not obvious that it heavily imprints the structure of languages. However, a deep-time perspective suggests that early hominin communication was gestural, in line with all the other Hominidae. This gestural phase of early language development seems to have left its traces in the way in which spatial concepts, implemented in the hippocampus, provide organizing principles at the heart of grammar. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
Affiliation(s)
- Stephen C. Levinson
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525XD, The Netherlands
10
Lameira AR, Moran S. Life of p: A consonant older than speech. Bioessays 2023; 45:e2200246. [PMID: 36811380] [DOI: 10.1002/bies.202200246]
Abstract
Which sounds composed the first spoken languages? Archetypal sounds are not phylogenetically or archeologically recoverable, but comparative linguistics and primatology provide an alternative approach. Labial articulations are the most common speech sound, being virtually universal across the world's languages. Of all labials, the plosive 'p' sound, as in 'Pablo Picasso', transcribed /p/, is the most predominant voiceless sound globally and one of the first sounds to emerge in human infant canonical babbling. Global omnipresence and ontogenetic precocity imply that /p/-like sounds could predate the first major linguistic diversification event(s) in humans. Indeed, great ape vocal data support this view, namely, the only cultural sound shared across all great ape genera is articulatorily homologous to a rolling or trilled /p/, the 'raspberry'. /p/-like labial sounds represent an 'articulatory attractor' among living hominids and are likely among the oldest phonological features to have ever emerged in linguistic systems.
Affiliation(s)
- Steven Moran
- Department of Anthropology, University of Miami, Coral Gables, Florida, USA
- Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
11
Reece A, Cooney G, Bull P, Chung C, Dawson B, Fitzpatrick C, Glazer T, Knox D, Liebscher A, Marin S. The CANDOR corpus: Insights from a large multimodal dataset of naturalistic conversation. Sci Adv 2023; 9:eadf3197. [PMID: 37000886] [PMCID: PMC10065445] [DOI: 10.1126/sciadv.adf3197]
Abstract
People spend a substantial portion of their lives engaged in conversation, and yet, our scientific understanding of conversation is still in its infancy. Here, we introduce a large, novel, and multimodal corpus of 1656 conversations recorded in spoken English. This 7+ million word, 850-hour corpus totals more than 1 terabyte of audio, video, and transcripts, with moment-to-moment measures of vocal, facial, and semantic expression, together with an extensive survey of speakers' postconversation reflections. By taking advantage of the considerable scope of the corpus, we explore many examples of how this large-scale public dataset may catalyze future research, particularly across disciplinary boundaries, as scholars from a variety of fields appear increasingly interested in the study of conversation.
Affiliation(s)
- Gus Cooney
- University of Pennsylvania, Philadelphia, PA 19104, USA
- Peter Bull
- DrivenData Inc., Berkeley, CA, 94709, USA
- Dean Knox
- University of Pennsylvania, Philadelphia, PA 19104, USA
12
Markov I, Kharitonova K, Grigorenko EL. Language: Its Origin and Ongoing Evolution. J Intell 2023; 11:61. [PMID: 37103246] [PMCID: PMC10142271] [DOI: 10.3390/jintelligence11040061]
Abstract
With the present paper, we sought to use research findings to illustrate the following thesis: the evolution of language follows the principles of human evolution. We argued that language does not exist for its own sake; it is one of a multitude of skills that developed to achieve a shared communicative goal, and all its features are reflective of this. Ongoing emerging language adaptations strive to better fit the present state of the human species. Theories of language have evolved from single-modality to multimodal, from human-specific to usage-based and goal-driven. We proposed that language should be viewed as a multitude of communication techniques that have developed and are developing in response to selective pressure. The precise nature of language is shaped by the needs of the species (arguably, uniquely H. sapiens) utilizing it, and the emergence of new situational adaptations, as well as new forms and types of human language, demonstrates that language is an act driven by a communicative goal. This article serves as an overview of the current state of psycholinguistic research on the topic of language evolution.
Affiliation(s)
- Ilia Markov
- Department of Psychology, University of Houston, Houston, TX 77204, USA
- Texas Institute for Measurement, Evaluation, and Statistics (TIMES), The University of Houston, Houston, TX 77204, USA
- Center for Cognitive Sciences, Sirius University for Science and Technology, Sochi 354340, Russia
- Elena L. Grigorenko
- Department of Psychology, University of Houston, Houston, TX 77204, USA
- Texas Institute for Measurement, Evaluation, and Statistics (TIMES), The University of Houston, Houston, TX 77204, USA
- Center for Cognitive Sciences, Sirius University for Science and Technology, Sochi 354340, Russia
- Baylor College of Medicine, Houston, TX 77030, USA
- Child Study Center and Haskins Laboratories, Yale University, New Haven, CT 06520, USA
- Rector’s Office, Moscow State University for Psychology and Education, Moscow 127051, Russia
13
Pronina M, Grofulovic J, Castillo E, Prieto P, Igualada A. Narrative Abilities at Age 3 Are Associated Positively With Gesture Accuracy but Negatively With Gesture Rate. J Speech Lang Hear Res 2023; 66:951-965. [PMID: 36763840] [DOI: 10.1044/2022_JSLHR-21-00414]
Abstract
PURPOSE: Though the frequency of gesture use by infants has been related to the development of different language abilities in the initial stages of language acquisition, less is known about whether this frequency (or "gesture rate") continues to correlate with language measures in later stages of language acquisition, or whether the relation to language skills also depends on the accuracy with which such gestures are produced (or reproduced). This study sets out to explore whether preschoolers' narrative abilities are related to these two variables, namely, gesture rate and gesture accuracy.

METHOD: A total of 31 typically developing 3- to 4-year-old children participated in a multimodal imitation task, a context-based gesture elicitation task, and a narrative retelling task.

RESULTS: Results showed that there was a significant positive correlation between the children's narrative scores and their gesture accuracy scores, whereas higher rates of gesture use did not correlate with higher levels of narrative skill. Further multimodal regression analysis confirmed that gesture accuracy was a positive predictor of narrative performance, and moreover, showed that gesture rate was a negative predictor.

CONCLUSIONS: The fact that both gesture accuracy and gesture rate are strongly and differently linked to oral language abilities supports the claim that language and gesture are highly complex systems, and that complementary measures of gesture performance can help us assess with greater granularity the relationship between gesture and language development. These findings highlight the need to use gesture during clinical assessments as an informative indicator of language development and suggest that future research should further investigate the value of multimodal programs in the treatment of language and communication disorders.
Affiliation(s)
- Mariia Pronina
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Spain
- Eva Castillo
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Spain
- Pilar Prieto
- Department of Translation and Language Sciences, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Alfonso Igualada
- Faculty of Psychology and Education Sciences, Universitat Oberta de Catalunya, Barcelona, Spain
- Institut Guttmann, Institut Universitari de Neurorehabilitació, Barcelona, Spain
14
Tuomenoksa A, Beeke S, Klippi A. People with aphasia and their family members proposing joint future activities in everyday conversations: A conversation analytic study. Int J Lang Commun Disord 2023; 58:310-325. [PMID: 36204981] [DOI: 10.1111/1460-6984.12786]
Abstract
BACKGROUND In everyday conversations, a person with aphasia (PWA) compensates for their language impairment by relying on multimodal and material resources, as well as on their conversation partners. However, some social actions people perform in authentic interaction, proposing a joint future activity, for example, ordinarily rely on a speaker producing a multi-word utterance. Thus, the language impairment connected to aphasia may impede the production of such proposals, consequently hindering the participation of PWAs in the planning of future activities. AIMS To investigate (1) how people with post-stroke chronic aphasia construct proposals of joint future activities in everyday conversations compared with their familiar conversation partners (FCPs); and (2) how aphasia severity impacts on such proposals and their uptake. METHODS & PROCEDURES Ten hours of video-recorded everyday conversations from seven persons with mild and severe aphasia of varying subtypes and their FCPs were explored using conversation analysis. We identified 59 instances where either party proposed a joint future activity and grouped such proposals according to their linguistic format and sequential position. Data are in Finnish. OUTCOMES & RESULTS People with mild aphasia made about the same number of proposals as their FCPs and used similar linguistic formats to their FCPs when proposing joint future activities. This included comparable patterns associated with producing a time reference, which was routinely used when a proposal initiated a planning activity. Mild aphasia manifested itself as within-turn word searches that were typically self-repaired. In contrast, people with severe aphasia made considerably fewer proposals compared with their FCPs, the proposal formats being linguistically unidentifiable. This resulted in delayed acknowledgement of the PWAs' talk as a proposal. 
CONCLUSIONS & IMPLICATIONS Mild aphasia appears not to impede PWAs' ability to participate in the planning of joint future activities, whereas severe aphasia is a potential limitation. To address this possible participatory barrier, we discuss clinical implications for both therapist-led aphasia treatment and conversation partner training. WHAT THIS PAPER ADDS What is already known on the subject PWAs use multimodal resources to compensate for their language impairment in everyday conversations. However, certain social actions, such as proposing a joint future activity, cannot ordinarily be accomplished without language. What this paper adds to existing knowledge The study demonstrates that proposing joint future activities is a common social action in everyday conversations between PWAs and their family members. People with mild aphasia used typical linguistic proposal formats, and aphasic word-finding problems did not prevent FCPs from understanding the talk as a proposal. People with severe aphasia constructed proposals infrequently, using their remaining linguistic resources, a newspaper connecting the talk to the future, and the support of FCPs. What are the potential or actual clinical implications of this work? We suggest designing aphasia treatment with reference to the social action of proposing a joint future activity. Therapist-led treatment could model typical linguistic proposal formats, whereas communication partner training could incorporate FCP strategies that scaffold PWAs' opportunities to construct proposals of joint future activities. This would enhance aphasia treatment's ecological validity, promote its generalization and ultimately enable PWAs to participate in everyday planning activities.
Affiliation(s)
- Asta Tuomenoksa, Division of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Suzanne Beeke, Language & Cognition Research Department, University College London, London, UK
- Anu Klippi, Division of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
15
Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cogn Affect Behav Neurosci 2023; 23:340-353. [PMID: 36823247] [PMCID: PMC9949912] [DOI: 10.3758/s13415-023-01074-8]
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded EEG from 60 Dutch adults while they were watching videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target noun by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, where listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
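The "independent facilitatory effects" and "interactive effect" described above correspond to main effects and a difference-of-differences contrast in the study's 2×2 design (discourse predictability × gesture type). The sketch below makes that distinction concrete with invented N400-like amplitudes; the numbers are purely illustrative and are not the study's data.

```python
# Toy 2x2 contrast: main effect vs. interaction on invented amplitudes (µV).
# More negative = harder processing; all values are made up for illustration.
amps = {
    ("predictable", "meaningless"): -3.0,
    ("predictable", "iconic"): -0.5,
    ("non-predictable", "meaningless"): -5.5,
    ("non-predictable", "iconic"): -4.5,
}

# Iconic-gesture benefit within each discourse condition
benefit_pred = amps[("predictable", "iconic")] - amps[("predictable", "meaningless")]
benefit_nonpred = amps[("non-predictable", "iconic")] - amps[("non-predictable", "meaningless")]

# Main effect of gesture: benefit averaged over discourse conditions
gesture_effect = (benefit_pred + benefit_nonpred) / 2

# Interaction: difference of differences. Non-zero means the gesture
# benefit depends on discourse predictability.
interaction = benefit_pred - benefit_nonpred

print(gesture_effect, interaction)
```

With these invented values, the gesture benefit differs across discourse conditions, which is exactly what a non-zero interaction term captures.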
16
Bodur K, Nikolaus M, Prévot L, Fourtassi A. Using video calls to study children's conversational development: The case of backchannel signaling. Front Comput Sci 2023. [DOI: 10.3389/fcomp.2023.1088752]
Abstract
Understanding children's conversational skills is crucial for understanding their social, cognitive, and linguistic development, with important applications in health and education. To develop theories based on quantitative studies of conversational development, we need (i) data recorded in naturalistic contexts (e.g., child-caregiver dyads talking in their daily environment) where children are more likely to show much of their conversational competencies, as opposed to controlled laboratory contexts which typically involve talking to a stranger (e.g., the experimenter); (ii) data that allows for clear access to children's multimodal behavior in face-to-face conversations; and (iii) data whose acquisition method is cost-effective with the potential of being deployed at a large scale to capture individual and cultural variability. The current work is a first step to achieving this goal. We built a corpus of video chats involving children in middle childhood (6–12 years old) and their caregivers using a weakly structured word-guessing game to prompt spontaneous conversation. The manual annotations of these recordings have shown a similarity in the frequency distribution of multimodal communicative signals from both children and caregivers. As a case study, we capitalize on this rich behavioral data to study how verbal and non-verbal cues contribute to the children's conversational coordination. In particular, we looked at how children learn to engage in coordinated conversations, not only as speakers but also as listeners, by analyzing children's use of backchannel signaling (e.g., verbal “mh” or head nods) during these conversations. Contrary to results from previous in-lab studies, our use of a more spontaneous conversational setting (as well as more adequate controls) revealed that school-age children are strikingly close to adult-level mastery in many measures of backchanneling. 
Our work demonstrates the usefulness of recent technology in video calling for acquiring quality data that can be used for research on children's conversational development in the wild.
17
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti, Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari, Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani, Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
18
Inclusivity induced adaptive graph learning for multi-view clustering. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110424]
19
Abreu F, Pika S. Turn-taking skills in mammals: A systematic review into development and acquisition. Front Ecol Evol 2022. [DOI: 10.3389/fevo.2022.987253]
Abstract
How human language evolved remains one of the most intriguing questions in science, and different approaches have been used to tackle this question. A recent hypothesis, the Interaction Engine Hypothesis, postulates that language was made possible through the special capacity for social interaction involving different social cognitive skills (e.g., joint attention, common ground) and specific characteristics such as face-to-face interaction, mutual gaze and turn-taking, the exchange of rapid communicative turns. Recently, it has been argued that this turn-taking infrastructure may be a foundational and ancient mechanism of the layered system of language because communicative turn-taking has been found in human infants and across several non-human primate species. Moreover, there is some evidence for turn-taking in different mammalian taxa, especially those capable of vocal learning. Surprisingly, however, the existing studies have mainly focused on turn-taking production of adult individuals, while little is known about its emergence and development in young individuals. Hence, the aim of the current paper was twofold: first, we carried out a systematic review of turn-taking development and acquisition in mammals to evaluate possible research bias and existing gaps. Second, we highlight research avenues to spur more research into this domain and investigate whether distinct turn-taking elements can be found in other non-human animal species. Since mammals exhibit an extended development period, including learning and strong parental care, they represent an excellent model group in which to investigate the acquisition and development of turn-taking abilities. We performed a systematic review including a wide range of terms and found 21 studies presenting findings on turn-taking abilities in infants and juveniles. Most of these studies were from the last decade, showing an increased interest in this field over the years.
Overall, we found a considerable variation in the terminologies and methodological approaches used. In addition, studies investigating turn-taking abilities across different development periods and in relation to different social partners were very rare, thereby hampering direct, systematic comparisons within and across species. Nonetheless, the results of some studies suggested that specific turn-taking elements are innate, while others are acquired during development (e.g., flexibility). Finally, we pinpoint fruitful research avenues and hypotheses to move the field of turn-taking development forward and improve our understanding of the impact of turn-taking on language evolution.
20
Deaf Children Need Rich Language Input from the Start: Support in Advising Parents. Children (Basel) 2022; 9:1609. [PMID: 36360337] [PMCID: PMC9688581] [DOI: 10.3390/children9111609]
Abstract
Bilingual bimodalism is a great benefit to deaf children at home and in schooling. Deaf signing children perform better overall than non-signing deaf children, regardless of whether they use a cochlear implant. Raising a deaf child in a speech-only environment can carry cognitive and psycho-social risks that may have lifelong adverse effects. For children born deaf, or who become deaf in early childhood, we recommend comprehensible multimodal language exposure and engagement in joint activity with parents and friends to assure age-appropriate first-language acquisition. Accessible visual language input should begin as close to birth as possible. Hearing parents will need timely and extensive support; thus, we propose that, upon the birth of a deaf child and through the preschool years, among other things, the family needs an adult deaf presence in the home for several hours every day to be a linguistic model, to guide the family in taking sign language lessons, to show the family how to make spoken language accessible to their deaf child, and to be an encouraging liaison to deaf communities. While such a support program will be complicated and challenging to implement, it is far less costly than the harm of linguistic deprivation.
21
Rühlemann C. How is emotional resonance achieved in storytellings of sadness/distress? Front Psychol 2022; 13:952119. [PMID: 36248512] [PMCID: PMC9559217] [DOI: 10.3389/fpsyg.2022.952119]
Abstract
Storytelling pivots around stance seen as a window unto emotion: storytellers project a stance expressing their emotion toward the events and recipients preferably mirror that stance by affiliating with the storyteller's stance. Whether the recipient's affiliative stance is at the same time expressive of his/her emotional resonance with the storyteller and of emotional contagion is a question that has recently attracted intriguing research in Physiological Interaction Research. Connecting to this line of inquiry, this paper concerns itself with storytellings of sadness/distress. Its aim is to identify factors that facilitate emotion contagion in storytellings of sadness/distress and factors that impede it. Given the complexity and novelty of this question, this study is designed as a pilot study to scour the terrain and sketch out an interim roadmap before a larger study is undertaken. The database is small, comprising two storytellings of sadness/distress. The methodology used to address the above research question is expansive: it includes CA methods to transcribe and analyze interactionally relevant aspects of the storytelling interaction; it draws on psychophysiological measures to establish whether and to what degree emotional resonance between co-participants is achieved. In discussing possible reasons why resonance is (not or not fully) achieved, the paper embarks on an extended analysis of the storytellers' multimodal storytelling performance (reenactments, prosody, gaze, gesture) and considers factors lying beyond the storyteller's control, including relevance, participation framework, personality, and susceptibility to emotion contagion.
22
Pleyer M, Lepic R, Hartmann S. Compositionality in Different Modalities: A View from Usage-Based Linguistics. Int J Primatol 2022. [DOI: 10.1007/s10764-022-00330-x]
Abstract
The field of linguistics concerns itself with understanding the human capacity for language. Compositionality is a key notion in this research tradition. Compositionality refers to the notion that the meaning of a complex linguistic unit is a function of the meanings of its constituent parts. However, the question as to whether compositionality is a defining feature of human language is a matter of debate: usage-based and constructionist approaches emphasize the pervasive role of idiomaticity in language, and argue that strict compositionality is the exception rather than the rule. We review the major discussion points on compositionality from a usage-based point of view, taking both spoken and signed languages into account. In addition, we discuss theories that aim at accounting for the emergence of compositional language through processes of cultural transmission as well as the debate of whether animal communication systems exhibit compositionality. We argue for a view that emphasizes the analyzability of complex linguistic units, providing a template for accounting for the multimodal nature of human language.
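The contrast the abstract draws between strict compositionality and idiomaticity can be made concrete with a toy sketch (not code from the paper): phrase meaning is computed from the meanings of its parts unless the phrase is stored as an unanalyzed idiom. All lexicon entries and "meanings" below are invented for illustration.

```python
# Toy contrast between compositional and idiomatic meaning.
# All entries and meaning labels are invented for illustration.
word_meanings = {
    "red": "RED",
    "ball": "BALL",
    "kick": "KICK",
    "the": "DEF",
    "bucket": "BUCKET",
}

# Idioms are stored as wholes, bypassing composition ("kick the bucket" = die).
idioms = {("kick", "the", "bucket"): "DIE"}

def meaning(phrase):
    """Return the stored idiom meaning if one exists; otherwise compose."""
    words = tuple(phrase.split())
    if words in idioms:
        return idioms[words]
    # Strict compositionality: meaning is a function of the parts.
    return "+".join(word_meanings[w] for w in words)

print(meaning("red ball"))         # composed from parts -> RED+BALL
print(meaning("kick the bucket"))  # retrieved as a whole -> DIE
```

The usage-based point is that stored, non-compositional units like the idiom here are pervasive rather than exceptional, even though they may remain partially analyzable.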
23
Burkart JM, Adriaense JEC, Brügger RK, Miss FM, Wierucka K, van Schaik CP. A convergent interaction engine: vocal communication among marmoset monkeys. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210098. [PMID: 35876206] [PMCID: PMC9315454] [DOI: 10.1098/rstb.2021.0098]
Abstract
To understand the primate origins of the human interaction engine, it is worthwhile to focus not only on great apes but also on callitrichid monkeys (marmosets and tamarins). Like humans, but unlike great apes, callitrichids are cooperative breeders, and thus habitually engage in coordinated joint actions, for instance when an infant is handed over from one group member to another. We first explore the hypothesis that these habitual cooperative interactions, the marmoset interactional ethology, are supported by the same key elements as found in the human interaction engine: mutual gaze (during joint action), turn-taking, volubility, as well as group-wide prosociality and trust. Marmosets show clear evidence of these features. We next examine the prediction that, if such an interaction engine can indeed give rise to more flexible communication, callitrichids may also possess elaborate communicative skills. A review of marmoset vocal communication confirms unusual abilities in these small primates: high volubility and large vocal repertoires, vocal learning and babbling in immatures, and voluntary usage and control. We end by discussing how the adoption of cooperative breeding during human evolution may have catalysed language evolution by adding these convergent consequences to the great ape-like cognitive system of our hominin ancestors. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- J. M. Burkart, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution ISLE, University of Zurich, Affolternstrasse 56, 8050 Zurich, Switzerland
- J. E. C. Adriaense, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- R. K. Brügger, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- F. M. Miss, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- K. Wierucka, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- C. P. van Schaik, Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution ISLE, University of Zurich, Affolternstrasse 56, 8050 Zurich, Switzerland
24
Fröhlich M, van Schaik CP. Social tolerance and interactional opportunities as drivers of gestural redoings in orang-utans. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210106. [PMID: 35876198] [PMCID: PMC9310174] [DOI: 10.1098/rstb.2021.0106]
Abstract
Communicative repair is a fundamental and universal element of interactive language use. It has been suggested that the persistence and elaboration after communicative breakdown in nonhuman primates constitute two evolutionary building blocks of this capacity, but the conditions favouring it are poorly understood. Because zoo-housed individuals of some species are more social and more terrestrial than in the wild, they should be more likely to show gestural redoings (i.e. both repetition and elaboration) after communicative failure in the coordination of their joint activities. Using a large comparative sample of wild and zoo-housed orang-utans of two different species, we could confirm this prediction for elaboration, the more flexible form of redoings. Specifically, results showed that gestural redoings in general were best predicted by the specific social action context (i.e. social play) and interaction dyad (i.e. beyond mother-offspring), although they were least frequent in captive Bornean orang-utans. For gestural elaboration, we found the expected differences between captive and wild research settings in Borneans, but not in Sumatrans (the more socially tolerant species). Moreover, we found that the effectiveness of elaboration in eliciting responses was higher in Sumatrans, especially the captive ones, whereas effectiveness of mere repetition was influenced by neither species nor setting. We conclude that the socio-ecological environment plays a central role in the emergence of communicative repair strategies in great apes. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Marlen Fröhlich, Department of Anthropology, University of Zurich, 8057 Zurich, Switzerland; Paleoanthropology, Institute for Archaeological Sciences, Senckenberg Center for Human Evolution and Paleoenvironment, University of Tübingen, 72070 Tübingen, Germany
- Carel P. van Schaik, Department of Anthropology, University of Zurich, 8057 Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, 8050 Zurich, Switzerland; Comparative Socioecology Research Group, Max Planck Institute of Animal Behavior, 78467 Konstanz, Germany
25
Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210094. [PMID: 35876208] [PMCID: PMC9310176] [DOI: 10.1098/rstb.2021.0094]
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler, Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands; Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
26
Bohn M, Liebal K, Oña L, Tessler MH. Great ape communication as contextual social inference: a computational modelling perspective. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210096. [PMID: 35876204] [PMCID: PMC9310183] [DOI: 10.1098/rstb.2021.0096]
Abstract
Human communication has been described as a contextual social inference process. Research into great ape communication has been inspired by this view to look for the evolutionary roots of the social, cognitive and interactional processes involved in human communication. This approach has been highly productive, yet it is partly compromised by the widespread focus on how great apes use and understand individual signals. This paper introduces a computational model that formalizes great ape communication as a multi-faceted social inference process that integrates (a) information contained in the signals that make up an utterance, (b) the relationship between communicative partners and (c) the social context. This model makes accurate qualitative and quantitative predictions about real-world communicative interactions between semi-wild-living chimpanzees. When enriched with a pragmatic reasoning process, the model explains repeatedly reported differences between humans and great apes in the interpretation of ambiguous signals (e.g. pointing or iconic gestures). This approach has direct implications for observational and experimental studies of great ape communication and provides a new tool for theorizing about the evolution of uniquely human communication. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
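The kind of model the abstract describes, integrating (a) signal information, (b) the partner relationship and (c) the social context, can be loosely illustrated as a Bayesian combination of a signal likelihood with a context-dependent prior over intended meanings. This is only a sketch of the general idea, not the authors' actual model; all signals, meanings and probabilities below are invented.

```python
# Illustrative sketch of communication as contextual social inference:
# P(meaning | signal, context) ∝ P(signal | meaning) * P(meaning | context).
# All signals, meanings, and numbers are invented for illustration.

def posterior(signal, context, likelihood, prior):
    """Combine a signal likelihood with a context-dependent prior over meanings."""
    unnorm = {m: likelihood[m].get(signal, 0.0) * p for m, p in prior[context].items()}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

# How likely each intended meaning is to be expressed with each signal
likelihood = {
    "play": {"arm-raise": 0.6, "ground-slap": 0.4},
    "groom": {"arm-raise": 0.5, "ground-slap": 0.1},
}

# The social context shifts the prior over intended meanings
prior = {
    "play-context": {"play": 0.8, "groom": 0.2},
    "rest-context": {"play": 0.2, "groom": 0.8},
}

# The same ambiguous signal is interpreted differently in different contexts
print(posterior("arm-raise", "play-context", likelihood, prior))
print(posterior("arm-raise", "rest-context", likelihood, prior))
```

The point of the sketch is that an ambiguous signal ("arm-raise") receives different most-probable interpretations depending on context alone, which is the sense in which interpretation is a contextual inference rather than a signal-by-signal lookup.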
Affiliation(s)
- Manuel Bohn, Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
- Katja Liebal, Institute of Biology, Leipzig University, 04103 Leipzig, Germany
- Linda Oña, Naturalistic Social Cognition Group, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Michael Henry Tessler, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA
27
Heesen R, Fröhlich M, Sievers C, Woensdregt M, Dingemanse M. Coordinating social action: a primer for the cross-species investigation of communicative repair. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210110. [PMID: 35876201] [PMCID: PMC9310172] [DOI: 10.1098/rstb.2021.0110]
Abstract
Human joint action is inherently cooperative, manifested in the collaborative efforts of participants to minimize communicative trouble through interactive repair. Although interactive repair requires sophisticated cognitive abilities, it can be dissected into basic building blocks shared with non-human animal species. A review of the primate literature shows that interactionally contingent signal sequences are at least common among species of non-human great apes, suggesting a gradual evolution of repair. To pioneer a cross-species assessment of repair this paper aims at (i) identifying necessary precursors of human interactive repair; (ii) proposing a coding framework for its comparative study in humans and non-human species; and (iii) using this framework to analyse examples of interactions of humans (adults/children) and non-human great apes. We hope this paper will serve as a primer for cross-species comparisons of communicative breakdowns and how they are repaired. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Marlen Fröhlich, Department of Anthropology, University of Zurich, Zurich, Switzerland; Paleoanthropology, Institute of Archaeological Sciences, Senckenberg Center for Human Evolution and Paleoenvironment, University of Tübingen, Germany
- Marieke Woensdregt, Department of Philosophy, Classics, History of Art and Ideas, University of Oslo, Oslo, Norway
- Mark Dingemanse, Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
28
Heesen R, Fröhlich M. Revisiting the human 'interaction engine': comparative approaches to social action coordination. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210092. [PMID: 35876207] [PMCID: PMC9315451] [DOI: 10.1098/rstb.2021.0092]
Abstract
The evolution of language was likely facilitated by a special predisposition for social interaction, involving a set of communicative and cognitive skills summarized as the 'interaction engine'. This assemblage seems to emerge early in development, to be found universally across cultures, and to enable participation in sophisticated joint action through the addition of spoken language. Yet, new evidence on social action coordination and communication in nonhuman primates warrants an update of the interaction engine hypothesis, particularly with respect to the evolutionary origins of its specific ingredients. However, one enduring problem for comparative research results from a conceptual gulf between disciplines, rendering it difficult to test concepts derived from human interaction research in nonhuman animals. The goal of this theme issue is to make such concepts accessible for comparative research, to promote a fruitful interdisciplinary debate on social action coordination as a new arena of research, and to enable mutual fertilization between human and nonhuman interaction research. In consequence, we here consider relevant theoretical and empirical research within and beyond this theme issue to revisit the interaction engine's shared, convergently derived and uniquely derived ingredients preceding (or perhaps in the last case, succeeding) human language. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Marlen Fröhlich
- Paleoanthropology, Institute for Archaeological Sciences, Senckenberg Center for Human Evolution and Paleoenvironment, University of Tübingen, Tübingen, Germany
- Department of Anthropology, University of Zurich, Zurich, Switzerland
29. Mondada L, Meguerditchian A. Sequence organization and embodied mutual orientations: openings of social interactions between baboons. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210101. PMID: 35876203; PMCID: PMC9310171; DOI: 10.1098/rstb.2021.0101.
Abstract
Human interactions are organized in sequence, which is a key component of Levinson's 'interaction engine.' Referring back to the field where it originated, conversation analysis, we discuss its relevance within the interaction engine, before moving on to show how sequence organization is oriented to not only by humans in social interaction, but also by non-human animals. On the basis of video-recorded encounters between baboons (Papio anubis), we study canonical sequences constituting openings and, within them, greetings. Openings are the locus where future interactants adjust to each other to coordinately enter into interaction, thus achieving a common definition of their context, activity, and relationships. The analysis shows that the ways individuals spatially approach each other provide systematic interactional affordances for how the first sequences of actions in the opening are formatted, initiated, and responded to. Adopting sequential multimodal analysis, we demonstrate how participants orient to central features of sequence organization (its sequential implicativeness and the expectations it produces), building on them their interpretations of others' actions, their responsivity, and their mutual understanding of the ongoing course of action as it unfolds. This paves the way for further reflections on the pervasiveness of the interactional engine in human and non-human primate communication. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Lorenza Mondada
- Department of Linguistics, University of Basel, Maiengasse 51, Basel, CH 4056, Switzerland
- Adrien Meguerditchian
- Laboratoire de Psychologie Cognitive CNRS_UMR7290; Institute of Language, Communication and the Brain, University Aix-Marseille, Aix-Marseille, France
- Station de Primatologie-Celphedia CNRS UAR846, Rousset, France
30. Levinson SC. The interaction engine: cuteness selection and the evolution of the interactional base for language. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210108. PMID: 35876196; PMCID: PMC9310178; DOI: 10.1098/rstb.2021.0108.
Abstract
The deep structural diversity of languages suggests that our language capacities are not based on any single template but rather on an underlying ability and motivation for infants to acquire a culturally transmitted system. The hypothesis is that this ability has an interactional base that has discernable precursors in other primates. In this paper, I explore a specific evolutionary route for the most puzzling aspect of this interactional base in humans, namely the development of an empathetic intentional stance. The route involves a generalization of mother-infant interaction patterns to all adults via a process (cuteness selection) analogous to, but distinct from, RA Fisher's runaway sexual selection. This provides a cornerstone for the carrying capacity for language. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Stephen C. Levinson
- Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Gelderland, The Netherlands
31. Bangerter A, Genty E, Heesen R, Rossano F, Zuberbühler K. Every product needs a process: unpacking joint commitment as a process across species. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210095. PMID: 35876205; PMCID: PMC9310187; DOI: 10.1098/rstb.2021.0095.
Abstract
Joint commitment, the feeling of mutual obligation binding participants in a joint action, is typically conceptualized as arising by the expression and acceptance of a promise. This account limits the possibilities of investigating fledgling forms of joint commitment in actors linguistically less well-endowed than adult humans. The feeling of mutual obligation is one aspect of joint commitment (the product), which emerges from a process of signal exchange. It is gradual rather than binary; feelings of mutual obligation can vary in strength according to how explicit commitments are perceived to be. Joint commitment processes are more complex than simple promising, in at least three ways. They are affected by prior joint actions, which create precedents and conventions that can be embodied in material arrangements of institutions. Joint commitment processes also arise as solutions to generic coordination problems related to opening up, maintaining and closing down joint actions. Finally, during joint actions, additional, specific commitments are made piecemeal. These stack up over time and persist, making it difficult for participants to disengage from joint actions. These complexifications open up new perspectives for assessing joint commitment across species. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Adrian Bangerter
- Institute of Work and Organizational Psychology, University of Neuchâtel, Neuchâtel, Switzerland
- Emilie Genty
- Institute of Work and Organizational Psychology, University of Neuchâtel, Neuchâtel, Switzerland
- Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- Federico Rossano
- Department of Cognitive Science, University of California, San Diego, CA, USA
- Klaus Zuberbühler
- Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
32. Pouw W, Fuchs S. Origins of vocal-entangled gesture. Neurosci Biobehav Rev 2022; 141:104836. PMID: 36031008; DOI: 10.1016/j.neubiorev.2022.104836.
Abstract
Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory-vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal-motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands.
- Susanne Fuchs
- Leibniz Center General Linguistics, Berlin, Germany.
33. Berry M, Lewin S, Brown S. Correlated expression of the body, face, and voice during character portrayal in actors. Sci Rep 2022; 12:8253. PMID: 35585175; PMCID: PMC9117657; DOI: 10.1038/s41598-022-12184-7.
Abstract
Actors are required to engage in multimodal modulations of their body, face, and voice in order to create a holistic portrayal of a character during performance. We present here the first trimodal analysis, to our knowledge, of the process of character portrayal in professional actors. The actors portrayed a series of stock characters (e.g., king, bully) that were organized according to a predictive scheme based on the two orthogonal personality dimensions of assertiveness and cooperativeness. We used 3D motion capture technology to analyze the relative expansion/contraction of 6 body segments across the head, torso, arms, and hands. We compared this with previous results for these portrayals for 4 segments of facial expression and the vocal parameters of pitch and loudness. The results demonstrated significant cross-modal correlations for character assertiveness (but not cooperativeness), as manifested collectively in a straightening of the head and torso, expansion of the arms and hands, lowering of the jaw, and a rise in vocal pitch and loudness. These results demonstrate what communication theorists refer to as “multichannel reinforcement”. We discuss this reinforcement in light of both acting theories and theories of human communication more generally.
Affiliation(s)
- Matthew Berry
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON, L8S 4K1, Canada.
- Sarah Lewin
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON, L8S 4K1, Canada
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, 1280 Main St. West, Hamilton, ON, L8S 4K1, Canada
34. Haiduk F, Fitch WT. Understanding design features of music and language: the choric/dialogic distinction. Front Psychol 2022; 13:786899. PMID: 35529579; PMCID: PMC9075586; DOI: 10.3389/fpsyg.2022.786899.
Abstract
Music and spoken language share certain characteristics: both consist of sequences of acoustic elements that are combinatorically combined, and these elements partition the same continuous acoustic dimensions (frequency, formant space and duration). However, the resulting categories differ sharply: scale tones and note durations of small integer ratios appear in music, while speech uses phonemes, lexical tone, and non-isochronous durations. Why did music and language diverge into the two systems we have today, differing in these specific features? We propose a framework based on information theory and a reverse-engineering perspective, suggesting that design features of music and language are a response to their differential deployment along three different continuous dimensions. These include the familiar propositional-aesthetic ('goal') and repetitive-novel ('novelty') dimensions, and a dialogic-choric ('interactivity') dimension that is our focus here. Specifically, we hypothesize that music exhibits specializations enhancing coherent production by several individuals concurrently-the 'choric' context. In contrast, language is specialized for exchange in tightly coordinated turn-taking-'dialogic' contexts. We examine the evidence for our framework, both from humans and non-human animals, and conclude that many proposed design features of music and language follow naturally from their use in distinct dialogic and choric communicative contexts. Furthermore, the hybrid nature of intermediate systems like poetry, chant, or solo lament follows from their deployment in the less typical interactive context.
Affiliation(s)
- Felix Haiduk
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- W. Tecumseh Fitch
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
35.
Affiliation(s)
- Elisa Demuru
- Laboratoire Dynamique Du Langage, University of Lyon 2, CNRS UMR 5596, Lyon, France
- Équipe de Neuro-Éthologie Sensorielle, University of Lyon/Saint-Étienne, ENES/CRNL, CNRS UMR 5292, Inserm UMR S 1028, Saint-Étienne, France
- Cristina Giacoma
- Laboratoire Dynamique Du Langage, University of Lyon 2, CNRS UMR 5596, Lyon, France
- Department of Life Sciences and Systems Biology, University of Torino, Torino, Italy
36. Trujillo JP, Levinson SC, Holler J. A multi-scale investigation of the human communication system's response to visual disruption. R Soc Open Sci 2022; 9:211489. PMID: 35425638; PMCID: PMC9006025; DOI: 10.1098/rsos.211489.
Abstract
In human communication, when speech is disrupted, the visual channel (e.g. manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that significantly bears on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduce visibility during casual conversational interaction and measure the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multi-modal communicative behaviours are flexibly adapted at multiple scales of measurement and question the notion that gesture plays an inferior role to speech.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
- Stephen C. Levinson
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
37. Rasenberg M, Özyürek A, Bögels S, Dingemanse M. The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes 2022. DOI: 10.1080/0163853x.2021.1992235.
Affiliation(s)
- Marlou Rasenberg
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Department of Communication and Cognition, Tilburg University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Mark Dingemanse
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
38.
Abstract
It has been suggested that social structure affects the degree of lexical variation in sign language emergence. Evidence from signing communities supports this, with smaller, more insular communities typically displaying a higher degree of lexical variation compared to larger, more dispersed and diverse communities. Though several factors have been proposed to affect the degree of variation, here we focus on how shared context, by enabling the use of iconic signs, facilitates the retention of lexical variation in language emergence. As interlocutors with the same background have similar salient features for real-world concepts, shared context allows for the successful communication of iconic mappings between form and culturally salient features (i.e., the meaning specific to an individual based on their cultural context). Because in this case the culturally salient features can be retrieved from the form, there is less pressure to converge on a single form for a concept. We operationalize the relationship between lexical variation and iconic affordances using an agent-based model, studying how shared context and population size affect the degree of lexical variation in a population of agents. Our model provides support for the relationship between shared context, population size and lexical variation, though several extensions would help improve its explanatory power.
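The agent-based operationalization described in this abstract can be sketched in a few lines. The following is a hypothetical toy illustration, not the authors' model: the update rule, the parameter names and their values (`shared_context`, `n_forms`, adoption-on-failure) are all assumptions made for exposition.

```python
import random

def simulate(pop_size, shared_context, n_rounds=5000, n_forms=20, seed=0):
    """Toy sketch: each agent holds one form (an int) for a single concept.
    With probability `shared_context`, an iconic form is understood even when
    it differs from the hearer's own, so no convergence pressure applies;
    otherwise the hearer adopts the speaker's form."""
    rng = random.Random(seed)
    forms = [rng.randrange(n_forms) for _ in range(pop_size)]
    for _ in range(n_rounds):
        speaker, hearer = rng.sample(range(pop_size), 2)
        if forms[speaker] != forms[hearer] and rng.random() > shared_context:
            forms[hearer] = forms[speaker]  # communication failed: converge
    return len(set(forms))  # lexical variation = number of distinct forms

# In this toy setup, more shared context should retain more variation.
low_context = simulate(pop_size=50, shared_context=0.1)
high_context = simulate(pop_size=50, shared_context=0.9)
```

On this scheme, convergence pressure applies only when iconicity fails to carry the meaning, so higher `shared_context` tends to leave more distinct forms in circulation; the effect of population size can be probed in the same way through `pop_size`.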
39.
Abstract
Human expression is open-ended, versatile, and diverse, ranging from ordinary language use to painting, from exaggerated displays of affection to micro-movements that aid coordination. Here we present and defend the claim that this expressive diversity is united by an interrelated suite of cognitive capacities, the evolved functions of which are the expression and recognition of informative intentions. We describe how evolutionary dynamics normally leash communication to narrow domains of statistical mutual benefit, and how expression is unleashed in humans. The relevant cognitive capacities are cognitive adaptations to living in a partner choice social ecology; and they are, correspondingly, part of the ordinarily developing human cognitive phenotype, emerging early and reliably in ontogeny. In other words, we identify distinctive features of our species' social ecology to explain how and why humans, and only humans, evolved the cognitive capacities that, in turn, lead to massive diversity and open-endedness in means and modes of expression. Language use is but one of these modes of expression, albeit one of manifestly high importance. We make cross-species comparisons, describe how the relevant cognitive capacities can evolve in a gradual manner, and survey how unleashed expression facilitates not only language use, but also novel behaviour in many other domains too, focusing on the examples of joint action, teaching, punishment, and art, all of which are ubiquitous in human societies but relatively rare in other species. Much of this diversity derives from graded aspects of human expression, which can be used to satisfy informative intentions in creative and new ways. We aim to help reorient cognitive pragmatics as a phenomenon that is not a supplement to linguistic communication and on the periphery of language science, but rather the foundation of many of the most distinctive features of human behaviour, society, and culture.
40. Pougnault L, Levréro F, Leroux M, Paulet J, Bombani P, Dentressangle F, Deruti L, Mulot B, Lemasson A. Social pressure drives "conversational rules" in great apes. Biol Rev Camb Philos Soc 2021; 97:749-765. PMID: 34873806; DOI: 10.1111/brv.12821.
Abstract
In the last decade, two hypotheses, one on the evolution of animal vocal communication in general and the other on the origins of human language, have gained ground. The first hypothesis argues that the complexity of communication co-evolved with the complexity of sociality. Species forming larger groups with complex social networks have more elaborate vocal repertoires. The second hypothesis posits that the core of communication is represented not only by what can be expressed by an isolated caller, but also by the way that vocal interactions are structured, language being above all a social act. Primitive forms of conversational rules based on a vocal turn-taking principle are thought to exist in primates. To support and bring together these hypotheses, more comparative studies of socially diverse species at different levels of the primate phylogeny are needed. However, the majority of available studies focus on monkeys, primates that are distant from the human lineage. Great apes represent excellent candidates for such comparative studies because of their phylogenetic proximity to humans and their varied social lives. We propose that studying vocal turn-taking in apes could address several major gaps regarding the social relevance of vocal turn-taking and the evolutionary trajectory of this behaviour among anthropoids. Indeed, how the social structure of a species may influence the vocal interaction patterns observed among group members remains an open question. We gathered data from the literature as well as original unpublished data (where absent in the literature) on four great ape species: chimpanzees Pan troglodytes, bonobos Pan paniscus, western lowland gorillas Gorilla gorilla gorilla and Bornean orang-utans Pongo pygmaeus. We found no clear-cut relationship between classical social complexity metrics (e.g. number of group members, interaction rates) and vocal complexity parameters (e.g. repertoire size, call rates). Nevertheless, the nature of the society (i.e. group composition, diversity and valence of social bonds) and the type of vocal interaction patterns (isolated calling, call overlap, turn-taking-based vocal exchanges) do appear to be related. Isolated calling is the main vocal pattern found in the species with the smallest social networks (orang-utan), while the other species show vocal interactions that are structured according to temporal rules. A high proportion of overlapping vocalisations is found in the most competitive species (chimpanzee), while vocal turn-taking predominates in more tolerant bonobos and gorillas. Also, preferentially interacting individuals and call types used to interact are not randomly distributed. Vocal overlap ('chorusing') and vocal exchange ('conversing') appear as possible social strategies used to advertise/strengthen social bonds. Our analyses highlight that: (i) vocal turn-taking is also observed in non-human great apes, revealing universal rules for conversing that may be deeply rooted in the primate lineage; (ii) vocal interaction patterns match the species' social lifestyle; (iii) although limited to four species here, adopting a targeted comparative approach could help to identify the multiple and subtle factors underlying social and vocal complexity. We believe that vocal interaction patterns form the basis of a promising field of investigation that may ultimately improve our understanding of the socially driven evolution of communication.
Affiliation(s)
- Loïc Pougnault
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie animale et humaine) - UMR 6552, 263 avenue du Général Leclerc, Rennes, 35042, France
- Université de Lyon/Saint-Etienne, CNRS, Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, UMR5292, INSERM UMR_S 1028, 23 rue Paul Michelon, Saint-Etienne, 42023, France
- ZooParc de Beauval & Beauval Nature, Avenue du Blanc, Saint Aignan, 41110, France
- Florence Levréro
- Université de Lyon/Saint-Etienne, CNRS, Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, UMR5292, INSERM UMR_S 1028, 23 rue Paul Michelon, Saint-Etienne, 42023, France
- Maël Leroux
- Department of Comparative Linguistics, University of Zürich, Thurgauerstrasse 30, Zürich-Oerlikon, 8050, Switzerland
- Budongo Conservation Field Station, Masindi, Uganda
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zürich, Plattenstrasse 54, Zürich, 8032, Switzerland
- Julien Paulet
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie animale et humaine) - UMR 6552, 263 avenue du Général Leclerc, Rennes, 35042, France
- Pablo Bombani
- NGO Mbou-Mon-Tour, Nkala, Territoire de Bolodo, Maï-Ndombe, Democratic Republic of the Congo
- Fabrice Dentressangle
- Université de Lyon/Saint-Etienne, CNRS, Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, UMR5292, INSERM UMR_S 1028, 23 rue Paul Michelon, Saint-Etienne, 42023, France
- Laure Deruti
- Université de Lyon/Saint-Etienne, CNRS, Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, UMR5292, INSERM UMR_S 1028, 23 rue Paul Michelon, Saint-Etienne, 42023, France
- Baptiste Mulot
- ZooParc de Beauval & Beauval Nature, Avenue du Blanc, Saint Aignan, 41110, France
- Alban Lemasson
- Univ Rennes, Normandie Univ, CNRS, EthoS (Éthologie animale et humaine) - UMR 6552, 263 avenue du Général Leclerc, Rennes, 35042, France
- Institut Universitaire de France, 1 rue Descartes, Paris, 75231, France
42. Murgiano M, Motamedi Y, Vigliocco G. Situating language in the real world: the role of multimodal iconicity and indexicality. J Cogn 2021; 4:38. PMID: 34514309; PMCID: PMC8396123; DOI: 10.5334/joc.113.
Abstract
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in the speech (for spoken languages) or manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gestures, eye gaze etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use and we discuss their function. We then move to argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.
43. Trujillo J, Özyürek A, Holler J, Drijvers L. Speakers exhibit a multimodal Lombard effect in noise. Sci Rep 2021; 11:16721. PMID: 34408178; PMCID: PMC8373897; DOI: 10.1038/s41598-021-95791-0.
Abstract
In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
Affiliation(s)
- James Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands.
- Asli Özyürek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
44. Nota N, Trujillo JP, Holler J. Facial signals and social actions in multimodal face-to-face interaction. Brain Sci 2021; 11:1017. PMID: 34439636; PMCID: PMC8392358; DOI: 10.3390/brainsci11081017.
Abstract
In a conversation, recognising the speaker's social action (e.g., a request) early may help potential next speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals, and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- James P. Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
45
Zhang Y, Frassinelli D, Tuomainen J, Skipper JI, Vigliocco G. More than words: word predictability, prosody, gesture and mouth movements in natural language comprehension. Proc Biol Sci 2021; 288:20210500. [PMID: 34284631] [PMCID: PMC8292779] [DOI: 10.1098/rspb.2021.0500]
Abstract
The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments as dominant paradigms focus on linguistic processing only. In two studies we presented video-clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. Thus, results show that multimodal cues are integral to comprehension, hence, our theories must move beyond the limited focus on speech and linguistic processing.
Affiliation(s)
- Ye Zhang
- Experimental Psychology, University College London, London, UK
- Diego Frassinelli
- Department of Linguistics, University of Konstanz, Konstanz, Germany
- Jyrki Tuomainen
- Experimental Psychology, Speech, Hearing and Phonetic Sciences, University College London, London, UK
46
Acarturk C, Indurkya B, Nawrocki P, Sniezynski B, Jarosz M, Usal KA. Gaze aversion in conversational settings: An investigation based on mock job interview. J Eye Mov Res 2021; 14. [PMID: 34122746] [PMCID: PMC8188832] [DOI: 10.16910/jemr.14.1.1]
Abstract
We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we conducted the experiment twice, each time with a different set of interviewees. In one, the interviewer's gaze was tracked with an eye tracker; in the other, the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts than the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction and outline some future research problems.
Affiliation(s)
- Cengiz Acarturk
- Department of Cognitive Science, Middle East Technical University, Turkey
- Bipin Indurkya
- Department of Cognitive Science, Jagiellonian University, Poland
- Piotr Nawrocki
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Mateusz Jarosz
- Institute of Computer Science, AGH University of Science and Technology, Poland
- Kerem Alp Usal
- Department of Cognitive Science, Middle East Technical University, Turkey
47
Mondémé C. Why study turn-taking sequences in interspecies interactions? Journal for the Theory of Social Behaviour 2021. [DOI: 10.1111/jtsb.12295]
Affiliation(s)
- Chloé Mondémé
- CNRS (French National Center for Scientific Research), École Normale Supérieure de Lyon, Lyon, France
48
Ćwiek A, Fuchs S, Draxler C, Asu EL, Dediu D, Hiovain K, Kawahara S, Koutalidis S, Krifka M, Lippus P, Lupyan G, Oh GE, Paul J, Petrone C, Ridouane R, Reiter S, Schümchen N, Szalontai Á, Ünal-Logacev Ö, Zeller J, Winter B, Perlman M. Novel vocalizations are understood across cultures. Sci Rep 2021; 11:10108. [PMID: 33980933] [PMCID: PMC8115676] [DOI: 10.1038/s41598-021-89445-4]
Abstract
Linguistic communication requires speakers to mutually agree on the meanings of words, but how does such a system first get off the ground? One solution is to rely on iconic gestures: visual signs whose form directly resembles or otherwise cues their meaning without any previously established correspondence. However, it is debated whether vocalizations could have played a similar role. We report the first extensive cross-cultural study investigating whether people from diverse linguistic backgrounds can understand novel vocalizations for a range of meanings. In two comprehension experiments, we tested whether vocalizations produced by English speakers could be understood by listeners from 28 languages from 12 language families. Listeners from each language were more accurate than chance at guessing the intended referent of the vocalizations for each of the meanings tested. Our findings challenge the often-cited idea that vocalizations have limited potential for iconic representation, demonstrating that in the absence of words people can use vocalizations to communicate a variety of meanings.
Affiliation(s)
- Aleksandra Ćwiek
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Susanne Fuchs
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Christoph Draxler
- Institute of Phonetics and Speech Processing, Ludwig Maximilian University, 80799 Munich, Germany
- Eva Liina Asu
- Institute of Estonian and General Linguistics, University of Tartu, 50090 Tartu, Estonia
- Dan Dediu
- Laboratoire Dynamique Du Langage UMR 5596, Université Lumière Lyon 2, 69363 Lyon, France
- Katri Hiovain
- Department of Digital Humanities, University of Helsinki, 00014 Helsinki, Finland
- Shigeto Kawahara
- The Institute of Cultural and Linguistic Studies, Keio University, Mita, Minato-ku, Tokyo 108-8345, Japan
- Sofia Koutalidis
- Faculty of Linguistics and Literary Studies, Bielefeld University, 33615 Bielefeld, Germany
- Manfred Krifka
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Pärtel Lippus
- Institute of Estonian and General Linguistics, University of Tartu, 50090 Tartu, Estonia
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
- Grace E Oh
- Department of English Language and Literature, Konkuk University, Seoul 05029, South Korea
- Jing Paul
- Asian Studies Program, Agnes Scott College, Decatur, GA 30030, USA
- Caterina Petrone
- Aix-Marseille Université, CNRS, Laboratoire Parole et Langage, UMR 7309, 13100 Aix-en-Provence, France
- Rachid Ridouane
- Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS & Sorbonne Nouvelle, 75005 Paris, France
- Sabine Reiter
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Nathalie Schümchen
- Department of Language and Communication, University of Southern Denmark, 5230 Odense, Denmark
- Ádám Szalontai
- Department of Phonetics, Hungarian Research Centre for Linguistics, 1068 Budapest, Hungary
- Özlem Ünal-Logacev
- School of Health Sciences, Department of Speech and Language Therapy, Istanbul Medipol University, 34810 Istanbul, Turkey
- Jochen Zeller
- School of Arts, Linguistics Discipline, University of KwaZulu-Natal, Durban 4041, South Africa
- Bodo Winter
- Department of English Language & Linguistics, University of Birmingham, Birmingham B15 2TT, UK
- Marcus Perlman
- Department of English Language & Linguistics, University of Birmingham, Birmingham B15 2TT, UK
49
Abstract
This study investigated emoji semantic processing by measuring changes in event-related electroencephalogram (EEG) power. The last segment of experimental sentences was designed as either words or emojis, consistent or inconsistent with the sentential context. The results showed that incongruent emojis led to a conspicuous increase of theta power (4–7 Hz), while incongruent words induced a decrease. Furthermore, with emojis the theta power increase was observed at midfrontal, occipital and bilateral temporal lobes. This suggests a higher working-memory load for error monitoring, and greater difficulty of form recognition and concept retrieval, in emoji semantic processing. It implies that different neuro-cognitive processes are involved in the semantic processing of emojis and words.
50
Singletary B. Learning Through Shared Care: Allomaternal Care Impacts Cognitive Development in Early Infancy in a Western Population. Hum Nat 2021; 32:326-362. [PMID: 33970458] [DOI: 10.1007/s12110-021-09395-8]
Abstract
This study investigates how allomaternal care (AMC) impacts human development outside of energetics by evaluating relations between important qualitative and quantitative aspects of AMC and developmental outcomes in a Western population. This study seeks to determine whether there are measurable differences in cognitive and language outcomes as predicted by differences in exposure to AMC via formal (e.g., childcare facilities) and informal (e.g., family and friends) networks. Data were collected from 102 mothers and their typically developing infants aged 13-18 months. AMC predictor data were collected using questionnaires, structured daily diaries, and longitudinal interviews. Developmental outcomes were assessed using the Cognitive, Receptive Language, and Expressive Language subtests of the Bayley III Screening Test. Additional demographic covariates were also evaluated. Akaike Information Criterion (AIC)-informed model selection was used to identify the best-fitting model for each outcome across three working linear regression models. Although AMC variables had no significant effects on Receptive and Expressive Language subtest scores, highly involved familial AMC had a significant medium effect on Cognitive subtest score (β = 0.23, p < 0.01, semi-partial r = 0.28). Formal childcare had no effect on any outcome. This study provides preliminary evidence that there is a measurable connection between AMC and cognitive development in some populations and provides a methodological base from which to assess these connections cross-culturally through future studies. As these effects are attributable to AMC interactions with networks of mostly related individuals, these findings present an area for further investigation regarding the kin selection hypothesis for AMC.
Affiliation(s)
- Britt Singletary
- School of Anthropology, University of Arizona, Tucson, AZ, US
- Crane Center for Early Childhood Research & Policy, The Ohio State University, 175 E. 7th Avenue, Columbus, OH 43201, US