1
Cosper SH, Männel C, Mueller JL. Auditory associative word learning in adults: The effects of musical experience and stimulus ordering. Brain Cogn 2024;180:106207. PMID: 39053199. DOI: 10.1016/j.bandc.2024.106207.
Abstract
Evidence for sequential associative word learning in the auditory domain has been identified in infants, whereas adults have shown difficulties. To better understand which factors may facilitate adult auditory associative word learning, we assessed the role of auditory expertise as a learner-related property and of stimulus order as a stimulus-related manipulation in the association of auditory objects and novel labels. In the first experiment, we compared auditorily trained musicians with athletes (a high-level control group); in the second, we manipulated stimulus ordering, contrasting object-label versus label-object presentation. Learning was evaluated from event-related potentials (ERPs) during training and subsequent testing phases using a cluster-based permutation approach, as well as from accuracy-judgement responses at test. Musicians showed a late positive component in the ERP during testing, but neither an N400 (400-800 ms) nor behavioral effects at test, while athletes showed no effect of learning. Moreover, the object-label ordering group exhibited only emerging association effects during training, while the label-object ordering group showed a trend-level late ERP effect (800-1200 ms) during test as well as above-chance accuracy-judgement scores. Our results thus suggest that the learner-related property of auditory expertise and the stimulus-related manipulation of stimulus ordering modulate auditory associative word learning in adults.
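As a concrete illustration of the cluster-based permutation approach used to evaluate the ERP effects described above, here is a minimal sketch with MNE-Python on simulated single-channel data; the array shapes, effect window and parameter values are invented for the example and are not taken from the study.

```python
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(42)

# Simulated ERP amplitudes: (observations x time points) per condition,
# with a late positive deflection injected into the "trained" condition.
n_obs, n_times = 20, 300
trained = rng.normal(0.0, 1.0, (n_obs, n_times))
untrained = rng.normal(0.0, 1.0, (n_obs, n_times))
trained[:, 200:260] += 0.8  # hypothetical effect window

# Clusters are contiguous runs of time points whose F values exceed the
# threshold; each cluster's summed statistic is compared against a null
# distribution built by permuting condition labels (tail=1 for F stats).
F_obs, clusters, cluster_pv, H0 = permutation_cluster_test(
    [trained, untrained], n_permutations=1000, tail=1, seed=42)

for i, p in enumerate(cluster_pv):
    print(f"cluster {i}: p = {p:.3f}")
```

Testing at the cluster level rather than at every time point is what lets this approach control for multiple comparisons across densely sampled ERP waveforms.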
Affiliation(s)
- Samuel H Cosper
- Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany.
- Claudia Männel
- Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Berlin, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jutta L Mueller
- Department of Linguistics, University of Vienna, Vienna, Austria
2
Ter Bekke M, Levinson SC, van Otterdijk L, Kühn M, Holler J. Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition 2024;248:105806. PMID: 38749291. DOI: 10.1016/j.cognition.2024.105806.
Abstract
The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, many of which result from the fast-paced nature of conversation. One core ingredient of turn coordination is the anticipation of upcoming turn ends, which allows one to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, again especially for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
Affiliation(s)
- Marlijn Ter Bekke
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Lina van Otterdijk
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Michelle Kühn
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
3
Dafreville M, Guidetti M, Bourjade M. Attention-sensitive signalling by 7- to 20-month-old infants in a comparative perspective. Front Psychol 2024;15:1257324. PMID: 38562240. PMCID: PMC10982422. DOI: 10.3389/fpsyg.2024.1257324.
Abstract
Attention-sensitive signalling is the pragmatic skill of signallers who adjust the modality of their communicative signals to their recipient's attention state. This study provides the first comprehensive evidence for its onset and development in 7- to 20-month-old human infants, and underlines its significance for language acquisition and evolutionary history. Mother-infant dyads (N = 30) were studied in naturalistic settings, sampled according to three developmental periods (in months): 7-10, 11-14, and 15-20. Infants' signals were classified by dominant perceptible sensory modality, and their proportions were compared according to the mother's visual attention, infant-directed speech and tactile contact. Maternal visual attention and infant-directed speech influenced the onset and steepness of infants' communicative adjustments. The ability to inhibit silent-visual signals towards visually inattentive mothers (unimodal adjustment) predated the ability to deploy audible-or-contact signals in this case (cross-modal adjustment). Maternal scaffolding of infants' early pragmatic skills through infant-directed speech operates through the facilitation of unimodal adjustment, the preference for oral over gestural signals, and audio-visual combinations of signals. Additionally, breakdowns in maternal visual attention were associated with increased use of the audible-oral modality. The evolutionary role of the sharing of attentional resources between parents and infants in the emergence of modern language is discussed.
Affiliation(s)
- Marie Bourjade
- CLLE, Université de Toulouse, CNRS, Toulouse, France
- Institut Universitaire de France, Paris, France
4
Trujillo JP, Holler J. Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Sci Rep 2024;14:2286. PMID: 38280963. PMCID: PMC10821935. DOI: 10.1038/s41598-024-52589-0.
Abstract
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meanings than utterances accompanied by single visual signals. However, responses to combinations of signals were more similar to responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.
Affiliation(s)
- James P Trujillo
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands.
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
5
Raghavan R, Raviv L, Peeters D. What's your point? Insights from virtual reality on the relation between intention and action in the production of pointing gestures. Cognition 2023;240:105581. PMID: 37573692. DOI: 10.1016/j.cognition.2023.105581.
Abstract
Human communication involves the process of translating intentions into communicative actions. But how exactly do our intentions surface in the visible communicative behavior we display? Here we focus on pointing gestures, a fundamental building block of everyday communication, and investigate whether and how different types of underlying intent modulate the kinematics of the pointing hand and the brain activity preceding the gestural movement. In a dynamic virtual reality environment, participants pointed at a referent to either share attention with their addressee, inform their addressee, or get their addressee to perform an action. Behaviorally, it was observed that these different underlying intentions modulated how long participants kept their arm and finger still, both prior to starting the movement and when keeping their pointing hand in apex position. In early planning stages, a neurophysiological distinction was observed between a gesture that is used to share attitudes and knowledge with another person versus a gesture that mainly uses that person as a means to perform an action. Together, these findings suggest that our intentions influence our actions from the earliest neurophysiological planning stages to the kinematic endpoint of the movement itself.
Affiliation(s)
- Renuka Raghavan
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behavior, Nijmegen, The Netherlands
- Limor Raviv
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Social, Cognitive and Affective Neuroscience (cSCAN), University of Glasgow, United Kingdom
- David Peeters
- Tilburg University, Department of Communication and Cognition, TiCC, Tilburg, The Netherlands.
6
Trujillo JP, Holler J. Interactionally embedded gestalt principles of multimodal human communication. Perspect Psychol Sci 2023;18:1136-1159. PMID: 36634318. PMCID: PMC10475215. DOI: 10.1177/17456916221141422.
Abstract
Natural human interaction requires us to produce and process many different signals, including speech, hand and head gestures, and facial expressions. These communicative signals, which occur in a variety of temporal relations with each other (e.g., parallel or temporally misaligned), must be rapidly processed as a coherent message by the receiver. In this contribution, we introduce the notion of interactionally embedded, affordance-driven gestalt perception as a framework that can explain how this rapid processing of multimodal signals is achieved as efficiently as it is. We discuss empirical evidence showing how basic principles of gestalt perception can explain some aspects of unimodal phenomena such as verbal language processing and visual scene perception but require additional features to explain multimodal human communication. We propose a framework in which high-level gestalt predictions are continuously updated by incoming sensory input, such as unfolding speech and visual signals. We outline the constituent processes that shape high-level gestalt perception and their role in perceiving relevance and prägnanz. Finally, we provide testable predictions that arise from this multimodal interactionally embedded gestalt-perception framework. This review and framework therefore provide a theoretically motivated account of how we may understand the highly complex, multimodal behaviors inherent in natural social interaction.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
7
Carston R. The relevance of words and the language/communication divide. Front Psychol 2023;14:1187343. PMID: 37575430. PMCID: PMC10419294. DOI: 10.3389/fpsyg.2023.1187343.
Abstract
First, the wide applicability of the relevance-theoretic pragmatic account of how new (ad hoc) senses of words and new (ad hoc) words arise spontaneously in communication/comprehension is demonstrated. The lexical pragmatic processes of meaning modulation and metonymy are shown to apply equally to simple words, noun to verb 'conversions', and morphologically complex cases with non-compositional (atomic) meanings. Second, this pragmatic account is situated within a specific view of the cognitive architecture of language and communication, with the formal side of language, its recursive combinatorial system, argued to have different developmental, evolutionary and cognitive characteristics from the meaning side of language, which is essentially pragmatic/communicative. Words straddle the form/meaning (syntax/pragmatics) divide: on the one hand, they are phrasal structures, consisting of a root and variable numbers of functors, with no privileged status in the syntax; on the other hand, they are salient to language users as basic units of communication and are stored as such, in a communication lexicon, together with their families of related senses, which originated as cases of pragmatically derived (ad hoc) senses but have become established, due to their communicative efficacy and frequency of use. Third, in an attempt to find empirical evidence for the proposed linguistic form-meaning divide, two very different cases of atypical linguistic and communicative development are considered: autistic children and deaf children who develop Homesign. The morpho-syntax (the formal side of language) appears to unfold in much the same way in both cases and is often not much different from that of typically developing children, but they diverge markedly from each other in their communication/pragmatics and their development of a system (a lexicon) of meaningful words/signs.
Affiliation(s)
- Robyn Carston
- Linguistics, University College London, London, United Kingdom
8
Wei W, Jiang Z. A bibliometrix-based visualization analysis of international studies on conversations of people with aphasia: Present and prospects. Heliyon 2023;9:e16839. PMID: 37346333. PMCID: PMC10279826. DOI: 10.1016/j.heliyon.2023.e16839.
Abstract
In recent years, the number of people with aphasia due to brain lesions has increased rapidly worldwide, prompting researchers to study the pathogenesis, triggers and prognosis of aphasia in depth from the perspectives of neurology, clinical medicine, psychology and other disciplines. As research on and understanding of aphasia have deepened, it has become widely accepted that a single discipline can no longer meet the needs of the field; multidisciplinary integration has therefore emerged and achieved fruitful results. Using the biblioshiny package in R, this paper conducts a bibliometric analysis of the international, interdisciplinary research on conversation and aphasia, predicts its future directions, and provides a reference for related domestic research based on international source journals. The results indicate that, led by Australia, the United Kingdom and the United States, international research on conversation in aphasia has formed a complete system organized around two strands: descriptive studies of patients with language disorders and applied studies of rehabilitation treatment. Future work, while continuing these two strands, may also take into account the empathic abilities of conversation partners and medical staff, in order to better contribute to improving patients' quality of life.
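The authors worked with the R-based biblioshiny/bibliometrix toolchain; that exact workflow is not reproduced here, so the following Python sketch only illustrates the kind of keyword co-occurrence counting that underlies such bibliometric maps (the records and the field name are invented for the example):

```python
from collections import Counter
from itertools import combinations

# Toy stand-ins for exported bibliographic records (invented examples).
records = [
    {"keywords": ["aphasia", "conversation analysis", "rehabilitation"]},
    {"keywords": ["aphasia", "conversation analysis", "partner training"]},
    {"keywords": ["aphasia", "quality of life", "rehabilitation"]},
]

# Count how often each pair of keywords appears in the same record; these
# counts are the edge weights of a keyword co-occurrence network.
cooccurrence = Counter()
for rec in records:
    for pair in combinations(sorted(set(rec["keywords"])), 2):
        cooccurrence[pair] += 1

for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {n}")
```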
Affiliation(s)
- Wei Wei
- Graduate School, Xi'an International Studies University, Xi'an, China; School of Foreign Studies, Xi'an Medical University, Xi'an, China
- Zhanhao Jiang
- School of Foreign Languages, Southeast University, Nanjing, China
9
Hintz F, Khoe YH, Strauß A, Psomakas AJA, Holler J. Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension. Cogn Affect Behav Neurosci 2023;23:340-353. PMID: 36823247. PMCID: PMC9949912. DOI: 10.3758/s13415-023-01074-8.
Abstract
In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram from 60 Dutch adults while they watched videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interaction of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words.
Affiliation(s)
- Florian Hintz
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Deutscher Sprachatlas, Philipps University of Marburg, Marburg, Germany
- Yung Han Khoe
- Center for Language Studies, Radboud University, Nijmegen, The Netherlands
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, The Netherlands
10
De Felice S, Hamilton AFDC, Ponari M, Vigliocco G. Learning from others is good, with others is better: the role of social interaction in human acquisition of new knowledge. Philos Trans R Soc Lond B Biol Sci 2023;378:20210357. PMID: 36571126. PMCID: PMC9791495. DOI: 10.1098/rstb.2021.0357.
Abstract
Learning in humans is highly embedded in social interaction: since the very early stages of our lives, we form memories and acquire knowledge about the world from and with others. Yet, within cognitive science and neuroscience, human learning is mainly studied in isolation. The focus of past research in learning has been either exclusively on the learner or (less often) on the teacher, with the primary aim of determining developmental trajectories and/or effective teaching techniques. In fact, social interaction has rarely been explicitly taken as a variable of interest, despite being the medium through which learning occurs, especially in development, but also in adulthood. Here, we review behavioural and neuroimaging research on social human learning, specifically focusing on cognitive models of how we acquire semantic knowledge from and with others, and include both developmental as well as adult work. We then identify potential cognitive mechanisms that support social learning, and their neural correlates. The aim is to outline key new directions for experiments investigating how knowledge is acquired in its ecological niche, i.e. socially, within the framework of the two-person neuroscience approach. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
Affiliation(s)
- Sara De Felice
- Institute of Cognitive Neuroscience, University College London (UCL), 17–19 Alexandra House Queen Square, London WC1N 3AZ, UK
- Antonia F. de C. Hamilton
- Institute of Cognitive Neuroscience, University College London (UCL), 17–19 Alexandra House Queen Square, London WC1N 3AZ, UK
- Marta Ponari
- School of Psychology, University of Kent, Canterbury CT2 7NP, UK
11
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023;17:1108354. PMID: 36816496. PMCID: PMC9932987. DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
12
Understanding conversational interaction in multiparty conversations: the EVA Corpus. Lang Resour Eval 2022. DOI: 10.1007/s10579-022-09627-y.
Abstract
This paper focuses on gaining new knowledge through observation, qualitative analytics, and cross-modal fusion of rich multi-layered conversational features expressed during multiparty discourse. The outlined research stems from the theory that speech and co-speech gestures originate from the same representation; however, the representation is not solely limited to the speech production process. Thus, the nature of how information is conveyed by synchronously fusing speech and gestures must be investigated in detail. This paper therefore introduces an integrated annotation scheme and methodology which opens the opportunity to study verbal (i.e., speech) and non-verbal (i.e., visual cues with a communicative intent) components independently, yet still interconnected over a common timeline; a minimal sketch of such a structure is given below. To analyse this interaction between linguistic, paralinguistic, and non-verbal components in multiparty discourse, and to help improve natural language generation in embodied conversational agents, a high-quality multimodal corpus was built and is presented in detail, consisting of several annotation layers spanning syntax, POS, dialogue acts, discourse markers, sentiment, emotions, non-verbal behaviour, and gesture units. It is the first of its kind for the Slovenian language. Moreover, detailed case studies show the tendency of metadiscourse to coincide with non-verbal behaviour of non-propositional origin. The case analysis further highlights how the newly created conversational model and the corresponding information-rich consistent corpus can be exploited to deepen the understanding of multiparty discourse.
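To make the idea of independently annotated but time-aligned tiers concrete, here is a minimal sketch of such a structure; the tier names and the query helper are invented for illustration and do not reflect the actual EVA Corpus format:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    tier: str     # e.g. "speech", "dialogue_act", "gesture_unit"
    start: float  # seconds on the shared timeline
    end: float
    label: str

# Independent tiers, interconnected only through the common timeline.
annotations = [
    Annotation("speech", 0.0, 1.4, "no, I don't think so"),
    Annotation("dialogue_act", 0.0, 1.4, "disagreement"),
    Annotation("gesture_unit", 0.2, 1.1, "head shake"),
]

def overlapping(items, start, end):
    """Return annotations on any tier that overlap a given time window."""
    return [a for a in items if a.start < end and a.end > start]

# Cross-modal query: what co-occurs with the first half second of speech?
for a in overlapping(annotations, 0.0, 0.5):
    print(a.tier, a.label)
```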
13
Pleyer M, Lepic R, Hartmann S. Compositionality in different modalities: A view from usage-based linguistics. Int J Primatol 2022. DOI: 10.1007/s10764-022-00330-x.
Abstract
The field of linguistics concerns itself with understanding the human capacity for language. Compositionality is a key notion in this research tradition. Compositionality refers to the notion that the meaning of a complex linguistic unit is a function of the meanings of its constituent parts. However, the question as to whether compositionality is a defining feature of human language is a matter of debate: usage-based and constructionist approaches emphasize the pervasive role of idiomaticity in language, and argue that strict compositionality is the exception rather than the rule. We review the major discussion points on compositionality from a usage-based point of view, taking both spoken and signed languages into account. In addition, we discuss theories that aim at accounting for the emergence of compositional language through processes of cultural transmission as well as the debate of whether animal communication systems exhibit compositionality. We argue for a view that emphasizes the analyzability of complex linguistic units, providing a template for accounting for the multimodal nature of human language.
14
Holler J. Visual bodily signals as core devices for coordinating minds in interaction. Philos Trans R Soc Lond B Biol Sci 2022;377:20210094. PMID: 35876208. PMCID: PMC9310176. DOI: 10.1098/rstb.2021.0094.
Abstract
The view put forward here is that visual bodily signals play a core role in human communication and the coordination of minds. Critically, this role goes far beyond referential and propositional meaning. The human communication system that we consider to be the explanandum in the evolution of language thus is not spoken language. It is, instead, a deeply multimodal, multilayered, multifunctional system that developed, and survived, owing to the extraordinary flexibility and adaptability that it endows us with. Beyond their undisputed iconic power, visual bodily signals (manual and head gestures, facial expressions, gaze, torso movements) fundamentally contribute to key pragmatic processes in modern human communication. This contribution becomes particularly evident with a focus that includes non-iconic manual signals, non-manual signals and signal combinations. Such a focus also needs to consider meaning encoded not just via iconic mappings, since kinematic modulations and interaction-bound meaning are additional properties equipping the body with striking pragmatic capacities. Some of these capacities, or their precursors, may have already been present in the last common ancestor we share with the great apes and may qualify as early versions of the components constituting the hypothesized interaction engine. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
Affiliation(s)
- Judith Holler
- Max-Planck-Institut für Psycholinguistik, Nijmegen, The Netherlands
- Donders Centre for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
15
Bohn M, Liebal K, Oña L, Tessler MH. Great ape communication as contextual social inference: a computational modelling perspective. Philos Trans R Soc Lond B Biol Sci 2022;377:20210096. PMID: 35876204. PMCID: PMC9310183. DOI: 10.1098/rstb.2021.0096.
Abstract
Human communication has been described as a contextual social inference process. Research into great ape communication has been inspired by this view to look for the evolutionary roots of the social, cognitive and interactional processes involved in human communication. This approach has been highly productive, yet it is partly compromised by the widespread focus on how great apes use and understand individual signals. This paper introduces a computational model that formalizes great ape communication as a multi-faceted social inference process that integrates (a) information contained in the signals that make up an utterance, (b) the relationship between communicative partners and (c) the social context. This model makes accurate qualitative and quantitative predictions about real-world communicative interactions between semi-wild-living chimpanzees. When enriched with a pragmatic reasoning process, the model explains repeatedly reported differences between humans and great apes in the interpretation of ambiguous signals (e.g. pointing or iconic gestures). This approach has direct implications for observational and experimental studies of great ape communication and provides a new tool for theorizing about the evolution of uniquely human communication. This article is part of the theme issue 'Revisiting the human 'interaction engine': comparative approaches to social action coordination'.
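As a toy illustration of the kind of contextual inference such a model formalizes (this is not the authors' model; all priors and likelihoods are invented), a receiver can combine signal information with a context-dependent prior over goals via Bayes' rule:

```python
# P(goal | signal, context) ∝ P(signal | goal) * P(goal | context).
likelihood = {            # P(signal = "point" | goal), invented numbers
    "request_food": 0.7,
    "request_grooming": 0.3,
}
prior_by_context = {      # P(goal | context), invented numbers
    "feeding_site": {"request_food": 0.8, "request_grooming": 0.2},
    "resting_site": {"request_food": 0.2, "request_grooming": 0.8},
}

def interpret(context):
    """Posterior over goals after observing the pointing signal."""
    unnorm = {g: likelihood[g] * prior_by_context[context][g] for g in likelihood}
    total = sum(unnorm.values())
    return {g: p / total for g, p in unnorm.items()}

# The same ambiguous signal receives different readings in different contexts.
print(interpret("feeding_site"))   # food reading dominates
print(interpret("resting_site"))   # grooming reading dominates
```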
Affiliation(s)
- Manuel Bohn
- Department of Comparative Cultural Psychology, Max Planck Institute for Evolutionary Anthropology, 04103 Leipzig, Germany
- Katja Liebal
- Institute of Biology, Leipzig University, 04103 Leipzig, Germany
- Linda Oña
- Naturalistic Social Cognition Group, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Michael Henry Tessler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139-4307, USA
16
Gautam RD, Devarakonda B. Towards a bioinformational understanding of AI. AI Soc 2022. DOI: 10.1007/s00146-022-01529-5.
17
Davidson R, Randhawa G. The Sign 4 Big Feelings intervention to improve early years outcomes in preschool children: Outcome evaluation. JMIR Pediatr Parent 2022;5:e25086. PMID: 35594062. PMCID: PMC9166658. DOI: 10.2196/25086.
Abstract
Background: Any delays in language development may affect learning, profoundly influencing personal, social, and professional trajectories. The effectiveness of the Sign 4 Big Feelings (S4BF) intervention was investigated by measuring changes in early years outcomes (EYOs) after a 3-month period.
Objective: This study aims to determine whether children's well-being and EYOs significantly improve (beyond typical, expected development) after the S4BF intervention period and whether there are differences between boys and girls in progress achieved.
Methods: An evaluation of the S4BF intervention was conducted with 111 preschool-age children in early years settings in Luton, United Kingdom. Listening, speaking, understanding, and managing feelings and behavior, in addition to the Leuven well-being scale, were assessed in a quasi-experimental study design to measure pre- and postintervention outcomes.
Results: Statistically and clinically significant differences were found for each of the 7 pre- and postmeasures evaluated: words understood and spoken, well-being scores, and the 4 EYO domains. Gender differences were negligible in all analyses.
Conclusions: Children of all abilities may benefit considerably from S4BF, but a language-based intervention of this nature may be transformational for children who are behind developmentally, with English as an additional language, or of lower socioeconomic status.
Trial Registration: ISRCTN Registry ISRCTN42025531; https://doi.org/10.1186/ISRCTN42025531.
Affiliation(s)
- Rosemary Davidson
- Institute for Health Research, University of Bedfordshire, Luton, United Kingdom
- Gurch Randhawa
- Institute for Health Research, University of Bedfordshire, Luton, United Kingdom
18
Martinez Del Rio A, Ferrara C, Kim SJ, Hakgüder E, Brentari D. Identifying the correlations between the semantics and the phonology of American Sign Language and British Sign Language: A vector space approach. Front Psychol 2022;13:806471. PMID: 35369213. PMCID: PMC8966728. DOI: 10.3389/fpsyg.2022.806471.
Abstract
Over the history of research on sign languages, much scholarship has highlighted the pervasive presence of signs whose forms relate to their meaning in a non-arbitrary way. The presence of these forms suggests that sign language vocabularies are shaped, at least in part, by a pressure toward maintaining a link between form and meaning in wordforms. We use a vector space approach to test the ways this pressure might shape sign language vocabularies, examining how non-arbitrary forms are distributed within the lexicons of two unrelated sign languages. Vector space models situate the representations of words in a multi-dimensional space where the distance between words indexes their relatedness in meaning. Using phonological information from the vocabularies of American Sign Language (ASL) and British Sign Language (BSL), we tested whether increased similarity between the semantic representations of signs corresponds to increased phonological similarity. The results of the computational analysis showed a significant positive relationship between phonological form and semantic meaning for both sign languages, which was strongest when the sign language lexicons were organized into clusters of semantically related signs. The analysis also revealed variation in the strength of patterns across the form-meaning relationships seen between phonological parameters within each sign language, as well as between the two languages. This shows that while the connection between form and meaning is not entirely language specific, there are cross-linguistic differences in how these mappings are realized for signs in each language, suggesting that arbitrariness as well as cognitive or cultural influences may play a role in how these patterns are realized. The results of this analysis not only contribute to our understanding of the distribution of non-arbitrariness in sign language lexicons, but also demonstrate a new way that computational modeling can be harnessed in lexicon-wide investigations of sign languages.
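A minimal sketch of the core computation, correlating pairwise semantic distances with pairwise phonological distances over a toy lexicon (the random vectors and binary feature codings are invented stand-ins; the study's representations and its clustering into semantic neighbourhoods are richer):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_signs = 40
semantic = rng.normal(size=(n_signs, 50))      # stand-in word embeddings
phonology = rng.integers(0, 2, (n_signs, 20))  # binary handshape/location/movement features

# Pairwise distances over the same sign inventory, in matching order.
sem_d = pdist(semantic, metric="cosine")
pho_d = pdist(phonology, metric="hamming")

# A positive correlation means semantically close signs also tend to be
# phonologically similar, i.e. a non-arbitrary form-meaning mapping.
rho, p = spearmanr(sem_d, pho_d)
print(f"rho = {rho:.3f}, p = {p:.3f}")
```

Because the observations here are pairwise distances rather than independent data points, a permutation (Mantel-style) test would be needed for valid inference in a real analysis.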
Affiliation(s)
- Casey Ferrara
- Department of Psychology, University of Chicago, Chicago, IL, United States
- Sanghee J Kim
- Department of Linguistics, University of Chicago, Chicago, IL, United States
- Emre Hakgüder
- Department of Linguistics, University of Chicago, Chicago, IL, United States
- Diane Brentari
- Department of Linguistics, University of Chicago, Chicago, IL, United States
19
Trujillo JP, Levinson SC, Holler J. A multi-scale investigation of the human communication system's response to visual disruption. R Soc Open Sci 2022;9:211489. PMID: 35425638. PMCID: PMC9006025. DOI: 10.1098/rsos.211489.
Abstract
In human communication, when the speech is disrupted, the visual channel (e.g. manual gestures) can compensate to ensure successful communication. Whether speech also compensates when the visual channel is disrupted is an open question, and one that significantly bears on the status of the gestural modality. We test whether gesture and speech are dynamically co-adapted to meet communicative needs. To this end, we parametrically reduce visibility during casual conversational interaction and measure the effects on speakers' communicative behaviour using motion tracking and manual annotation for kinematic and acoustic analyses. We found that visual signalling effort was flexibly adapted in response to a decrease in visual quality (especially motion energy, gesture rate, size, velocity and hold-time). Interestingly, speech was also affected: speech intensity increased in response to reduced visual quality (particularly in speech-gesture utterances, but independently of kinematics). Our findings highlight that multi-modal communicative behaviours are flexibly adapted at multiple scales of measurement and question the notion that gesture plays an inferior role to speech.
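As an illustration of the kind of kinematic features named above (a sketch over simulated data; the study's motion-tracking pipeline and feature definitions may differ), speed, motion energy and hold time can be derived from tracked keypoint coordinates:

```python
import numpy as np

rng = np.random.default_rng(2)

fps = 50  # hypothetical motion-tracking frame rate
# Fake 3D wrist trajectory in cm: 200 frames of a random walk.
wrist = np.cumsum(rng.normal(0.0, 0.5, (200, 3)), axis=0)

# Frame-to-frame displacement -> instantaneous speed (cm/s).
displacement = np.linalg.norm(np.diff(wrist, axis=0), axis=1)
speed = displacement * fps

motion_energy = displacement.sum()  # total distance travelled (cm)
peak_velocity = speed.max()
# Hold time: total duration the wrist stays near-still (threshold assumed).
hold_time = (speed < 2.0).sum() / fps

print(f"motion energy = {motion_energy:.1f} cm, "
      f"peak velocity = {peak_velocity:.1f} cm/s, hold time = {hold_time:.2f} s")
```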
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
- Stephen C. Levinson
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD Nijmegen, The Netherlands
20
Mechanisms of associative word learning: Benefits from the visual modality and synchrony of labeled objects. Cortex 2022;152:36-52. DOI: 10.1016/j.cortex.2022.03.020.
21
Bastianello T, Keren-Portnoy T, Majorano M, Vihman M. Infant looking preferences towards dynamic faces: A systematic review. Infant Behav Dev 2022;67:101709. PMID: 35338995. DOI: 10.1016/j.infbeh.2022.101709.
Abstract
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant's attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the baby's development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child's competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
Affiliation(s)
- Marilyn Vihman
- Department of Language and Linguistic Science, University of York, UK
22
What is functional communication? A theoretical framework for real-world communication applied to aphasia rehabilitation. Neuropsychol Rev 2022;32:937-973. PMID: 35076868. PMCID: PMC9630202. DOI: 10.1007/s11065-021-09531-2.
Abstract
Aphasia is an impairment of language caused by acquired brain damage such as stroke or traumatic brain injury, that affects a person’s ability to communicate effectively. The aim of rehabilitation in aphasia is to improve everyday communication, improving an individual’s ability to function in their day-to-day life. For that reason, a thorough understanding of naturalistic communication and its underlying mechanisms is imperative. The field of aphasiology currently lacks an agreed, comprehensive, theoretically founded definition of communication. Instead, multiple disparate interpretations of functional communication are used. We argue that this makes it nearly impossible to validly and reliably assess a person’s communicative performance, to target this behaviour through therapy, and to measure improvements post-therapy. In this article we propose a structured, theoretical approach to defining the concept of functional communication. We argue for a view of communication as “situated language use”, borrowed from empirical psycholinguistic studies with non-brain damaged adults. This framework defines language use as: (1) interactive, (2) multimodal, and (3) contextual. Existing research on each component of the framework from non-brain damaged adults and people with aphasia is reviewed. The consequences of adopting this approach to assessment and therapy for aphasia rehabilitation are discussed. The aim of this article is to encourage a more systematic, comprehensive approach to the study and treatment of situated language use in aphasia.
23
Liebal K, Slocombe KE, Waller BM. The language void 10 years on: multimodal primate communication research is still uncommon. Ethol Ecol Evol 2022. DOI: 10.1080/03949370.2021.2015453.
Affiliation(s)
- Katja Liebal
- Life Sciences, Institute of Biology, Leipzig University, Talstrasse 33, Leipzig 04103, Germany
- Bridget M. Waller
- School of Social Sciences, Nottingham Trent University, Shakespeare Street, Nottingham NG1 4FQ, UK
25
Murgiano M, Motamedi Y, Vigliocco G. Situating language in the real-world: Authors' reply to commentaries. J Cogn 2021;4:44. PMID: 34514315. PMCID: PMC8396114. DOI: 10.5334/joc.181.
26
Murgiano M, Motamedi Y, Vigliocco G. Situating language in the real-world: The role of multimodal iconicity and indexicality. J Cogn 2021;4:38. PMID: 34514309. PMCID: PMC8396123. DOI: 10.5334/joc.113.
Abstract
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in the speech (for spoken languages) or manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gestures, eye gaze etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use and we discuss their function. We then move to argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.
27
Trujillo J, Özyürek A, Holler J, Drijvers L. Speakers exhibit a multimodal Lombard effect in noise. Sci Rep 2021;11:16721. PMID: 34408178. PMCID: PMC8373897. DOI: 10.1038/s41598-021-95791-0.
Abstract
In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
Affiliation(s)
- James Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands.
- Asli Özyürek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
28
Trujillo JP, Holler J. The kinematics of social action: Visual signals provide cues for what interlocutors do in conversation. Brain Sci 2021;11:996. PMID: 34439615. PMCID: PMC8393665. DOI: 10.3390/brainsci11080996.
Abstract
During natural conversation, people must quickly understand the meaning of what the other speaker is saying. This concerns not just the semantic content of an utterance, but also the social action (i.e., what the utterance is doing: requesting information, offering, evaluating, checking mutual understanding, etc.) that the utterance is performing. The multimodal nature of human language raises the question of whether visual signals may contribute to the rapid processing of such social actions. However, while previous research has shown that how we move reveals the intentions underlying instrumental actions, we do not know whether the intentions underlying fine-grained social actions in conversation are also revealed in our bodily movements. Using a corpus of dyadic conversations combined with manual annotation and motion tracking, we analyzed the kinematics of the torso, head, and hands during the asking of questions. Manual annotation categorized these questions into six more fine-grained social action types (i.e., request for information, other-initiated repair, understanding check, stance or sentiment, self-directed, active participation). We demonstrate, for the first time, that the kinematics of the torso, head and hands differ between some of these social action categories, based on a 900 ms time window that captures movements starting slightly before or within 600 ms after utterance onset. These results provide novel insights into the extent to which our intentions shape the way that we move, and provide new avenues for understanding how this phenomenon may facilitate the fast communication of meaning in conversational interaction.
Affiliation(s)
- James P. Trujillo
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 GD Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
29
Hartmann S, Pleyer M. Constructing a protolanguage: reconstructing prehistoric languages in a usage-based construction grammar framework. Philos Trans R Soc Lond B Biol Sci 2021;376:20200200. PMID: 33745320. DOI: 10.1098/rstb.2020.0200.
Abstract
Construction grammar is an approach to language that posits that units and structures in language can be exhaustively described as pairings between form and meaning. These pairings are called constructions and can have different degrees of abstraction, i.e. they span the entire range from very concrete (armadillo, avocado) to very abstract constructions such as the ditransitive construction (I gave her a book). This approach has been applied to a wide variety of different areas of research in linguistics, such as how new constructions emerge and change historically. It has also been applied to investigate the evolutionary emergence of modern fully fledged language, i.e. the question of how systems of constructions can arise out of prelinguistic communication. In this paper, we review the contribution of usage-based construction grammar approaches to language change and language evolution regarding (i) the structure and nature of prehistoric languages and (ii) how constructions in prehistoric languages emerged out of non-linguistic or protolinguistic communication. In particular, we discuss the possibilities of using constructions as the main unit of analysis both in reconstructing predecessors of existing languages (protolanguages) and in formulating theories of what a potential predecessor of human language in general (protolanguage) must have looked like. This article is part of the theme issue 'Reconstructing prehistoric languages'.
Collapse
Affiliation(s)
- Stefan Hartmann
- Germanistische Sprachwissenschaft, University of Düsseldorf, Universitätsstrasse 1, 40225 Düsseldorf, Germany
| | - Michael Pleyer
- Centre for Language Evolution Studies, Nicolaus Copernicus University in Toruń, ul. Gagarina 11, 87-100 Toruń, Poland; University Centre of Excellence IMSErt-Interacting Minds, Societies, Environments, Nicolaus Copernicus University in Toruń, ul. Gagarina 11, 87-100 Toruń, Poland
| |
Collapse
|
30
|
Abstract
Beat gestures-spontaneously produced biphasic movements of the hand-are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
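For illustration, a minimal sketch of how one might test whether stress judgments shift with beat-gesture timing. The counts are fabricated and the 2x2 chi-square test is an assumption for exposition; the authors' actual experiments and analyses are more sophisticated.

```python
from scipy.stats import chi2_contingency

# Hypothetical response counts: how often listeners report initial stress
# ("OBject") versus final stress ("obJECT"), split by whether the beat
# gesture falls on the first or the second syllable.
table = [[68, 32],   # beat on first syllable:  OBject, obJECT
         [41, 59]]   # beat on second syllable: OBject, obJECT

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```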
Collapse
Affiliation(s)
- Hans Rutger Bosker
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| | - David Peeters
- Department of Communication and Cognition, TiCC Tilburg University, Tilburg, The Netherlands
| |
Collapse
|
31
|
Junker FB, Schlaffke L, Axmacher N, Schmidt-Wilcke T. Impact of multisensory learning on perceptual and lexical processing of unisensory Morse code. Brain Res 2021; 1755:147259. [PMID: 33422535 DOI: 10.1016/j.brainres.2020.147259] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 12/17/2020] [Accepted: 12/19/2020] [Indexed: 11/30/2022]
Abstract
Multisensory learning profits from stimulus congruency at different levels of processing. In the current study, we sought to investigate whether multisensory learning can potentially be based on high-level feature congruency (same meaning) without perceptual congruency (same time) and how this relates to changes in brain function and behaviour. Fifty subjects learned to decode Morse code (MC) either in a unisensory manner or in different multisensory manners. During unisensory learning, MC was trained as sequences of auditory trains. For low-level congruent (perceptual) multisensory learning, MC was applied as tactile stimulation to the left hand simultaneously with the auditory stimulation. In contrast, high-level congruent multisensory learning involved auditory training followed by the production of MC sequences requiring motor actions, thereby excluding perceptual congruency. After learning, group differences were observed within three distinct brain regions while processing unisensory (auditory) MC. Both types of multisensory learning were associated with increased activation in the right inferior frontal gyrus. Multisensory low-level learning elicited additional activation in the somatosensory cortex, while multisensory high-level learners showed a reduced activation in the inferior parietal lobule, which is relevant for decoding MC. Furthermore, differences in brain function associated with multisensory learning were related to behavioural reaction times for both multisensory learning groups. Overall, our data support the idea that multisensory learning is potentially based on high-level features without perceptual congruency. Furthermore, learning of multisensory associations involves neural representations of the stimulus features involved in learning, but also shares common brain activation (i.e. the right IFG), which seems to serve as a site of multisensory integration.
Collapse
Affiliation(s)
- F B Junker
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Universitätsstraße 150, D-44801 Bochum, Germany; Department of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Universitätsstraße 1, D-40225 Düsseldorf, Germany
| | - L Schlaffke
- Department for Neurology, BG-University Hospital Bergmannsheil, Bürkle de la Camp-Platz 1, D-44789 Bochum, Germany
| | - N Axmacher
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Universitätsstraße 150, D-44801 Bochum, Germany
| | - T Schmidt-Wilcke
- Department of Clinical Neuroscience and Medical Psychology, Heinrich Heine University, Universitätsstraße 1, D-40225 Düsseldorf, Germany; Department of Neurology, St. Mauritius Clinic, Strümper Str. 111, D-40670 Meerbusch, Germany
| |
Collapse
|
32
|
Davidson R, Randhawa G. The Sign 4 Little Talkers Intervention to Improve Listening, Understanding, Speaking, and Behavior in Hearing Preschool Children: Outcome Evaluation. JMIR Pediatr Parent 2020; 3:e15348. [PMID: 32452813 PMCID: PMC7367544 DOI: 10.2196/15348] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 01/17/2020] [Accepted: 02/12/2020] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Gaining age-appropriate proficiency in speech and language in the early years is crucial to later life chances; however, a significant proportion of children fail to meet the expected standards in these early years outcomes when they start school. Factors influencing the development of language and communication include low income, gender, and having English as an additional language (EAL). OBJECTIVE This study aimed to determine whether the Sign 4 Little Talkers (S4LT) program improves key developmental outcomes in hearing preschool children. S4LT was developed to address gaps in the attainment of vocabulary and communication skills in preschool children, identified through routine monitoring of outcomes in early years. Signs were adapted and incorporated into storybooks to improve vocabulary, communication, and behavior in hearing children. METHODS An evaluation of S4LT was conducted to measure key outcomes pre- and postintervention in 8 early years settings in Luton, United Kingdom. A total of 118 preschool children were tested in 4 early years outcomes domains (listening, speaking, understanding, and managing feelings and behavior), as well as on Leuven well-being scales and the number of key words understood and spoken. RESULTS Statistically significant results were found for all measures tested: words spoken (P<.001) and understood (P<.001), speaking (P<.001), managing feelings and behavior (P<.001), understanding (P<.001), listening and attention (P<.001), and well-being (P<.001). Approximately two-thirds of the children made expected or good progress, often progressing multiple steps in educational attainment after being assessed as developmentally behind at baseline. CONCLUSIONS The findings reported here suggest that S4LT may help children to catch up with their peers at a crucial stage in development and become school ready by improving their command of language and communication as well as learning social skills. Our analysis also highlights specific groups of children who are not responding as well as expected, namely boys with EAL, who require additional, tailored support.
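As a sketch of the kind of paired pre/post comparison such an evaluation implies: the paper does not specify its exact statistical procedure, so the Wilcoxon signed-rank test below is an assumption, and all data are fabricated for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
pre = rng.integers(1, 5, size=118)            # fabricated baseline attainment steps
post = pre + rng.integers(0, 3, size=118)     # most children progress 0-2 steps

# Paired nonparametric comparison of pre vs. post scores.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon W = {stat}, p = {p:.3g}")
```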
Collapse
Affiliation(s)
- Rosemary Davidson
- Institute for Health Research, University of Bedfordshire, Luton, United Kingdom
| | - Gurch Randhawa
- Institute for Health Research, University of Bedfordshire, Luton, United Kingdom
| |
Collapse
|
33
|
Grifoni P, Caschera MC, Ferri F. Evaluation of a dynamic classification method for multimodal ambiguities based on Hidden Markov Models. EVOLVING SYSTEMS 2020. [DOI: 10.1007/s12530-020-09344-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
34
|
Panda EJ, Emami Z, Valiante TA, Pang EW. EEG phase synchronization during semantic unification relates to individual differences in children's vocabulary skill. Dev Sci 2020; 24:e12984. [PMID: 32384181 DOI: 10.1111/desc.12984] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 04/17/2020] [Accepted: 04/21/2020] [Indexed: 11/30/2022]
Abstract
As we listen to speech, our ability to understand what was said requires us to retrieve and bind together individual word meanings into a coherent discourse representation. This so-called semantic unification is a fundamental cognitive skill, and its development relies on the integration of neural activity throughout widely distributed functional brain networks. In this proof-of-concept study, we examine, for the first time, how these functional brain networks develop in children. Twenty-six children (ages 4-17) listened to well-formed sentences and sentences containing a semantic violation, while EEG was recorded. Children with stronger vocabulary showed N400 effects that were more concentrated to centroparietal electrodes and greater EEG phase synchrony (phase lag index; PLI) between right centroparietal and bilateral frontocentral electrodes in the delta frequency band (1-3 Hz) 1.27-1.53 s after listening to well-formed sentences compared to sentences containing a semantic violation. These effects related specifically to individual differences in receptive vocabulary, perhaps pointing to greater recruitment of functional brain networks important for top-down semantic unification with development. Less skilled children showed greater delta phase synchrony for violation sentences 3.41-3.64 s after critical word onset. This later effect was partly driven by individual differences in nonverbal reasoning, perhaps pointing to non-verbal compensatory processing to extract meaning from speech in children with less developed vocabulary. We suggest that functional brain network communication, as measured by momentary changes in the phase synchrony of EEG oscillations, develops throughout the school years to support language comprehension in different ways depending on children's verbal and nonverbal skill levels.
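For readers unfamiliar with the phase lag index (PLI), a minimal sketch of how it can be computed for a delta-band channel pair. The signals are fabricated, and the sampling rate and filter settings are assumptions rather than the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def delta_pli(x, y, fs=500.0, band=(1.0, 3.0)):
    """Phase lag index between two channels in one frequency band.

    PLI = |mean(sign(sin(phase_x - phase_y)))| (Stam et al., 2007):
    0 = no consistent lagged phase relation, 1 = one channel
    consistently leads or lags the other.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    phase_x = np.angle(hilbert(sosfiltfilt(sos, x)))
    phase_y = np.angle(hilbert(sosfiltfilt(sos, y)))
    return np.abs(np.mean(np.sign(np.sin(phase_x - phase_y))))

# Fabricated two-channel segment: a shared 2 Hz delta component, one copy lagged.
rng = np.random.default_rng(2)
fs = 500.0
t = np.arange(0, 2.0, 1 / fs)
delta = np.sin(2 * np.pi * 2.0 * t)
x = delta + rng.normal(scale=0.5, size=t.size)
y = np.roll(delta, 25) + rng.normal(scale=0.5, size=t.size)  # 50 ms lag
print(f"delta-band PLI: {delta_pli(x, y, fs=fs):.2f}")
```

Because PLI discards zero-lag phase differences, it is relatively insensitive to volume conduction, which is one reason it is a common connectivity measure in developmental EEG work.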
Collapse
Affiliation(s)
- Erin J Panda
- Neurosciences and Mental Health, SickKids Research Institute, Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, Toronto, ON, Canada; Epilepsy Research Program of the Ontario Brain Institute, Toronto, ON, Canada
| | - Zahra Emami
- Neurosciences and Mental Health, SickKids Research Institute, Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, Toronto, ON, Canada; Division of Neurology, The Hospital for Sick Children, Toronto, ON, Canada
| | - Taufik A Valiante
- Epilepsy Research Program of the Ontario Brain Institute, Toronto, ON, Canada; Krembil Research Institute, University Health Network and Toronto Western Hospital, Toronto, ON, Canada; Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON, Canada
| | - Elizabeth W Pang
- Neurosciences and Mental Health, SickKids Research Institute, Peter Gilgan Centre for Research and Learning, The Hospital for Sick Children, Toronto, ON, Canada; Epilepsy Research Program of the Ontario Brain Institute, Toronto, ON, Canada; Division of Neurology, The Hospital for Sick Children, Toronto, ON, Canada
| |
Collapse
|
35
|
Rossi S, Rossi A, Dautenhahn K. The Secret Life of Robots: Perspectives and Challenges for Robot’s Behaviours During Non-interactive Tasks. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00650-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
36
|
Perceptual modality norms for 1,121 Italian words: A comparison with concreteness and imageability scores and an analysis of their impact in word processing tasks. Behav Res Methods 2020; 52:1599-1616. [DOI: 10.3758/s13428-019-01337-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
|
37
|
Prieur J, Barbu S, Blois‐Heulin C, Lemasson A. The origins of gestures and language: history, current advances and proposed theories. Biol Rev Camb Philos Soc 2019; 95:531-554. [DOI: 10.1111/brv.12576] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 11/30/2019] [Accepted: 12/03/2019] [Indexed: 12/16/2022]
Affiliation(s)
- Jacques Prieur
- Department of Education and Psychology, Comparative Developmental Psychology, Freie Universität Berlin, Berlin, Germany
- Univ Rennes, Normandie Univ, CNRS, EthoS (Ethologie animale et humaine), UMR 6552, F-35380 Paimpont, France
| | - Stéphanie Barbu
- Univ Rennes, Normandie Univ, CNRS, EthoS (Ethologie animale et humaine), UMR 6552, F-35380 Paimpont, France
| | - Catherine Blois-Heulin
- Univ Rennes, Normandie Univ, CNRS, EthoS (Ethologie animale et humaine), UMR 6552, F-35380 Paimpont, France
| | - Alban Lemasson
- Univ Rennes, Normandie Univ, CNRS, EthoS (Ethologie animale et humaine), UMR 6552, F-35380 Paimpont, France
| |
Collapse
|
38
|
Too late to be grounded? Motor resonance for action words acquired after middle childhood. Brain Cogn 2019; 138:105509. [PMID: 31855702 DOI: 10.1016/j.bandc.2019.105509] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 11/23/2019] [Accepted: 12/04/2019] [Indexed: 11/22/2022]
Abstract
Though well established for languages acquired in infancy, the role of embodied mechanisms remains poorly understood for languages learned in middle childhood and adulthood. To bridge this gap, we examined 34 experiments that assessed sensorimotor resonance during processing of action-related words in real and artificial languages acquired since age 7 and into adulthood. Evidence from late bilinguals indicates that foreign-language action words modulate neural activity in motor circuits and predictably facilitate or delay physical movements (even in an effector-specific fashion), with outcomes that prove partly sensitive to language proficiency. Also, data from newly learned vocabularies suggest that embodied effects emerge after brief periods of adult language exposure, remain stable through time, and hinge on the performance of bodily movements (and, seemingly, on action observation, too). In sum, our work shows that infant language exposure is not indispensable for the recruitment of embodied mechanisms during language processing, a finding that carries non-trivial theoretical, pedagogical, and clinical implications for neurolinguistics, in general, and bilingualism research, in particular.
Collapse
|
39
|
MacDonald K, Marchman VA, Fernald A, Frank MC. Children flexibly seek visual information to support signed and spoken language comprehension. J Exp Psychol Gen 2019; 149:1078-1096. [PMID: 31750713 DOI: 10.1037/xge0000702] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, like the ASL learners in Experiment 1, this delay resulted in gathering more visual information and a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension.
Collapse
|
40
|
Rafat Y, Stevenson RA. Auditory-orthographic integration at the onset of L2 speech acquisition. LANGUAGE AND SPEECH 2019; 62:427-451. [PMID: 29905093 DOI: 10.1177/0023830918777537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Recent studies have provided evidence for both a positive and a negative effect of orthography on second language speech learning. However, not much is known about whether orthography can trigger a McGurk-like effect (McGurk & MacDonald, 1976) in second language speech learning. This study examined whether exposure to auditory and orthographic input may lead to a McGurk-like effect in naïve English-speaking participants learning a second language with Spanish phonology and orthography. Specifically, it reports on (a) production of non-target-like combinations such as [lj] as in [poljo] for <pollo>-[pojo], where the auditory Spanish [j] and the first language English [l] that correspond to the shared digraph <ll> are integrated, and (b) fusion quantified in terms of [z] devoicing such as [z̥apito] for <zapito>-[zapito]. Moreover, the effects of (a) type of grapheme-to-sound correspondence, (b) position in the word, and (c) condition of training and testing were examined. Participants were assigned to four groups: (a) auditory only, (b) orthography at training and production, (c) orthography at training, and (d) orthography at production. The positions included word-initial and word-medial. The grapheme-to-sound correspondences consisted of <v>-[b], <d>-[δ], <z>-[s] and <ll>-[j]. Results were indicative of a McGurk-like effect only for the Spanish digraph <ll>. The highest rate of combination productions was attested in the orthography-training condition in the word-medial position.
Collapse
|
41
|
Fröhlich M, Sievers C, Townsend SW, Gruber T, van Schaik CP. Multimodal communication and language origins: integrating gestures and vocalizations. Biol Rev Camb Philos Soc 2019; 94:1809-1829. [PMID: 31250542 DOI: 10.1111/brv.12535] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 05/22/2019] [Accepted: 05/29/2019] [Indexed: 12/21/2022]
Abstract
The presence of divergent and independent research traditions in the gestural and vocal domains of primate communication has resulted in major discrepancies in the definition and operationalization of cognitive concepts. However, in recent years, accumulating evidence from behavioural and neurobiological research has shown that both human and non-human primate communication is inherently multimodal. It is therefore timely to integrate the study of gestural and vocal communication. Herein, we review evidence demonstrating that there is no clear difference between primate gestures and vocalizations in the extent to which they show evidence for the presence of key language properties: intentionality, reference, iconicity and turn-taking. We also find high overlap in the neurobiological mechanisms producing primate gestures and vocalizations, as well as in ontogenetic flexibility. These findings confirm that human language had multimodal origins. Nonetheless, we note that in great apes, gestures seem to fulfil a carrying (i.e. predominantly informative) role in close-range communication, whereas the opposite holds for face-to-face interactions of humans. This suggests an evolutionary shift in the carrying role from the gestural to the vocal stream, and we explore this transition in the carrying modality. Finally, we suggest that future studies should focus on the links between complex communication, sociality and cooperative tendency to strengthen the study of language origins.
Collapse
Affiliation(s)
- Marlen Fröhlich
- Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
| | - Christine Sievers
- Department of Philosophy and Media Studies, Philosophy Seminar, University of Basel, Holbeinstrasse 12, 4051, Basel, Switzerland
| | - Simon W Townsend
- Department of Comparative Linguistics, University of Zurich, Plattenstrasse 54, 8032 Zurich, Switzerland; Department of Psychology, University of Warwick, University Road, CV4 7AL Coventry, UK
| | - Thibaud Gruber
- Swiss Center for Affective Sciences, CISA, University of Geneva, Chemin des Mines 9, 1202 Geneva, Switzerland; Department of Zoology, University of Oxford, 11a Mansfield Road, OX1 3SZ Oxford, UK
| | - Carel P van Schaik
- Department of Anthropology, University of Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
| |
Collapse
|
42
|
Agostini G, SturtzSreetharan C, Wutich A, Williams D, Brewis A. Citizen sociolinguistics: A new method to understand fat talk. PLoS One 2019; 14:e0217618. [PMID: 31141560 PMCID: PMC6541281 DOI: 10.1371/journal.pone.0217618] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Accepted: 05/15/2019] [Indexed: 12/05/2022] Open
Abstract
FAT TALK AND CITIZEN SCIENCE Fat talk is a spontaneous verbal interaction in which interlocutors make self-disparaging comments about the body, usually as a request for assessment. Fat talk often reflects concerns about the self that stem from broader sociocultural factors. It is therefore an important target for sociocultural linguistics. However, real-time studies of fat talk are uncommon due to the resource and time burdens required to capture these fleeting utterances. This limits the scope of data produced using standard sociolinguistic methods. Citizen science may alleviate these burdens by producing a scale of social observation not afforded via traditional methods. Here we present a proof-of-concept for a novel methodology, citizen sociolinguistics. This research approach involves collaborations with citizen researchers to capture forms of conversational data that are typically inaccessible, including fat talk. AIMS AND OUTCOMES This study had two primary aims. Aim 1 focused on scientific output, testing a novel research strategy wherein citizen sociolinguists captured fat talk data in a diverse metropolitan region (Southwestern United States). Results confirm that citizen sociolinguistic research teams captured forms of fat talk that mirrored the scripted responses previously reported. However, they also captured unique forms of fat talk, likely due to greater diversity in sample and sampling environments. Aim 2 focused on the method itself via reflective exercises shared by the citizen sociolinguists throughout the project. In addition to confirming that the citizen sociolinguistic method produces reliable, scientifically valid data, we contend that citizen sociolinguist inclusion has broader scientific benefits, which include applied scientific training, fostering sustained relationships between professional researchers and the public, and producing novel, meaningful scientific output that advances professional discourse.
Collapse
Affiliation(s)
- Gina Agostini
- School of Human Evolution and Social Change, Arizona State University, Tempe, Arizona, United States of America
- College of Dental Medicine, Midwestern University, Glendale, Arizona, United States of America
| | - Cindi SturtzSreetharan
- School of Human Evolution and Social Change, Arizona State University, Tempe, Arizona, United States of America
| | - Amber Wutich
- School of Human Evolution and Social Change, Arizona State University, Tempe, Arizona, United States of America
| | - Deborah Williams
- School of Nutrition and Health Promotion, College of Health Solutions, Arizona State University, Phoenix, Arizona, United States of America
| | - Alexandra Brewis
- School of Human Evolution and Social Change, Arizona State University, Tempe, Arizona, United States of America
| |
Collapse
|
43
|
Alviar C, Dale R, Galati A. Complex Communication Dynamics: Exploring the Structure of an Academic Talk. Cogn Sci 2019; 43:e12718. [PMID: 30900289 DOI: 10.1111/cogs.12718] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Revised: 01/25/2019] [Accepted: 02/04/2019] [Indexed: 11/30/2022]
Abstract
Communication is a multimodal phenomenon. The cognitive mechanisms supporting it are still understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi-automated techniques, we extracted and analyzed, from the videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articulation rate, fundamental frequency, and intensity). There were consistent but statistically subtle patterns in the use of speech rate, articulation rate, intensity, and body motion across the presentation. Principal component analysis also revealed patterns of system-like covariation among modalities. These findings, although tentative, do suggest that the cognitive system is integrating body, slides, and speech in a coordinated manner during natural language use. Further research is needed to clarify the specific coordination patterns that occur between the different modalities.
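A minimal sketch of the covariation analysis described here: principal component analysis over standardized per-window measures. The data, feature names, and window size are fabricated assumptions for illustration, not the authors' dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical per-window measures for one lecture: rows = 10 s windows,
# columns = speech rate, articulation rate, intensity, f0, body motion,
# slide change rate. A shared latent component induces covariation.
rng = np.random.default_rng(3)
drive = rng.normal(size=(180, 1))
features = drive @ rng.normal(size=(1, 6)) + rng.normal(scale=0.8, size=(180, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(features))
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
```

A dominant first component across such features is the kind of "system-like covariation among modalities" the abstract refers to.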
Collapse
Affiliation(s)
- Camila Alviar
- Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles
| | - Rick Dale
- Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles
| | - Alexia Galati
- Cognitive and Information Sciences, University of California, Merced; Department of Communication, University of California, Los Angeles; Department of Psychology, University of Cyprus
| |
Collapse
|
44
|
Cañigueral R, Hamilton AFDC. The Role of Eye Gaze During Natural Social Interactions in Typical and Autistic People. Front Psychol 2019; 10:560. [PMID: 30930822 PMCID: PMC6428744 DOI: 10.3389/fpsyg.2019.00560] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2018] [Accepted: 02/28/2019] [Indexed: 12/13/2022] Open
Abstract
Social interactions involve complex exchanges of a variety of social signals, such as gaze, facial expressions, speech and gestures. Focusing on the dual function of eye gaze, this review explores how the presence of an audience, communicative purpose and temporal dynamics of gaze allow interacting partners to achieve successful communication. First, we focus on how being watched modulates social cognition and behavior. We then show that the study of interpersonal gaze processing, particularly gaze temporal dynamics, can provide valuable understanding of social behavior in real interactions. We propose that the Interpersonal Gaze Processing model, which combines both sensing and signaling functions of eye gaze, provides a framework to make sense of gaze patterns in live interactions. Finally, we discuss how autistic individuals process the belief in being watched and interpersonal dynamics of gaze, and suggest that systematic manipulation of factors modulating gaze signaling can reveal which aspects of social eye gaze are challenging in autism.
Collapse
Affiliation(s)
- Roser Cañigueral
- Institute of Cognitive Neuroscience, Division of Psychology and Language Sciences, University College London, London, United Kingdom
| | | |
Collapse
|
45
|
Abstract
People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words.
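A minimal sketch of the above-chance matching claim as a statistical test. All numbers are fabricated (the paper reports matching above chance but these counts are not its data), and a five-category forced choice with chance = 0.2 is an assumption.

```python
from scipy.stats import binomtest

# Hypothetical: listeners match 8th-generation imitations to 1 of 5
# environmental-sound categories; 74 of 200 trials correct vs. chance 0.2.
result = binomtest(k=74, n=200, p=0.2, alternative="greater")
print(f"accuracy = {74 / 200:.2f}, p = {result.pvalue:.3g}")
```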
Collapse
Affiliation(s)
- Pierce Edmiston
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
| | - Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
| | - Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
| |
Collapse
|
46
|
Kolodny O, Edelman S. The evolution of the capacity for language: the ecological context and adaptive value of a process of cognitive hijacking. Philos Trans R Soc Lond B Biol Sci 2018; 373:rstb.2017.0052. [PMID: 29440518 DOI: 10.1098/rstb.2017.0052] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/12/2017] [Indexed: 01/10/2023] Open
Abstract
Language plays a pivotal role in the evolution of human culture, yet the evolution of the capacity for language, which occurred uniquely within the hominin lineage, remains little understood. Bringing together insights from cognitive psychology, neuroscience, archaeology and behavioural ecology, we hypothesize that this singular occurrence was triggered by exaptation, or 'hijacking', of existing cognitive mechanisms related to sequential processing and motor execution. Observed coupling of the communication system with circuits related to complex action planning and control supports this proposition, but the prehistoric ecological contexts in which this coupling may have occurred and its adaptive value remain elusive. Evolutionary reasoning rules out most existing hypotheses regarding the ecological context of language evolution, which focus on ultimate explanations and ignore proximate mechanisms. Coupling of communication and motor systems, although possible in a short period on evolutionary timescales, required a multi-stepped adaptive process, involving multiple genes and gene networks. We suggest that the behavioural context that exerted the selective pressure to drive these sequential adaptations had to be one in which each of the systems undergoing coupling was independently necessary or highly beneficial, as well as frequent and recurring over evolutionary time. One such context could have been the teaching of tool production or tool use. In the present study, we propose the Cognitive Coupling hypothesis, which brings together these insights and outlines a unifying theory for the evolution of the capacity for language. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'.
Collapse
Affiliation(s)
- Oren Kolodny
- Department of Biology, Stanford University, Stanford, CA 94305, USA
| | - Shimon Edelman
- Department of Psychology, Cornell University, Ithaca, NY 14853-7601, USA
| |
Collapse
|
47
|
Murillo E, Ortega C, Otones A, Rujas I, Casla M. Changes in the Synchrony of Multimodal Communication in Early Language Development. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:2235-2245. [PMID: 30090947 DOI: 10.1044/2018_jslhr-l-17-0402] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2017] [Accepted: 04/27/2018] [Indexed: 06/08/2023]
Abstract
PURPOSE The aim of this study is to analyze the changes in temporal synchrony between gesture and speech of multimodal communicative behaviors in the transition from babbling to two-word productions. METHOD Ten Spanish-speaking children were observed at 9, 12, 15, and 18 months of age in a semistructured play situation. We longitudinally analyzed the synchrony between gestures and vocal productions and between their prominent parts. We also explored the relationship between gestural-vocal synchrony and independent measures of language development. RESULTS Results showed that multimodal communicative behaviors tend to be shorter with age, with an increasing overlap of their constituent elements. The same pattern was found when considering the synchrony between the prominent parts. The proportion of overlap between gestural and vocal elements at 15 months of age, as well as the proportion of the stroke overlapped with vocalization, appear to be related to lexical development 3 months later. CONCLUSIONS These results suggest that children produce gestures and vocalizations as coordinated elements of a single communication system before the transition to the two-word stage. This coordination is related to subsequent lexical development in this period. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.6912242.
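The central measure, the proportion of a gesture overlapped by a vocalization, is straightforward to compute from annotated onsets and offsets; a minimal sketch with hypothetical intervals (seconds) follows.

```python
def overlap_proportion(gesture, vocalization):
    """Proportion of the gesture interval overlapped by the vocalization.

    Both arguments are (onset, offset) tuples in seconds.
    """
    start = max(gesture[0], vocalization[0])
    end = min(gesture[1], vocalization[1])
    overlap = max(0.0, end - start)
    return overlap / (gesture[1] - gesture[0])

# Hypothetical multimodal behavior: a pointing gesture and a vocalization.
print(overlap_proportion(gesture=(0.10, 0.90), vocalization=(0.35, 1.20)))  # 0.6875
```

The same function applies when the gesture interval is restricted to the stroke, giving the stroke-overlap proportion the abstract relates to later lexical development.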
Collapse
Affiliation(s)
| | | | | | - Irene Rujas
- Centro de Estudios Superiores Cardenal Cisneros, Madrid, Spain
| | | |
Collapse
|
48
|
Perlman M, Little H, Thompson B, Thompson RL. Iconicity in Signed and Spoken Vocabulary: A Comparison Between American Sign Language, British Sign Language, English, and Spanish. Front Psychol 2018; 9:1433. [PMID: 30154747 PMCID: PMC6102584 DOI: 10.3389/fpsyg.2018.01433] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Accepted: 07/23/2018] [Indexed: 11/23/2022] Open
Abstract
Considerable evidence now shows that all languages, signed and spoken, exhibit a significant amount of iconicity. We examined how the visual-gestural modality of signed languages facilitates iconicity for different kinds of lexical meanings compared to the auditory-vocal modality of spoken languages. We used iconicity ratings of hundreds of signs and words to compare iconicity across the vocabularies of two signed languages (American Sign Language and British Sign Language) and two spoken languages (English and Spanish). We examined (1) the correlation in iconicity ratings between the languages; (2) the relationship between iconicity and an array of semantic variables (ratings of concreteness, sensory experience, imageability, perceptual strength of vision, audition, touch, smell and taste); (3) how iconicity varies between broad lexical classes (nouns, verbs, adjectives, grammatical words and adverbs); and (4) between more specific semantic categories (e.g., manual actions, clothes, colors). The results show several notable patterns that characterize how iconicity is spread across the four vocabularies. There were significant correlations in the iconicity ratings between the four languages, including English with ASL, BSL, and Spanish. The highest correlation was between ASL and BSL, suggesting iconicity may be more transparent in signs than words. In each language, iconicity was distributed according to the semantic variables in ways that reflect the semiotic affordances of the modality (e.g., more concrete meanings more iconic in signs, not words; more auditory meanings more iconic in words, not signs; more tactile meanings more iconic in both signs and words). Analysis of the 220 meanings with ratings in all four languages further showed characteristic patterns of iconicity across broad and specific semantic domains, including those that distinguished between signed and spoken languages (e.g., verbs more iconic in ASL, BSL, and English, but not Spanish; manual actions especially iconic in ASL and BSL; adjectives more iconic in English and Spanish; color words especially low in iconicity in ASL and BSL). These findings provide the first quantitative account of how iconicity is spread across the lexicons of signed languages in comparison to spoken languages.
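A minimal sketch of the cross-language correlation analysis: pairwise correlations of iconicity ratings over meanings shared across vocabularies. The ratings, scale, and meanings below are fabricated (the study rated hundreds of items per language), and Spearman correlation is an assumption about the exact statistic.

```python
import pandas as pd

# Fabricated iconicity ratings (1-7) for meanings rated in all four languages.
ratings = pd.DataFrame(
    {
        "ASL": [5.1, 2.3, 4.4, 1.8, 3.9],
        "BSL": [4.8, 2.6, 4.1, 2.0, 3.7],
        "English": [3.2, 2.1, 2.9, 1.7, 2.5],
        "Spanish": [3.0, 2.4, 2.6, 1.9, 2.2],
    },
    index=["hammer", "table", "drink", "idea", "cat"],
)

# Pairwise rank correlations across the shared meanings.
print(ratings.corr(method="spearman").round(2))
```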
Collapse
Affiliation(s)
- Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, United Kingdom
| | - Hannah Little
- Department of Applied Sciences, University of the West of England, Bristol, United Kingdom
| | - Bill Thompson
- Language and Cognition Department, Max Planck Institute of Psycholinguistics, Nijmegen, Netherlands
| | - Robin L. Thompson
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
| |
Collapse
|
49
|
Perniss P. Why We Should Study Multimodal Language. Front Psychol 2018; 9:1109. [PMID: 30002643 PMCID: PMC6032889 DOI: 10.3389/fpsyg.2018.01109] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Accepted: 06/11/2018] [Indexed: 12/21/2022] Open
Affiliation(s)
- Pamela Perniss
- School of Humanities, University of Brighton, Brighton, United Kingdom
| |
Collapse
|
50
|
Ferrara L, Hodge G. Language as Description, Indication, and Depiction. Front Psychol 2018; 9:716. [PMID: 29875712 PMCID: PMC5974176 DOI: 10.3389/fpsyg.2018.00716] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2017] [Accepted: 04/24/2018] [Indexed: 11/13/2022] Open
Abstract
Signers and speakers coordinate a broad range of intentionally expressive actions within the spatiotemporal context of their face-to-face interactions (Parmentier, 1994; Clark, 1996; Johnston, 1996; Kendon, 2004). Varied semiotic repertoires combine in different ways, the details of which are rooted in the interactions occurring in a specific time and place (Goodwin, 2000; Kusters et al., 2017). However, intense focus in linguistics on conventionalized symbolic form/meaning pairings (especially those which are arbitrary) has obscured the importance of other semiotics in face-to-face communication. A consequence is that the communicative practices resulting from diverse ways of being (e.g., deaf, hearing) are not easily united into a global theoretical framework. Here we promote a theory of language that accounts for how diverse humans coordinate their semiotic repertoires in face-to-face communication, bringing together evidence from anthropology, semiotics, gesture studies and linguistics. Our aim is to facilitate direct comparison of different communicative ecologies. We build on Clark’s (1996) theory of language use as ‘actioned’ via three methods of signaling: describing, indicating, and depicting. Each method is fundamentally different to the other, and they can be used alone or in combination with others during the joint creation of multimodal ‘composite utterances’ (Enfield, 2009). We argue that a theory of language must be able to account for all three methods of signaling as they manifest within and across composite utterances. From this perspective, language—and not only language use—can be viewed as intentionally communicative action involving the specific range of semiotic resources available in situated human interactions.
Collapse
Affiliation(s)
- Lindsay Ferrara
- Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
| | - Gabrielle Hodge
- Deafness Cognition and Language Centre, University College London, London, United Kingdom
| |
Collapse
|