1. Bauer A, Kuder A, Schulder M, Schepens J. Phonetic differences between affirmative and feedback head nods in German Sign Language (DGS): A pose estimation study. PLoS One 2024; 19:e0304040. PMID: 38814896; PMCID: PMC11139280; DOI: 10.1371/journal.pone.0304040. Open access.
Abstract
This study investigates head nods in natural dyadic German Sign Language (DGS) interaction, with the aim of determining whether head nods serving different functions vary in their phonetic characteristics. Earlier research on spoken and sign language interaction has revealed that head nods vary in the form of the movement. However, most claims about the phonetic properties of head nods have been based on manual annotation without reference to naturalistic text types, and the head nods produced by the addressee have been largely ignored. Detailed information about the phonetic properties of the addressee's head nods and their interaction with manual cues is lacking for DGS as well as for other sign languages, and the existence of a form-function relationship for head nods remains uncertain. We hypothesize that head nods functioning in the context of affirmation differ from those signaling feedback in their form and in their co-occurrence with manual items. To test this hypothesis, we apply OpenPose, a computer vision toolkit, to extract head nod measurements from video recordings and examine head nods in terms of their duration, amplitude, and velocity. We describe the basic phonetic properties of head nods in DGS and their interaction with manual items in naturalistic corpus data. Our results show that the phonetic properties of affirmative nods differ from those of feedback nods. Feedback nods are on average slower in production and smaller in amplitude than affirmation nods, and they are commonly produced without a co-occurring manual element. We attribute these variations in phonetic properties to the distinct roles the two cues fulfill in the turn-taking system. This research underlines the importance of non-manual cues in shaping the turn-taking system of sign languages, establishing links between research fields such as sign language linguistics, conversation analysis, quantitative linguistics, and computer vision.
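The kinematic measures this abstract names (duration, amplitude, velocity) can be derived from pose-estimation output roughly as follows. This is only an illustrative sketch over a single keypoint trace; the function, the keypoint choice, and the frame rate are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def nod_metrics(y, fps=50):
    """Compute duration, amplitude, and peak velocity of one head nod from a
    1-D time series of vertical head position (e.g. an OpenPose nose-keypoint
    y-coordinate across the frames of the nod). Illustrative only."""
    y = np.asarray(y, dtype=float)
    displacement = y - y[0]                          # movement relative to nod onset
    duration = len(y) / fps                          # seconds
    amplitude = float(np.max(np.abs(displacement)))  # pixels or normalized units
    velocity = np.abs(np.diff(displacement)) * fps   # frame-to-frame speed, units/s
    peak_velocity = float(np.max(velocity))
    return duration, amplitude, peak_velocity

# A synthetic half-second nod sampled at 50 fps: down 10 units and back up.
t = np.linspace(0, np.pi, 25)
nod = 10 * np.sin(t)
dur, amp, vel = nod_metrics(nod, fps=50)
```

On this synthetic trace, `dur` is 0.5 s and `amp` is 10 units; on real data, comparing such values between affirmative and feedback nods is the kind of contrast the study tests.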
Affiliation(s)
- Anastasia Bauer
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
- Anna Kuder
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
- Marc Schulder
- Institute for German Sign Language and Communication of the Deaf, University of Hamburg, Hamburg, Germany
- Job Schepens
- Department of Linguistics, General Linguistics, University of Cologne, Cologne, Germany
2. Peper A. A general theory of consciousness III: the human catastrophe. Commun Integr Biol 2024; 17:2353197. PMID: 38812722; PMCID: PMC11135873; DOI: 10.1080/19420889.2024.2353197. Open access.
Abstract
It is generally assumed that verbal communication can articulate concepts like 'fact' and 'truth' accurately. However, language is fundamentally inaccurate and ambiguous, and it is not possible to express exact propositions accurately in an ambiguous medium. Whether truth exists or not, language cannot express it in any exact way. A major problem for verbal communication is that words are interpreted in fundamentally different ways by the sender and the receiver. In addition, intrapersonal verbal communication, the voice in our head, is a useless extension to the thought process and results in misunderstanding our own thoughts. The evolution of language has had a profound impact on human life. Most consequential has been that it allowed people to question the old human rules of behavior, the pre-language way of living. As language could not accurately express the old rules, they lost their authority and disappeared. A long period without any rules of how to live together must have followed, probably accompanied by complete chaos. Later, new rules were devised in language, but these new rules were also questioned and had to be enforced by punishment. Language changed the peaceful human way of living under the old rules into violent and aggressive forms of living under punitive control. Religion then tried to incorporate the old rules into the harsh verbal world: the rules were expressed in language through parables about imaginary beings, the gods, who possessed the power of the old rules but who could be related to through their human appearance and behavior.
Affiliation(s)
- Abraham Peper
- Department of Biomedical Engineering & Physics, Academic Medical Centre, University of Amsterdam, Amsterdam, The Netherlands
3. Nirme J, Gulz A, Haake M, Gullberg M. Early or synchronized gestures facilitate speech recall: a study based on motion capture data. Front Psychol 2024; 15:1345906. PMID: 38596333; PMCID: PMC11002957; DOI: 10.3389/fpsyg.2024.1345906. Open access.
Abstract
Introduction: Temporal coordination between speech and gestures has been thoroughly studied in natural production. In most cases, gesture strokes precede or coincide with the stressed syllable of the words they are semantically associated with.
Methods: To understand whether the processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing, or eliminating individual gestures on memory for words. In the experiment, 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position, synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. Strength of recall was operationalized as the inclusion of the target word in the free recall.
Results: Both eliminated and delayed gesture strokes resulted in reduced recall rates compared to synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and the stressed syllables of target words, the greater the negative effect on recall.
Discussion: These results indicate that speech-gesture synchrony affects memory for speech, and that the temporal patterns common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion capture-based 3D-animated speakers to create an experimental paradigm for the study of speech-gesture comprehension.
Affiliation(s)
- Jens Nirme
- Lund University Cognitive Science, Lund, Sweden
- Agneta Gulz
- Lund University Cognitive Science, Lund, Sweden
- Marianne Gullberg
- Centre for Languages and Literature and Lund University Humanities Lab, Lund University, Lund, Sweden
4. Winter B, Lupyan G, Perry LK, Dingemanse M, Perlman M. Iconicity ratings for 14,000+ English words. Behav Res Methods 2024; 56:1640-1655. PMID: 37081237; DOI: 10.3758/s13428-023-02112-6.
Abstract
Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
Affiliation(s)
- Bodo Winter
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Lynn K Perry
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mark Dingemanse
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Marcus Perlman
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
5. Trujillo JP, Holler J. Conversational facial signals combine into compositional meanings that change the interpretation of speaker intentions. Sci Rep 2024; 14:2286. PMID: 38280963; PMCID: PMC10821935; DOI: 10.1038/s41598-024-52589-0. Open access.
Abstract
Human language is extremely versatile, combining a limited set of signals in an unlimited number of ways. However, it is unknown whether conversational visual signals feed into the composite utterances with which speakers communicate their intentions. We assessed whether different combinations of visual signals lead to different intent interpretations of the same spoken utterance. Participants viewed a virtual avatar uttering spoken questions while producing single visual signals (i.e., head turn, head tilt, eyebrow raise) or combinations of these signals. After each video, participants classified the communicative intention behind the question. We found that composite utterances combining several visual signals conveyed different meanings than utterances accompanied by a single visual signal. However, responses to combinations of signals were more similar to responses to related, rather than unrelated, individual signals, indicating a consistent influence of the individual visual signals on the whole. This study therefore provides the first evidence for compositional, non-additive (i.e., Gestalt-like) perception of multimodal language.
Affiliation(s)
- James P Trujillo
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Nijmegen, The Netherlands
6. Mills G, Redeker G. Self-Repair Increases Referential Coordination. Cogn Sci 2023; 47:e13329. PMID: 37606349; DOI: 10.1111/cogs.13329.
Abstract
When interlocutors repeatedly describe referents to each other, they rapidly converge on referring expressions which become increasingly systematized and abstract as the interaction progresses. Previous experimental research suggests that interactive repair mechanisms in dialogue underpin convergence. However, this research has so far only focused on the role of other-initiated repair and has not examined whether self-initiated repair might also play a role. To investigate this question, we report the results from a computer-mediated maze task experiment. In this task, participants communicate with each other via an experimental chat tool, which selectively transforms participants' private turn-revisions into public self-repairs that are made visible to the other participant. For example, if a participant, A, types "On the top square," and then before sending, A revises the turn to "On the top row," the server automatically detects the revision and transforms the private turn-revisions into a public self-repair, for example, "On the top square umm I meant row." Participants who received these transformed turns used more abstract and systematized referring expressions, but performed worse at the task. We argue that this is due to the artificial self-repairs causing participants to put more effort into diagnosing and resolving the referential coordination problems they face in the task, yielding better grounded spatial semantics and consequently increased use of abstract referring expressions.
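The chat tool's transformation, turning a private turn-revision into a public self-repair, can be approximated with a word-level diff. The function name and the repair phrase below are illustrative assumptions; the actual experimental software is not described at this level of detail.

```python
import difflib

def public_self_repair(draft, revision, marker="umm I meant"):
    """Render a private revision as a public self-repair: show the full
    draft first, then a repair marker, then the words the revision
    introduced. A sketch; handles replacements and insertions only."""
    a, b = draft.split(), revision.split()
    changed = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
        if tag != "equal":
            changed.extend(b[j1:j2])  # words introduced by the revision
    if not changed:
        return draft  # nothing was revised; send the draft unchanged
    return f"{draft} {marker} {' '.join(changed)}"

repaired = public_self_repair("On the top square", "On the top row")
```

Here `repaired` reproduces the example from the abstract: "On the top square umm I meant row".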
Affiliation(s)
- Gregory Mills
- Centre for Language and Cognition (CLCG), Faculty of Arts, University of Groningen
- School of Computer Science and Mathematics, Kingston University
- Gisela Redeker
- Centre for Language and Cognition (CLCG), Faculty of Arts, University of Groningen
7. Kilpatrick A, Ćwiek A, Lewis E, Kawahara S. A cross-linguistic, sound symbolic relationship between labial consonants, voiced plosives, and Pokémon friendship. Front Psychol 2023; 14:1113143. PMID: 36910799; PMCID: PMC10000297; DOI: 10.3389/fpsyg.2023.1113143. Open access.
Abstract
Introduction: This paper presents a cross-linguistic study of sound symbolism, analysing a six-language corpus of all Pokémon names available as of January 2022. It tests the effects of labial consonants and voiced plosives on a Pokémon attribute known as friendship, a mechanic in the core series of Pokémon video games that arguably reflects how friendly each Pokémon is.
Method: Poisson regression is used to examine the relationship between the friendship mechanic and the number of times /p/, /b/, /d/, /m/, /g/, and /w/ occur in the names of English, Japanese, Korean, Chinese, German, and French Pokémon.
Results: The bilabial plosives /p/ and /b/ typically represent high friendship values in Pokémon names, while /m/, /d/, and /g/ typically represent low friendship values. No association is found for /w/ in any language.
Discussion: Many previously known cases of cross-linguistic sound symbolic patterns can be explained by the relationship between how the sounds in words are articulated and the physical qualities of their referents. This study, however, builds on the underexplored relationship between sound symbolism and abstract qualities.
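A Poisson regression of the kind named in this abstract (a count-valued outcome regressed on phoneme counts with a log link) can be fitted with a few iteratively reweighted least squares steps. The data below are synthetic and the variable names are illustrative; the paper's actual models and corpus are more elaborate.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson GLM with a log link via iteratively reweighted least
    squares (Fisher scoring). X is (n, p) with an intercept column."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())           # sensible starting point
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # fitted means under the log link
        # Newton/Fisher step: (X^T W X)^{-1} X^T (y - mu), with W = diag(mu)
        beta = beta + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

# Synthetic example: the outcome grows with a per-name phoneme count.
rng = np.random.default_rng(0)
counts = rng.integers(0, 4, size=500)    # e.g. occurrences of /b/ in a name
X = np.column_stack([np.ones(500), counts])
true_beta = np.array([1.0, 0.4])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_irls(X, y)
```

With 500 observations the fitted coefficients land close to the generating values, and the sign of the slope is the quantity of interest: a positive coefficient for a phoneme means names containing it tend to have higher friendship values.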
8. Motamedi Y, Wolters L, Schouwstra M, Kirby S. The Effects of Iconicity and Conventionalization on Word Order Preferences. Cogn Sci 2022; 46:e13203. PMID: 36251421; PMCID: PMC9787421; DOI: 10.1111/cogs.13203.
Abstract
Of the six possible orderings of the three main constituents of language (subject, verb, and object), two (SOV and SVO) are predominant cross-linguistically. Previous research using the silent gesture paradigm, in which hearing participants produce or respond to gestures without speech, has shown that factors such as reversibility, salience, and animacy can affect the preferences for different orders. Here, we test whether participants' preferences for orders conditioned on the semantics of the event change depending on (i) the iconicity of individual gestural elements and (ii) prior knowledge of a conventional lexicon. Our findings demonstrate the same preference for semantically conditioned word order found in previous studies, specifically that SOV and SVO are preferred differentially for different types of events. We do not find that the iconicity of individual gestures affects participants' ordering preferences; however, we do find that learning a lexicon leads to a stronger preference for SVO-like orders overall. Finally, we compare our findings from English speakers, whose dominant order is SVO, with data from speakers of an SOV-dominant language, Turkish. We find that, while learning a lexicon increases SVO preference for both sets of participants, this effect is mediated by language background and event type, suggesting that an interplay of factors together determines the preference for different ordering patterns. Taken together, our results support a view of word order as a gradient phenomenon responding to multiple biases.
Affiliation(s)
- Lucie Wolters
- Centre for Language Evolution, The University of Edinburgh
- Simon Kirby
- Centre for Language Evolution, The University of Edinburgh
9. Rasenberg M, Özyürek A, Bögels S, Dingemanse M. The Primacy of Multimodal Alignment in Converging on Shared Symbols for Novel Referents. Discourse Processes 2022. DOI: 10.1080/0163853x.2021.1992235.
Affiliation(s)
- Marlou Rasenberg
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Asli Özyürek
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Sara Bögels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Department of Communication and Cognition, Tilburg University
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
- Mark Dingemanse
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Communicative Alignment in Brain and Behaviour team, Language in Interaction consortium, the Netherlands
10. Fay N, Walker B, Ellison TM, Blundell Z, De Kleine N, Garde M, Lister CJ, Goldin-Meadow S. Gesture is the primary modality for language creation. Proc Biol Sci 2022; 289:20220066. PMID: 35259991; PMCID: PMC8905156; DOI: 10.1098/rspb.2022.0066. Open access.
Abstract
How language began is one of the oldest questions in science, but theories remain speculative due to a lack of direct evidence. Here, we report two experiments that generate empirical evidence to inform gesture-first and vocal-first theories of language origin; in each, we tested modern humans' ability to communicate a range of meanings (995 distinct words) using either gesture or non-linguistic vocalization. Experiment 1 is a cross-cultural study, with signal Producers sampled from Australia (n = 30, mean age = 32.63, s.d. = 12.42) and Vanuatu (n = 30, mean age = 32.40, s.d. = 11.76). Experiment 2 is a cross-experiential study in which Producers were either sighted (n = 10, mean age = 39.60, s.d. = 11.18) or severely vision-impaired (n = 10, mean age = 39.40, s.d. = 10.37). A group of undergraduate student Interpreters (n = 140) guessed the meaning of the signals created by the Producers. Communication success was substantially higher in the gesture modality than in the vocal modality (twice as high overall; 61.17% versus 29.04% success). This was true within cultures, across cultures, and even for the signals produced by severely vision-impaired participants. The success of gesture is attributed in part to its greater universality (i.e. similarity in form across different Producers). Our results support the hypothesis that gesture is the primary modality for language creation.
Affiliation(s)
- Nicolas Fay
- School of Psychological Science, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
- Bradley Walker
- School of Psychological Science, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
- T. Mark Ellison
- Collaborative Research Centre for Linguistic Prominence, University of Cologne, Cologne, NRW, Germany
- Zachary Blundell
- School of Psychological Science, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
- Naomi De Kleine
- School of Psychological Science, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
- Murray Garde
- School of Culture, History and Language, College of Asia and the Pacific, Australian National University, Canberra, ACT, Australia
- Casey J. Lister
- School of Psychological Science, University of Western Australia, 35 Stirling Highway, Crawley, WA 6009, Australia
11. Żywiczyński P, Wacewicz S, Lister C. Pantomimic fossils in modern human communication. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200204. PMID: 33745309; PMCID: PMC8059511; DOI: 10.1098/rstb.2020.0204. Open access.
Abstract
Bodily mimesis, the capacity to use the body representationally, was one of the key innovations that allowed early humans to go beyond the 'baseline' of generalized ape communication and cognition. We argue that the original human-specific communication afforded by bodily mimesis was based on signs that involve three entities: an expression that represents an object (i.e. communicated content) for an interpreter. We further propose that the core component of this communication, pantomime, was able to transmit referential information that was not limited to select semantic domains or the 'here-and-now', by means of motivated (most importantly iconic) signs. Pressures for expressivity and economy then led to the conventionalization of signs and a growth of linguistic characteristics: semiotic systematicity and combinatorial expression. Despite these developments, both naturalistic and experimental data suggest that the system of pantomime did not disappear and is actively used by modern humans. Its contemporary manifestations, or pantomimic fossils, emerge when language cannot be used, for instance when people do not share a common language, or in situations where the use of (spoken) language is difficult, impossible or forbidden. Under such circumstances, people bootstrap communication by means of pantomime and, when these circumstances persist, the newly emergent pantomimic communication becomes increasingly language-like. This article is part of the theme issue 'Reconstructing prehistoric languages'.
Affiliation(s)
- Przemysław Żywiczyński
- Center for Language Evolution Studies, Nicolaus Copernicus University in Torun, 87-100 Torun, Kujawsko-Pomorskie, Poland
- Sławomir Wacewicz
- Center for Language Evolution Studies, Nicolaus Copernicus University in Torun, 87-100 Torun, Kujawsko-Pomorskie, Poland
- Casey Lister
- Faculty of Science, School of Psychological Science, The University of Western Australia, 35 Stirling Highway, 6009 Perth, WA, Australia
12. Ferretti F, Adornetti I. Persuasive conversation as a new form of communication in Homo sapiens. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200196. PMID: 33745315; DOI: 10.1098/rstb.2020.0196. Open access.
Abstract
The aim of this paper is twofold: to propose that conversation is the distinctive feature of Homo sapiens' communication; and to show that the emergence of modern language is tied to the transition from pantomime to verbal and grammatically complex forms of narrative. It is suggested that (animal and human) communication is a form of persuasion and that storytelling was the best tool developed by humans to convince others. In the early stage of communication, archaic hominins used forms of pantomimic storytelling to persuade others. Although pantomime is a powerful tool for persuasive communication, it is proposed that it is not an effective tool for persuasive conversation: conversation is characterized by a form of reciprocal persuasion among peers; instead, pantomime has a mainly asymmetrical character. The selective pressure towards persuasive reciprocity of the conversational level is the evolutionary reason that allowed the transition from pantomime to grammatically complex codes in H. sapiens, which favoured the evolution of speech. This article is part of the theme issue 'Reconstructing prehistoric languages'.
Affiliation(s)
- Francesco Ferretti
- Department of Philosophy, Communication and Performing Arts, Roma Tre University, 00146 Rome, Italy
- Ines Adornetti
- Department of Philosophy, Communication and Performing Arts, Roma Tre University, 00146 Rome, Italy