1.
Ibrahim S, Clarke M, Vasalou A, Bezemer J. Common ground in AAC: how children who use AAC and teaching staff shape interaction in the multimodal classroom. Augment Altern Commun 2024;40:74-85. PMID: 38047627. DOI: 10.1080/07434618.2023.2283853.
Abstract
Children who use augmentative and alternative communication (AAC) are multimodal communicators. However, in classroom interactions involving children and staff, achieving mutual understanding and accomplishing task-oriented goals by attending to the child's unaided AAC can be challenging. This study draws on excerpts of video recordings of interactions in a classroom for 6-9-year-old children who used AAC to explore how three child participants used the range of multimodal resources available to them (vocal, movement-based and gestural, technological, and temporal) to shape, and to some degree co-control, classroom interactions. Our research examined achievements and problems in establishing a sense of common ground and the realization of child agency. Through detailed multimodal analysis, this paper renders visible different types of practices (rejecting a request for clarification, drawing new parties into a conversation, disrupting whole-class teacher talk) through which the children in the study voiced themselves in persuasive ways. It concludes by suggesting that multimodal accounts paint a more nuanced picture of children's resourcefulness and conversational asymmetry, one that highlights children's agency amidst material, semiotic, and institutional constraints.
Affiliation(s)
- Seray Ibrahim: Institute of Education, University College London, London, UK; Department of Informatics, King's College London, London, UK
- Michael Clarke: Department of Speech, Language and Hearing Sciences, San Francisco State University, San Francisco, CA, USA
- Asimina Vasalou: Institute of Education, University College London, London, UK
- Jeff Bezemer: Institute of Education, University College London, London, UK
2.
Kosie JE, Lew-Williams C. Infant-directed communication: Examining the many dimensions of everyday caregiver-infant interactions. Dev Sci 2024:e13515. PMID: 38618899. DOI: 10.1111/desc.13515.
Abstract
Everyday caregiver-infant interactions are dynamic and multidimensional. However, existing research underestimates the dimensionality of infants' experiences, often focusing on one or two communicative signals (e.g., speech alone, or speech and gesture together). Here, we introduce "infant-directed communication" (IDC): the suite of communicative signals from caregivers to infants including speech, action, gesture, emotion, and touch. We recorded 10 min of at-home play between 44 caregivers and their 18- to 24-month-old infants from predominantly white, middle-class, English-speaking families in the United States. Interactions were coded for five dimensions of IDC as well as infants' gestures and vocalizations. Most caregivers used all five dimensions of IDC throughout the interaction, and these dimensions frequently overlapped. For example, over 60% of the speech that infants heard was accompanied by one or more non-verbal communicative cues. However, we saw marked variation across caregivers in their use of IDC, likely reflecting tailored communication to the behaviors and abilities of their infant. Moreover, caregivers systematically increased the dimensionality of IDC, using more overlapping cues in response to infant gestures and vocalizations, and more IDC with infants who had smaller vocabularies. Understanding how and when caregivers use all five signals, together and separately, in interactions with infants has the potential to redefine how developmental scientists conceive of infants' communicative environments, and enhance our understanding of the relations between caregiver input and early learning.
RESEARCH HIGHLIGHTS:
- Infants' everyday interactions with caregivers are dynamic and multimodal, but existing research has underestimated the multidimensionality (i.e., the diversity of simultaneously occurring communicative cues) inherent in infant-directed communication.
- Over 60% of the speech that infants encounter during at-home, free-play interactions overlaps with one or more of a variety of non-speech communicative cues.
- The multidimensionality of caregivers' communicative cues increases in response to infants' gestures and vocalizations, providing new information about how infants' own behaviors shape their input.
- These findings emphasize the importance of understanding how caregivers use a diverse set of communicative behaviors, both separately and together, during everyday interactions with infants.
Affiliation(s)
- Jessica E. Kosie: Department of Psychology, Princeton University, Princeton, New Jersey, USA; School of Social and Behavioral Sciences, Arizona State University, Phoenix, Arizona, USA
- Casey Lew-Williams: Department of Psychology, Princeton University, Princeton, New Jersey, USA
3.
Seijdel N, Schoffelen JM, Hagoort P, Drijvers L. Attention Drives Visual Processing and Audiovisual Integration During Multimodal Communication. J Neurosci 2024;44:e0870232023. PMID: 38199864. PMCID: PMC10919203. DOI: 10.1523/jneurosci.0870-23.2023.
Abstract
During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.
Affiliation(s)
- Noor Seijdel: Neurobiology of Language Department, The Communicative Brain, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Jan-Mathijs Schoffelen: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 HT, The Netherlands
- Peter Hagoort: Neurobiology of Language Department, The Communicative Brain, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 HT, The Netherlands
- Linda Drijvers: Neurobiology of Language Department, The Communicative Brain, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 HT, The Netherlands
4.
Duboisdindien G. The analysis of gestural and verbal pragmatic markers produced by Mild Cognitive Impaired participants during longitudinal and autobiographical interviews. Clin Linguist Phon 2024;38:116-137. PMID: 36755395. DOI: 10.1080/02699206.2023.2174450.
Abstract
CONTEXT: This corpus-based study presents a multimodal analysis of verbal and non-verbal pragmatic markers in elderly people with Mild Cognitive Impairment (MCI) aged over 75 years. METHODS: The corpus collection and analysis methodology is described in the Belgian CorpAGEst transversal study and the French VintAGE longitudinal and transversal pilot studies. The protocols are available online in both English and French. RESULTS & CONCLUSION: Our general findings indicate that with ageing, verbal pragmatic markers acquire an interactive function that allows people with MCI to maintain intersubjective relationships with their interlocutor. Furthermore, at the non-verbal level, gestural manifestations are increasingly used over time, with a preference for non-verbal pragmatic markers with referential and adaptive functions. We aim to show clinicians and family caregivers the benefits of linguistic and interactional methods of scientific investigation in cognitively impaired ageing.
5.
Schäfer M, Sydow D, Schauer M, Doumbia J, Schmitt T, Rödel MO. Species- and sex-specific chemical composition from an internal gland-like tissue of an African frog family. Proc Biol Sci 2024;291:20231693. PMID: 38196358. PMCID: PMC10777154. DOI: 10.1098/rspb.2023.1693.
Abstract
Intraspecific chemical communication in frogs is understudied and the few published cases are limited to externally visible and male-specific breeding glands. Frogs of the family Odontobatrachidae, a West African endemic complex of five morphologically cryptic species, have large, fatty gland-like strands along their lower mandible. We investigated the general anatomy of this gland-like strand and analysed its chemical composition. We found the strand to be present in males and females of all species. The strand varies in markedness, with well-developed strands usually found in reproductively active individuals. The strands are situated under particularly thin skin sections, the vocal sac in male frogs and a respective area in females. Gas-chromatography/mass spectrometry and multivariate analysis revealed that the strands contain sex- and species-specific chemical profiles, which are consistent across geographically distant populations. The profiles varied between reproductive and non-reproductive individuals. These results indicate that the mandibular strands in the Odontobatrachidae comprise a so far overlooked structure (potentially a gland) that most likely plays a role in the mating and/or breeding behaviour of the five Odontobatrachus species. Our results highlight the relevance of multimodal signalling in anurans, and indicate that chemical communication in frogs may not be restricted to sexually dimorphic, apparent skin glands.
Affiliation(s)
- Marvin Schäfer: Museum für Naturkunde – Leibniz Institute for Evolution and Biodiversity Science, Invalidenstraße 43, 10115 Berlin, Germany
- David Sydow: Zoology III, Department of Animal Ecology and Tropical Biology, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
- Maria Schauer: Museum für Naturkunde – Leibniz Institute for Evolution and Biodiversity Science, Invalidenstraße 43, 10115 Berlin, Germany
- Joseph Doumbia: ONG EnviSud Guinée, Quartier Kipé T2, Commune de Ratoma, 530 BP 558, Conakry, Guinea
- Thomas Schmitt: Zoology III, Department of Animal Ecology and Tropical Biology, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
- Mark-Oliver Rödel: Museum für Naturkunde – Leibniz Institute for Evolution and Biodiversity Science, Invalidenstraße 43, 10115 Berlin, Germany
6.
Zhao L, Halfwerk W, Cui J. Response to comment on 'Parasite defensive limb movements enhance acoustic signal attraction in male little torrent frogs'. eLife 2023;12:e90404. PMID: 37812200. PMCID: PMC10561973. DOI: 10.7554/elife.90404.
Abstract
Recently we showed that limb movements associated with anti-parasite defenses can enhance acoustic signal attraction in male little torrent frogs (Amolops torrentis), which suggests a potential pathway for physical movements to become co-opted into mating displays (Zhao et al., 2022). Anderson et al. argue for alternative explanations of our results and provide a reanalysis of part of our data (Anderson et al., 2023). We acknowledge some of the points raised and provide an additional analysis in support of our hypothesis.
Affiliation(s)
- Longhui Zhao: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China; Ministry of Education Key Laboratory for Ecology of Tropical Islands, Key Laboratory of Tropical Animal and Plant Ecology of Hainan Province, College of Life Sciences, Hainan Normal University, Haikou, China
- Wouter Halfwerk: Department of Ecological Sciences, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Jianguo Cui: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
7.
Lindborg P, Chopra SS, Groß-Vogt K. Editorial: Data perceptualization for climate science communication. Front Psychol 2023;14:1263971. PMID: 37637907. PMCID: PMC10457114. DOI: 10.3389/fpsyg.2023.1263971.
Affiliation(s)
- PerMagnus Lindborg: SoundLab, School of Creative Media, City University of Hong Kong, Kowloon, Hong Kong SAR, China
- Shauhrat S. Chopra: School of Energy and Environment, City University of Hong Kong, Kowloon, Hong Kong SAR, China
- Katharina Groß-Vogt: Institut für Elektronische Musik und Akustik, Kunstuniversität Graz, Graz, Austria
8.
Kuyler A, Johnson E, Bornman J. Multimodal communication reported by familiar caregivers to build communication capacity in persons who are minimally conscious. Int J Speech Lang Pathol 2023;25:523-539. PMID: 35838322. DOI: 10.1080/17549507.2022.2096926.
Abstract
PURPOSE: Limited clinical and research evidence is available to support healthcare practitioners in the communication assessment and intervention of persons who are minimally conscious. This study placed a specific focus on the multimodal communication strategies familiar caregivers of persons who are minimally conscious observed, as well as the verbal and the nonverbal communication strategies they employed to build communication capacity. This may inform clinical practice as it provides valuable autobiographical information as well as familiar stimuli that may elicit responses from persons in a minimally conscious state. METHOD: A descriptive qualitative design employing in-depth semi-structured interviews with familiar caregivers was utilised to address the purpose of the study. RESULT: Familiar caregivers reported that they used both nonverbal and verbal communication strategies to obtain a response from persons who are minimally conscious. These caregivers also reported that these persons appeared to rely on nonverbal communication strategies to express 36 different communication functions. CONCLUSION: Based on the findings of this study, it is clear that caregivers can be beneficial to persons who are minimally conscious, if they are able to observe and capitalise on naturally occurring multimodal communication strategies and functions. This study emphasises that familiar caregivers respect and value the dignity of persons who are minimally conscious and want to improve their communication capacity, but often lack confidence in their own communication skills.
Affiliation(s)
- Ariné Kuyler: Centre for Augmentative and Alternative Communication, University of Pretoria, Hatfield, South Africa
- Ensa Johnson: Centre for Augmentative and Alternative Communication, University of Pretoria, Hatfield, South Africa
- Juan Bornman: Centre for Augmentative and Alternative Communication, University of Pretoria, Hatfield, South Africa
9.
de Mouzon C, Leboucher G. Multimodal Communication in the Human-Cat Relationship: A Pilot Study. Animals (Basel) 2023;13:1528. PMID: 37174564. PMCID: PMC10177025. DOI: 10.3390/ani13091528.
Abstract
Across all species, communication implies that an emitter sends signals to a receiver, through one or more channels. Cats can integrate visual and auditory signals sent by humans and modulate their behaviour according to the valence of the emotion perceived. However, the specific patterns and channels governing cat-to-human communication are poorly understood. This study addresses whether, in an extraspecific interaction, cats are sensitive to the communication channel used by their human interlocutor. We examined three types of interactions (vocal, visual, and bimodal) by coding video clips of 12 cats living in cat cafés. In a fourth (control) condition, the human interlocutor refrained from emitting any communication signal. We found that the modality of communication had a significant effect on the latency with which cats approached the human experimenter. Cats approached significantly faster in response to visual and bimodal communication than to vocal communication or to the "no communication" control. In addition, communication modality had a significant effect on tail-wagging behaviour. Cats displayed significantly more tail wagging when the experimenter engaged in no communication (control condition) compared to visual and bimodal communication modes, indicating that they were less comfortable in this control condition. Cats also displayed more tail wagging in response to vocal communication compared to the bimodal communication. Overall, our data suggest that cats display a marked preference for visual and bimodal cues from unfamiliar humans over vocal cues alone. Results arising from the present study may serve as a basis for practical recommendations to navigate the codes of human-cat interactions.
Affiliation(s)
- Charlotte de Mouzon: Laboratoire Ethologie Cognition Développement, Université Paris Nanterre, 92000 Nanterre, France; EthoCat – Cat Behaviour Research and Consulting Institute, 33000 Bordeaux, France
- Gérard Leboucher: Laboratoire Ethologie Cognition Développement, Université Paris Nanterre, 92000 Nanterre, France
10.
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023;17:1108354. PMID: 36816496. PMCID: PMC9932987. DOI: 10.3389/fnhum.2023.1108354.
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre "Cognition, Language, and Deafness" (CIRCLeS), Catania, Italy
- Ambra Ferrari: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre "Cognition, Language, and Deafness" (CIRCLeS), Catania, Italy
11.
Zucca S, Puche AC, Bovetti S. Editorial: The neural circuitry of mating behaviors. Front Neural Circuits 2023;16:1102051. PMID: 36685356. PMCID: PMC9853960. DOI: 10.3389/fncir.2022.1102051.
Affiliation(s)
- Stefano Zucca: Department of Life Sciences and Systems Biology, University of Turin, Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi, University of Turin, Turin, Italy
- Adam C. Puche: Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD, United States
- Serena Bovetti: Department of Life Sciences and Systems Biology, University of Turin, Turin, Italy; Neuroscience Institute Cavalieri Ottolenghi, University of Turin, Turin, Italy
12.
Zhao L, Wang J, Zhang H, Wang T, Yang Y, Tang Y, Halfwerk W, Cui J. Parasite defensive limb movements enhance acoustic signal attraction in male little torrent frogs. eLife 2022;11:e76083. PMID: 35522043. PMCID: PMC9122496. DOI: 10.7554/elife.76083.
Abstract
Many animals rely on complex signals that target multiple senses to attract mates and repel rivals. These multimodal displays can, however, also attract unintended receivers, which can be an important driver of signal complexity. Although multimodal signals are taxonomically widespread, we often lack insight into how they evolve from unimodal signals and, in particular, what roles unintended eavesdroppers play. Here, we assess whether the physical movements of parasite defense behavior increase the complexity and attractiveness of an acoustic sexual signal in the little torrent frog (Amolops torrentis). Calling males of this species often display limb movements in order to defend against blood-sucking parasites such as frog-biting midges that eavesdrop on their acoustic signal. Through mate choice tests we show that some of these midge-evoked movements influence female preference for acoustic signals. Our data suggest that midge-induced movements may be incorporated into a sexual display, targeting both hearing and vision in the intended receiver. Females may play an important role in incorporating these multiple components because they prefer signals which combine multiple modalities. Our results thus help to explain the relationship between natural and sexual selection pressures operating on signalers and how this may in turn influence multimodal signal evolution.
Affiliation(s)
- Longhui Zhao: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China; Ministry of Education Key Laboratory for Ecology of Tropical Islands, Key Laboratory of Tropical Animal and Plant Ecology of Hainan Province, College of Life Sciences, Hainan Normal University, Haikou, China
- Jichao Wang: Ministry of Education Key Laboratory for Ecology of Tropical Islands, Key Laboratory of Tropical Animal and Plant Ecology of Hainan Province, College of Life Sciences, Hainan Normal University, Haikou, China
- Haodi Zhang: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
- Tongliang Wang: Ministry of Education Key Laboratory for Ecology of Tropical Islands, Key Laboratory of Tropical Animal and Plant Ecology of Hainan Province, College of Life Sciences, Hainan Normal University, Haikou, China
- Yue Yang: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
- Yezhong Tang: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
- Wouter Halfwerk: Department of Ecological Sciences, Vrije Universiteit Amsterdam, De Boelelaan, Amsterdam, Netherlands
- Jianguo Cui: CAS Key Laboratory of Mountain Ecological Restoration and Bioresource Utilization & Ecological Restoration and Biodiversity Conservation Key Laboratory of Sichuan Province, Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu, China
13.
Pouw W, Proksch S, Drijvers L, Gamba M, Holler J, Kello C, Schaefer RS, Wiggins GA. Multilevel rhythms in multimodal communication. Philos Trans R Soc Lond B Biol Sci 2021;376:20200334. PMID: 34420378. PMCID: PMC8380971. DOI: 10.1098/rstb.2020.0334.
Abstract
It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Wim Pouw: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Shannon Proksch: Cognitive and Information Sciences, University of California, Merced, CA, USA
- Linda Drijvers: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Marco Gamba: Department of Life Sciences and Systems Biology, University of Turin, Turin, Italy
- Judith Holler: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Christopher Kello: Cognitive and Information Sciences, University of California, Merced, CA, USA
- Rebecca S. Schaefer: Health, Medical and Neuropsychology Unit, Institute for Psychology, Leiden University, Leiden, The Netherlands; Academy for Creative and Performing Arts, Leiden University, Leiden, The Netherlands
- Geraint A. Wiggins: Vrije Universiteit Brussel, Brussels, Belgium; Queen Mary University of London, London, UK
14
Murgiano M, Motamedi Y, Vigliocco G. Situating Language in the Real-World: Authors' Reply to Commentaries. J Cogn 2021; 4:44. [PMID: 34514315 PMCID: PMC8396114 DOI: 10.5334/joc.181]
15
Murgiano M, Motamedi Y, Vigliocco G. Situating Language in the Real-World: The Role of Multimodal Iconicity and Indexicality. J Cogn 2021; 4:38. [PMID: 34514309 PMCID: PMC8396123 DOI: 10.5334/joc.113]
Abstract
In the last decade, a growing body of work has convincingly demonstrated that languages embed a certain degree of non-arbitrariness (mostly in the form of iconicity, namely the presence of imagistic links between linguistic form and meaning). Most of this previous work has been limited to assessing the degree (and role) of non-arbitrariness in the speech (for spoken languages) or manual components of signs (for sign languages). When approached in this way, non-arbitrariness is acknowledged but still considered to have little presence and purpose, showing a diachronic movement towards more arbitrary forms. However, this perspective is limited as it does not take into account the situated nature of language use in face-to-face interactions, where language comprises categorical components of speech and signs, but also multimodal cues such as prosody, gestures, eye gaze etc. We review work concerning the role of context-dependent iconic and indexical cues in language acquisition and processing to demonstrate the pervasiveness of non-arbitrary multimodal cues in language use and we discuss their function. We then move to argue that the online omnipresence of multimodal non-arbitrary cues supports children and adults in dynamically developing situational models.
16
Nota N, Trujillo JP, Holler J. Facial Signals and Social Actions in Multimodal Face-to-Face Interaction. Brain Sci 2021; 11:1017. [PMID: 34439636 PMCID: PMC8392358 DOI: 10.3390/brainsci11081017]
Abstract
In a conversation, recognising the speaker's social action (e.g., a request) early may help the potential following speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.
Affiliation(s)
- Naomi Nota
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- James P. Trujillo
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Judith Holler
- Donders Institute for Brain, Cognition, and Behaviour, 6525 AJ Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
17
Araya B, Pena P, Leiner M. Developing a health education comic book: the advantages of learning the behaviours of a target audience. J Vis Commun Med 2021; 44:87-96. [PMID: 34044731 DOI: 10.1080/17453054.2021.1924639]
Abstract
The objective of this study was to determine the positive and negative coping mechanisms practiced by parents of paediatric inpatients and outpatients in order to prepare a health educational comic aimed at improving these response mechanisms. Data were collected from parents visiting general paediatric outpatient clinics or hospitalisation units, at a children's hospital in a metropolitan city. Data analysis was based on 258 completed surveys received from 308 (83.77%) respondents. Each parent completed a survey that included the Brief-COPE-Coping Orientation to Problems Experienced questionnaire that encompassed 14 subscales of positive and negative coping mechanisms. Parents used both positive and negative coping mechanisms in outpatient clinics and hospitalisation units. Scores involving negative coping mechanisms were increased and associated with the severity of a child's reason for visiting a children's hospital. The lowest scores were reported by parents whose children were seen at outpatient clinics, whereas the highest scores were reported by parents whose children were treated in critical care units. Learning about parents' coping mechanisms provided key information for preparing an electronic health education comic book (electronically distributed free of charge) and can be used to teach and promote the reinforcement of positive rather than negative coping mechanisms.
Affiliation(s)
- Benjamin Araya
- Department of Pediatrics, Texas Tech University Health Science Center, El Paso, TX, USA
- Patricia Pena
- Department of Pediatrics, Texas Tech University Health Science Center, El Paso, TX, USA
- School of Medicine, Seattle Children's Hospital, University of Washington, Seattle, WA, USA
- Marie Leiner
- Department of Pediatrics, Texas Tech University Health Science Center, El Paso, TX, USA
18
Romero-Diaz C, Xu C, Campos SM, Herrmann MA, Kusumi K, Hews DK, Martins EP. Brain transcriptomic responses of Yarrow's spiny lizard, Sceloporus jarrovii, to conspecific visual or chemical signals. Genes Brain Behav 2021; 20:e12753. [PMID: 34036739 DOI: 10.1111/gbb.12753]
Abstract
Species with multimodal communication integrate information from social cues in different modalities into behavioral responses that are mediated by changes in gene expression in the brain. Differences in patterns of gene expression between signal modalities may shed light on the neuromolecular mechanisms underlying multisensory processing. Here, we use RNA-Seq to analyze brain transcriptome responses to either chemical or visual social signals in a territorial lizard with multimodal communication. Using an intruder challenge paradigm, we exposed 18 wild-caught, adult, male Sceloporus jarrovii to either male conspecific scents (femoral gland secretions placed on a small pebble), the species-specific push-up display (a programmed robotic model), or a control (an unscented pebble). We conducted differential expression analysis with both a de novo S. jarrovii transcriptome assembly and the reference genome of a closely related species, Sceloporus undulatus. Despite some inter-individual variation, we found significant differences in gene expression in the brain across signal modalities and the control in both analyses. The most notable differences occurred between chemical and visual stimulus treatments, closely followed by visual stimulus versus the control. Altered expression profiles could explain documented aggression differences in the immediate behavioral response to conspecific signals from different sensory modalities. Shared differentially expressed genes between visually- or chemically-stimulated males are involved in neural activity and neurodevelopment and several other differentially expressed genes in stimulus-challenged males are involved in conserved signal-transduction pathways associated with the social stress response, aggression and the response to territory intruders across vertebrates.
Affiliation(s)
- Cindy Xu
- School of Life Sciences, Arizona State University, Tempe, Arizona, USA
- Stephanie M Campos
- Center for Behavioral Neuroscience, Neuroscience Institute, Georgia State University, Atlanta, Georgia, USA
- Morgan A Herrmann
- School of Life Sciences, Arizona State University, Tempe, Arizona, USA
- Kenro Kusumi
- School of Life Sciences, Arizona State University, Tempe, Arizona, USA
- Diana K Hews
- Department of Biology, Indiana State University, Terre Haute, Indiana, USA
- Emília P Martins
- School of Life Sciences, Arizona State University, Tempe, Arizona, USA
19
Taniguchi T, Horii T, Hinaut X, Spranger M, Mochihashi D, Nagai T. Editorial: Language and Robotics. Front Robot AI 2021; 8:674832. [PMID: 33912598 PMCID: PMC8072269 DOI: 10.3389/frobt.2021.674832]
Affiliation(s)
- Tadahiro Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
- Takato Horii
- Graduate School of Engineering Science, Osaka University, Suita, Japan
- Xavier Hinaut
- Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Île-de-France, France
- Daichi Mochihashi
- Department of Statistical Inference and Mathematics, The Institute of Statistical Mathematics, Tokyo, Japan
- Takayuki Nagai
- Graduate School of Engineering Science, Osaka University, Suita, Japan
20
Hinnell J, Parrill F. Corrigendum: Gesture Influences Resolution of Ambiguous Statements of Neutral and Moral Preferences. Front Psychol 2021; 12:664194. [PMID: 33746866 PMCID: PMC7977709 DOI: 10.3389/fpsyg.2021.664194]
Affiliation(s)
- Jennifer Hinnell
- Department of English Language and Literatures, The University of British Columbia, Vancouver, BC, Canada
- Fey Parrill
- Department of Cognitive Science, Case Western Reserve University, Cleveland, OH, United States
21
Arslan Aydin Ü, Kalkan S, Acartürk C. Speech Driven Gaze in a Face-to-Face Interaction. Front Neurorobot 2021; 15:598895. [PMID: 33746729 PMCID: PMC7970197 DOI: 10.3389/fnbot.2021.598895]
Abstract
Gaze and language are major pillars in multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is 2-fold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in a face-to-face interaction; and (ii) to propose a computational model for multimodal communication, which predicts gaze direction using high-level speech features. Twenty-eight pairs of participants participated in data collection. The experimental setting was a mock job interview. The eye movements were recorded for both participants. The speech data were annotated by ISO 24617-2 Standard for Dialogue Act Annotation, as well as manual tags based on previous social gaze studies. A comparative analysis was conducted by Convolutional Neural Network (CNN) models that employed specific architectures, namely, VGGNet and ResNet. The results showed that the frequency and the duration of gaze differ significantly depending on the role of participant. Moreover, the ResNet models achieve higher than 70% accuracy in predicting gaze direction.
Affiliation(s)
- Ülkü Arslan Aydin
- Cognitive Science Department, Middle East Technical University, Ankara, Turkey
- Sinan Kalkan
- Computer Engineering Department, Middle East Technical University, Ankara, Turkey
- Cengiz Acartürk
- Cognitive Science Department, Middle East Technical University, Ankara, Turkey
- Cyber Security Department, Middle East Technical University, Ankara, Turkey
22
Abstract
Beat gestures-spontaneously produced biphasic movements of the hand-are among the most frequently encountered co-speech gestures in human communication. They are closely temporally aligned to the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g. distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.
Affiliation(s)
- Hans Rutger Bosker
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- David Peeters
- Department of Communication and Cognition, TiCC, Tilburg University, Tilburg, The Netherlands
23
Hinnell J, Parrill F. Gesture Influences Resolution of Ambiguous Statements of Neutral and Moral Preferences. Front Psychol 2020; 11:587129. [PMID: 33362652 PMCID: PMC7758198 DOI: 10.3389/fpsyg.2020.587129]
Abstract
When faced with an ambiguous pronoun, comprehenders use both multimodal cues (e.g., gestures) and linguistic cues to identify the antecedent. While research has shown that gestures facilitate language comprehension, improve reference tracking, and influence the interpretation of ambiguous pronouns, literature on reference resolution suggests that a wide set of linguistic constraints influences the successful resolution of ambiguous pronouns and that linguistic cues are more powerful than some multimodal cues. To address the outstanding question of the importance of gesture as a cue in reference resolution relative to cues in the speech signal, we have previously investigated the comprehension of contrastive gestures that indexed abstract referents – in this case expressions of personal preference – and found that such gestures did facilitate the resolution of ambiguous statements of preference. In this study, we extend this work to investigate whether the effect of gesture on resolution is diminished when the gesture indexes a statement that is less likely to be interpreted as the correct referent. Participants watched videos in which a speaker contrasted two ideas that were either neutral (e.g., whether to take the train to a ballgame or drive) or moral (e.g., human cloning is (un)acceptable). A gesture to the left or right side co-occurred with speech expressing each position. In gesture-disambiguating trials, an ambiguous phrase (e.g., I agree with that, where that is ambiguous) was accompanied by a gesture to one side or the other. In gesture non-disambiguating trials, no third gesture occurred with the ambiguous phrase. Participants were more likely to choose the idea accompanied by gesture as the stimulus speaker's preference. We found no effect of scenario type. Regardless of whether the linguistic cue expressed a view that was morally charged or neutral, observers used gesture to understand the speaker's opinion. This finding contributes to our understanding of the strength and range of cues, both linguistic and multimodal, that listeners use to resolve ambiguous references.
Affiliation(s)
- Jennifer Hinnell
- Department of English Language and Literatures, The University of British Columbia, Vancouver, BC, Canada
- Fey Parrill
- Department of Cognitive Science, Case Western Reserve University, Cleveland, OH, United States
24
Marentette P, Furman R, Suvanto ME, Nicoladis E. Pantomime (Not Silent Gesture) in Multimodal Communication: Evidence From Children's Narratives. Front Psychol 2020; 11:575952. [PMID: 33329222 PMCID: PMC7734346 DOI: 10.3389/fpsyg.2020.575952]
Abstract
Pantomime has long been considered distinct from co-speech gesture. It has therefore been argued that pantomime cannot be part of gesture-speech integration. We examine pantomime as distinct from silent gesture, focusing on non-co-speech gestures that occur in the midst of children’s spoken narratives. We propose that gestures with features of pantomime are an infrequent but meaningful component of a multimodal communicative strategy. We examined spontaneous non-co-speech representational gesture production in the narratives of 30 monolingual English-speaking children between the ages of 8- and 11-years. We compared the use of co-speech and non-co-speech gestures in both autobiographical and fictional narratives and examined viewpoint and the use of non-manual articulators, as well as the length of responses and narrative quality. The use of non-co-speech gestures was associated with longer narratives of equal or higher quality than those using only co-speech gestures. Non-co-speech gestures were most likely to adopt character-viewpoint and use non-manual articulators. The present study supports a deeper understanding of the term pantomime and its multimodal use by children in the integration of speech and gesture.
Affiliation(s)
- Reyhan Furman
- School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Marcus E Suvanto
- Center for Studies in Behavioral Neuroscience, Concordia University, Montréal, QC, Canada
- Elena Nicoladis
- Department of Psychology, University of Alberta, Edmonton, AB, Canada
25
Murillo E, Casla M. Multimodal representational gestures in the transition to multi-word productions. Infancy 2020; 26:104-122. [PMID: 33230946 DOI: 10.1111/infa.12375]
Abstract
The aim of this study was to analyze the use of representational gestures from a multimodal point of view in the transition from one-word to multi-word constructions. Twenty-one Spanish-speaking children were observed longitudinally at 18, 21, 24, and 30 months of age. We analyzed the production of deictic, symbolic, and conventional gestures and their coordination with different verbal elements. Moreover, we explored the relationship between gestural multimodal and unimodal productions and independent measures of language development. Results showed that gesture production remains stable in the period studied. Whereas deictic gestures are frequent and mostly multimodal from the beginning, conventional gestures are rare and mainly unimodal. Symbolic gestures are initially unimodal, but between 24 and 30 months of age, this pattern reverses, with more multimodal symbolic gestures than unimodal. In addition, the frequency of multimodal representational gestures at specific ages seems to be positively related to independent measures of vocabulary and morphosyntax development. By contrast, the production of unimodal representational gestures appears negatively related to these measures. Our results suggest that multimodal representational gestures could have a facilitating role in the process of learning to combine meanings for communicative goals.
Affiliation(s)
- Eva Murillo
- Department of General Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Marta Casla
- Department of Developmental Psychology and Education, Universidad Autónoma de Madrid, Madrid, Spain
26
Avdi E, Amiran K, Baradon T, Broughton C, Sleed M, Spencer R, Shai D. Studying the process of psychoanalytic parent-infant psychotherapy: Embodied and discursive aspects. Infant Ment Health J 2020; 41:589-602. [PMID: 32881006 DOI: 10.1002/imhj.21888]
Abstract
This paper presents findings from an intensive, mixed methods case study of one session of psychoanalytic parent-infant psychotherapy (PPIP) addressing early relational trauma, and aims to shed light on the multimodal interactive processes that take place in the moment-to-moment exchanges comprising the therapeutic encounter. Different research methods were used on video material from PPIP sessions, including microanalysis of adult-infant interactions, discourse analysis of talk, and coding systems developed to study parent-infant interaction. These different perspectives were brought together with the clinical narrative to illuminate the complex, dynamic processes of parent-infant-therapist interaction. More specifically, the detailed analysis of one interactive episode revealed brief behavioral manifestations of fearful and disoriented states of mind, reflecting dysregulated interaction between mother and infant, which also powerfully affected the therapist. The processes through which the therapist gradually resolves this rupture are also described in detail. Through this pilot study, we were able to show that it is possible to systematically study the process of PPIP. The study contributes to the growing psychotherapy research literature that takes into account both the verbal domain and implicit, interactional processes in therapeutic practice, and underscores the therapist's comprehensive engagement in the therapeutic process.
Affiliation(s)
- Evrinomy Avdi
- Faculty of Philosophy, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Child Attachment and Psychological Therapies Research Unit, Anna Freud National Centre for Children & Families, London, UK
- Keren Amiran
- Child Attachment and Psychological Therapies Research Unit, Anna Freud National Centre for Children & Families, London, UK
- Tessa Baradon
- Child Attachment and Psychological Therapies Research Unit, Anna Freud National Centre for Children & Families, London, UK
- School of Human and Community Development, University of the Witwatersrand, Johannesburg, South Africa
- Carol Broughton
- Child Attachment and Psychological Therapies Research Unit, Anna Freud National Centre for Children & Families, London, UK
- Michelle Sleed
- Child Attachment and Psychological Therapies Research Unit, Anna Freud National Centre for Children & Families, London, UK
- Rose Spencer
- Coombe Wood Mother and Baby Unit, Central and North West London NHS, London, UK
- Dana Shai
- School of Behavioral Science, The Academic College of Tel Aviv-Yaffo, Tel-Aviv, Israel
27
de Pablo I, Murillo E, Romero A. The effect of infant-directed speech on early multimodal communicative production in Spanish and Basque. J Child Lang 2020; 47:457-471. [PMID: 31426871 DOI: 10.1017/s0305000919000412]
Abstract
We analyzed the effect of infant-directed speech (IDS) on multimodal communicative production of children at the beginning of the second year of life in two different languages: Spanish and Basque. Twelve Spanish and twelve Basque children aged between 12 and 15 months observed two versions of an audiovisual story: one version was narrated with IDS and the other with adult-directed speech (ADS). We analyzed the use of gaze and the communicative behaviors produced by children. The time spent looking at the story increases in the IDS condition regardless of the language of the narration. Children produced more multimodal communicative behaviors while watching the IDS version both in Spanish and in Basque. These results suggest that IDS increases attention and social engagement promoting joint attention episodes.
Affiliation(s)
- Irati de Pablo
- Departamento de Didáctica de la Lengua y la Literatura, Universidad del País Vasco, Spain
- Eva Murillo
- Departamento de Psicología Básica, Universidad Autónoma de Madrid, Spain
- Asier Romero
- Departamento de Didáctica de la Lengua y la Literatura, Universidad del País Vasco, Spain
28
Kozak EC, Uetz GW. Male courtship signal modality and female mate preference in the wolf spider Schizocosa ocreata: results of digital multimodal playback studies. Curr Zool 2019; 65:705-711. [PMID: 31857817 PMCID: PMC6911845 DOI: 10.1093/cz/zoz025]
Abstract
Females must be able to perceive and assess male signals, especially when they occur simultaneously with those of other males. Previous studies show female Schizocosa ocreata wolf spiders display receptivity to isolated visual or vibratory courtship signals, but increased receptivity to multimodal courtship. It is unknown whether this is true when females are presented with a choice between simultaneous multimodal and isolated unimodal male courtship. We used digital playback to present females with a choice simulating simultaneous male courtship in different sensory modes without variation in information content: 1) isolated unimodal visual versus vibratory signals; 2) multimodal versus vibratory signals; and 3) multimodal versus visual signals. When choosing between isolated unimodal signals (visual or vibratory), there were no significant differences in orientation latency and number of orientations, approaches or receptive displays directed to either signal. When given a choice between multimodal versus vibratory-only male courtship signals, females were more likely to orient to the multimodal stimulus, and directed significantly more orients, approaches and receptivity behaviors to the multimodal signal. When presented with a choice between multimodal and visual-only signals, there were significantly more orients and approaches to the multimodal signal, but no significant difference in female receptivity. Results suggest that signal modes are redundant and equivalent in terms of qualitative responses, but when combined, multimodal signals quantitatively enhance detection and/or reception. This study confirms the value of testing preference behavior using a choice paradigm, as female preferences may depend on the context (e.g., environmental context and social context) in which they are presented with male signals.
Affiliation(s)
- Elizabeth C Kozak
- Department of Biological Sciences, University of Cincinnati, Cincinnati, OH, USA
- George W Uetz
- Department of Biological Sciences, University of Cincinnati, Cincinnati, OH, USA
29
Dellinger M, Zhang W, Bell AM, Hellmann JK. Do male sticklebacks use visual and/or olfactory cues to assess a potential mate's history with predation risk? Anim Behav 2018; 145:151-159. [PMID: 31666748 DOI: 10.1016/j.anbehav.2018.09.015]
Abstract
Differential allocation occurs when individuals alter their reproductive investment based on their mate's traits. A previous study showed that male threespine sticklebacks, Gasterosteus aculeatus, reduced courtship towards females that had previously been exposed to predation risk compared to unexposed females. This suggests that males can detect a female's previous history with predation risk, but the mechanisms by which males assess a female's history are unknown. To determine whether males use chemical and/or visual cues to detect a female's previous history with predation risk, we compared rates of courtship behaviour in the presence of visual and/or olfactory cues of predator-exposed females versus unexposed females in a 2×2 factorial design. We found that males differentiate between unexposed and predator-exposed females using visual cues: regardless of the olfactory cues present, males performed fewer zigzags (a conspicuous courtship behaviour) when they were exposed to visual cues from predator-exposed females compared to unexposed females. However, males' response to olfactory cues changed over the course of the experiment: initially, males performed fewer courtship displays when they received olfactory cues of predator-exposed females compared to unexposed females, but they did not discriminate between cues from predator-exposed and unexposed females later in the experiment. A follow-up experiment found that levels of cortisol released by both predator-exposed and unexposed females decreased over the course of the experiment. If cortisol is linked to or correlated with olfactory cues of predation risk that are released by females, then this suggests that the olfactory cues became less potent over the course of the experiment. Altogether, these results suggest that males use both visual and olfactory cues to differentiate between unexposed and predator-exposed females, which may help ensure reliable communication in a noisy environment.
Affiliation(s)
- Marion Dellinger
- COMUE Université Bretagne Loire, Oniris, Nantes-Atlantic College of Veterinary Medicine and Food Sciences
- Weiran Zhang
- Department of Animal Biology, School of Integrative Biology, University of Illinois at Urbana-Champaign
- Alison M Bell
- Department of Animal Biology, School of Integrative Biology, University of Illinois at Urbana-Champaign
- Carl R. Woese Institute for Genomic Biology, University of Illinois at Urbana-Champaign
- Neuroscience Program, University of Illinois at Urbana-Champaign
- Program in Ecology, Evolution and Conservation, University of Illinois at Urbana-Champaign
- Jennifer K Hellmann
- Department of Animal Biology, School of Integrative Biology, University of Illinois at Urbana-Champaign
30
Jhang Y, Franklin B, Ramsdell-Hudock HL, Oller DK. Differing Roles of the Face and Voice in Early Human Communication: Roots of Language in Multimodal Expression. Front Commun (Lausanne) 2017; 2:10. [PMID: 29423398] [PMCID: PMC5798486] [DOI: 10.3389/fcomm.2017.00010]
Abstract
Seeking roots of language, we probed infant facial expressions and vocalizations. Both have roles in language, but the voice plays an especially flexible role, expressing a variety of functions and affect conditions with the same vocal categories: a word can be produced with many different affective flavors. This requirement of language is seen in very early infant vocalizations. We examined the extent to which affect is transmitted by early vocal categories termed "protophones" (squeals, vowel-like sounds, and growls) and by their co-occurring facial expressions, and similarly the extent to which vocal type is transmitted by the voice and co-occurring facial expressions. Our coder agreement data suggest infant affect during protophones was most reliably transmitted by the face (judged in video-only), while vocal type was transmitted most reliably by the voice (judged in audio-only). Voice alone transmitted negative affect more reliably than neutral or positive affect, suggesting infant protophones may be used especially to call for attention when the infant is in distress. By contrast, the face alone provided no significant information about protophone categories; indeed, coders in the video-only condition could scarcely recognize the difference between silence and voice when coding protophones. The results suggest that partial decoupling of communicative roles for face and voice occurs even in the first months of life. Affect in infancy appears to be transmitted in a way that flexibly interweaves audio and video aspects, as in mature language.
Affiliation(s)
- Yuna Jhang
- Department of Speech Language Pathology and Audiology, Chung Shan University, Taichung, Taiwan
- Beau Franklin
- The Institute for Research and Rehabilitation, Memorial Hermann Healthcare, Houston, TX, United States
- Heather L. Ramsdell-Hudock
- Department of Communication Sciences and Disorders, Idaho State University, Pocatello, ID, United States
- D. Kimbrough Oller
- School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria
- Institute for Intelligent Systems, The University of Memphis, Memphis, TN, United States
31
Elias-Costa AJ, Montesinos R, Grant T, Faivovich J. The vocal sac of Hylodidae (Amphibia, Anura): Phylogenetic and functional implications of a unique morphology. J Morphol 2017; 278:1506-1516. [PMID: 28744917] [DOI: 10.1002/jmor.20727]
Abstract
Anuran vocal sacs are elastic chambers that recycle exhaled air during vocalizations and are present in males of most species of frogs. Most knowledge of the diversity of vocal sacs relates to external morphology; detailed information on internal anatomy is available for few groups of frogs. Frogs of the family Hylodidae, which is endemic to the Atlantic Forest of Brazil and adjacent Argentina and Paraguay, have three patterns of vocal sac morphology: single, subgular; paired, lateral; and absent. The submandibular musculature and structure of the vocal sac mucosa (the internal wall of the vocal sac) of exemplar species of this family and relatives were studied. In contrast to previous accounts, we found that all species of Crossodactylus and Hylodes possess paired, lateral vocal sacs, with the internal mucosa of each sac being separate from the contralateral one. Unlike all other frogs for which data are available, the mucosa of the vocal sacs in these genera is not supported externally by the mm. intermandibularis and interhyoideus. Rather, the vocal sac mucosa projects through the musculature and is free in the submandibular lymphatic sac. The presence of paired, lateral vocal sacs, the internal separation of the sac mucosae, and their projection through the m. interhyoideus are synapomorphies of the family. Furthermore, the specific configuration of the m. interhyoideus allows asymmetric inflation of paired vocal sacs, a feature only reported in species of these diurnal, stream-dwelling frogs.
Affiliation(s)
- Agustin J Elias-Costa
- División Herpetología, Museo Argentino de Ciencias Naturales "Bernardino Rivadavia"-CONICET, Av. Angel Gallardo 470, Buenos Aires, C1405DJR, Argentina
- Rachel Montesinos
- Departamento de Zoologia, Instituto de Biociências, Universidade de São Paulo, Rua do Matão, Travessa 14, 321, Cidade Universitária, CEP 05508-090, São Paulo, SP, Brazil
- Taran Grant
- Departamento de Zoologia, Instituto de Biociências, Universidade de São Paulo, Rua do Matão, Travessa 14, 321, Cidade Universitária, CEP 05508-090, São Paulo, SP, Brazil
- Museu de Zoologia, Universidade de São Paulo, Av. Nazaré, 481, Ipiranga, CEP 04263-000, São Paulo, SP, Brazil
- Julián Faivovich
- División Herpetología, Museo Argentino de Ciencias Naturales "Bernardino Rivadavia"-CONICET, Av. Angel Gallardo 470, Buenos Aires, C1405DJR, Argentina
32
Cartwright E, Clegg AL. Peaches for Lunch: Creating and Using Visual Variables. Med Anthropol 2017; 36:519-532. [PMID: 28448161] [DOI: 10.1080/01459740.2017.1321643]
Abstract
In this article, I describe the process of systematically including nonverbal data in medical anthropology research. I demonstrate the process of visualizing and coding videotaped moments of life and show how we can analyze what is being done along with what is being said. I ground my discussion in toddler language socialization and then expand my observations to the realm of language pathologies. Aphasia from strokes, speech difficulties in neurologically based illnesses like Lou Gehrig's disease, and the variety of communication challenges that face those on the autism spectrum can all be studied in interesting ways by including precise descriptions of nonverbal actions. I discuss the process of recording and coding the data with the software Observer XT 11.5 by Noldus. This method of collecting and analyzing video data can be used for many anthropological questions, in addition to those concerned with communication.
Affiliation(s)
- Adam LaVar Clegg
- Department of Anthropology, Idaho State University, Pocatello, Idaho, USA
33
Pitcher BJ, Briefer EF, Baciadonna L, McElligott AG. Cross-modal recognition of familiar conspecifics in goats. R Soc Open Sci 2017; 4:160346. [PMID: 28386412] [PMCID: PMC5367292] [DOI: 10.1098/rsos.160346]
Abstract
When identifying other individuals, animals may match current cues with stored information about that individual from the same sensory modality. Animals may also be able to combine current information with previously acquired information from other sensory modalities, indicating that they possess complex cognitive templates of individuals that are independent of modality. We investigated whether goats (Capra hircus) possess cross-modal representations (auditory-visual) of conspecifics. We presented subjects with recorded conspecific calls broadcast equidistant between two individuals, one of which was the caller. We found that, when presented with a stablemate and another herd member, goats looked towards the caller sooner and for longer than the non-caller, regardless of caller identity. By contrast, when choosing between two herd members, other than their stablemate, goats did not show a preference to look towards the caller. Goats show cross-modal recognition of close social partners, but not of less familiar herd members. Goats may employ inferential reasoning when identifying conspecifics, potentially facilitating individual identification based on incomplete information. Understanding the prevalence of cross-modal recognition and the degree to which different sensory modalities are integrated provides insight into how animals learn about other individuals, and the evolution of animal communication.
Affiliation(s)
- Benjamin J. Pitcher
- Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Department of Biological Sciences, Faculty of Science and Engineering, Macquarie University, Sydney, New South Wales 2109, Australia
- Elodie F. Briefer
- Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Institute of Agricultural Sciences, ETH Zürich, Universitätstrasse 2, 8092 Zurich, Switzerland
- Luigi Baciadonna
- Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Alan G. McElligott
- Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
34
Rhebergen F, Taylor RC, Ryan MJ, Page RA, Halfwerk W. Multimodal cues improve prey localization under complex environmental conditions. Proc Biol Sci 2015; 282:20151403. [PMID: 26336176] [DOI: 10.1098/rspb.2015.1403]
Abstract
Predators often eavesdrop on sexual displays of their prey. These displays can provide multimodal cues that aid predators, but the benefits in attending to them should depend on the environmental sensory conditions under which they forage. We assessed whether bats hunting for frogs use multimodal cues to locate their prey and whether their use varies with ambient conditions. We used a robotic set-up mimicking the sexual display of a male túngara frog (Physalaemus pustulosus) to test prey assessment by fringe-lipped bats (Trachops cirrhosus). These predatory bats primarily use sound of the frog's call to find their prey, but the bats also use echolocation cues returning from the frog's dynamically moving vocal sac. In the first experiment, we show that multimodal cues affect attack behaviour: bats made narrower flank attack angles on multimodal trials compared with unimodal trials during which they could only rely on the sound of the frog. In the second experiment, we explored the bat's use of prey cues in an acoustically more complex environment. Túngara frogs often form mixed-species choruses with other frogs, including the hourglass frog (Dendropsophus ebraccatus). Using a multi-speaker set-up, we tested bat approaches and attacks on the robofrog under three different levels of acoustic complexity: no calling D. ebraccatus males, two calling D. ebraccatus males and five D. ebraccatus males. We found that bats are more directional in their approach to the robofrog when more D. ebraccatus males were calling. Thus, bats seemed to benefit more from multimodal cues when confronted with increased levels of acoustic complexity in their foraging environments. Our data have important consequences for our understanding of the evolution of multimodal sexual displays as they reveal how environmental conditions can alter the natural selection pressures acting on them.
Affiliation(s)
- F Rhebergen
- Behavioral Biology, Institute of Biology (IBL), Leiden University, PO Box 9516, Leiden 2300 RA, The Netherlands
- R C Taylor
- Department of Biology, Salisbury University, Salisbury, MD 21801, USA
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- M J Ryan
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Department of Integrative Biology, University of Texas, Austin, TX 78712, USA
- R A Page
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- W Halfwerk
- Smithsonian Tropical Research Institute, Apartado 0843-03092, Balboa, Ancón, Republic of Panama
- Department of Integrative Biology, University of Texas, Austin, TX 78712, USA
35
Abstract
The world is a noisy place, and animals have evolved a myriad of strategies to communicate in it. Animal communication signals are, however, often multimodal; their components can be processed by multiple sensory systems, and noise can thus affect signal components across different modalities. We studied the effect of environmental noise on multimodal communication in the túngara frog (Physalaemus pustulosus). Males communicate with rivals using airborne sounds combined with call-induced water ripples. We tested males under control as well as noisy conditions in which we mimicked rain- and wind-induced vibrations on the water surface. Males responded more strongly to a multimodal playback in which sound and ripples were combined, compared to a unimodal sound-only playback, but only in the absence of rain and wind. Under windy conditions, males decreased their response to the multimodal playback, suggesting that wind noise interferes with the detection of rival ripples. Under rainy conditions, males increased their response, irrespective of signal playback, suggesting that different noise sources can have different impacts on communication. Our findings show that noise in an additional sensory channel can affect multimodal signal perception and thereby drive signal evolution, but not always in the expected direction.
36
Depowski N, Abaya H, Oghalai J, Bortfeld H. Modality use in joint attention between hearing parents and deaf children. Front Psychol 2015; 6:1556. [PMID: 26528214] [PMCID: PMC4600903] [DOI: 10.3389/fpsyg.2015.01556]
Abstract
The present study examined differences in modality use during episodes of joint attention between hearing parent-hearing child dyads and hearing parent-deaf child dyads. Hearing children were age-matched to deaf children. Dyads were video recorded in a free play session with analyses focused on uni- and multimodality use during joint attention episodes. Results revealed that adults in hearing parent-deaf child dyads spent a significantly greater proportion of time interacting with their children using multiple communicative modalities than adults in hearing parent-hearing child dyads, who tended to use the auditory modality (e.g., oral language) most often. While these findings demonstrate that hearing parents accommodate their children's hearing status, we observed greater overall time spent in joint attention in hearing parent-hearing child dyads than hearing parent-deaf child dyads. Our results point to important avenues for future research on how parents can better accommodate their child's hearing status through the use of multimodal communication strategies.
Affiliation(s)
- Nicole Depowski
- Department of Psychology, University of Connecticut, Storrs, CT, USA
- Homer Abaya
- Head and Neck Surgery, Department of Otolaryngology, Stanford University School of Medicine, Stanford, CA, USA
- John Oghalai
- Head and Neck Surgery, Department of Otolaryngology, Stanford University School of Medicine, Stanford, CA, USA
- Heather Bortfeld
- Psychological Sciences, University of California, Merced, Merced, CA, USA
37
Abstract
A fundamental characteristic of human language is multimodality. In other words, humans use multiple signaling channels concurrently when communicating with one another. For example, people frequently produce manual gestures while speaking, and the words a person perceives are impacted by visual information. For this study, we hypothesized that similar to the way that humans regularly couple their spoken utterances with gestures and facial expressions, chimpanzees regularly produce vocalizations in conjunction with other communicative signals. To test this hypothesis, data were collected from 101 captive chimpanzees living in mixed-sex social groupings of seven to twelve individuals. A total of 2,869 vocal events were collected. The data indicate that approximately 50% of the vocal events were produced in conjunction with another communicative modality. In addition, approximately 68% were directed to a specific individual, and these directed vocalizations were more likely to include a signal from another communicative modality than were vocalizations that were not directed to a specific individual. These results suggest that, like humans, chimpanzees often pair their vocalizations with signals from other communicative modalities. In addition, chimpanzees appear to use their communicative signals strategically to meet specific socio-communicative ends, providing support for the growing literature that indicates that at least some chimpanzee vocal signaling is intentional.
Affiliation(s)
- Jared P Taglialatela
- Department of Ecology, Evolution, and Organismal Biology, Kennesaw State University, Kennesaw, Georgia
- Division of Developmental and Cognitive Neuroscience, Yerkes National Primate Research Center, Atlanta, Georgia
- Jamie L Russell
- Division of Developmental and Cognitive Neuroscience, Yerkes National Primate Research Center, Atlanta, Georgia
- Neuroscience Institute and Language Research Center, Georgia State University, Atlanta, Georgia
- Sarah M Pope
- Neuroscience Institute and Language Research Center, Georgia State University, Atlanta, Georgia
- Tamara Morton
- Department of Ecology, Evolution, and Organismal Biology, Kennesaw State University, Kennesaw, Georgia
- Stephanie Bogart
- Neuroscience Institute and Language Research Center, Georgia State University, Atlanta, Georgia
- Lisa A Reamer
- Department of Veterinary Sciences, Michale E. Keeling Center for Comparative Medicine and Research, The University of Texas MD Anderson Cancer Center, Bastrop, Texas
- Steven J Schapiro
- Department of Veterinary Sciences, Michale E. Keeling Center for Comparative Medicine and Research, The University of Texas MD Anderson Cancer Center, Bastrop, Texas
- William D Hopkins
- Division of Developmental and Cognitive Neuroscience, Yerkes National Primate Research Center, Atlanta, Georgia
- Neuroscience Institute and Language Research Center, Georgia State University, Atlanta, Georgia
- Department of Veterinary Sciences, Michale E. Keeling Center for Comparative Medicine and Research, The University of Texas MD Anderson Cancer Center, Bastrop, Texas
38
Abstract
Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serves to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals' efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals.
Affiliation(s)
- Marc Mehu
- Department of Psychology, Webster Vienna Private University, Vienna, Austria
| |
Collapse
|
39
|
Waterhouse E, Watts R, Bläsing BE. Doing Duo - a case study of entrainment in William Forsythe's choreography "Duo". Front Hum Neurosci 2014; 8:812. [PMID: 25374522] [PMCID: PMC4204438] [DOI: 10.3389/fnhum.2014.00812]
Abstract
Entrainment theory focuses on processes in which interacting (i.e., coupled) rhythmic systems stabilize, producing synchronization in the ideal sense, and forms of phase related rhythmic coordination in complex cases. In human action, entrainment involves spatiotemporal and social aspects, characterizing the meaningful activities of music, dance, and communication. How can the phenomenon of human entrainment be meaningfully studied in complex situations such as dance? We present an in-progress case study of entrainment in William Forsythe's choreography Duo, a duet in which coordinated rhythmic activity is achieved without an external musical beat and without touch-based interaction. Using concepts of entrainment from different disciplines as well as insight from Duo performer Riley Watts, we question definitions of entrainment in the context of dance. The functions of chorusing, turn-taking, complementary action, cues, and alignments are discussed and linked to supporting annotated video material. While Duo challenges the definition of entrainment in dance as coordinated response to an external musical or rhythmic signal, it supports the definition of entrainment as coordinated interplay of motion and sound production by active agents (i.e., dancers) in the field. Agreeing that human entrainment should be studied on multiple levels, we suggest that entrainment between the dancers in Duo is elastic in time and propose how to test this hypothesis empirically. We do not claim that our proposed model of elasticity is applicable to all forms of human entrainment nor to all examples of entrainment in dance. Rather, we suggest studying higher order phase correction (the stabilizing tendency of entrainment) as a potential aspect to be incorporated into other models.
Affiliation(s)
- Bettina E Bläsing
- Faculty of Psychology and Sport Science, Neurocognition and Action - Research Group, Bielefeld University, Bielefeld, Germany
- Center of Excellence - Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany