1. Déaux EC, Piette T, Gaunet F, Legou T, Arnal L, Giraud AL. Dog-human vocal interactions match dogs' sensory-motor tuning. PLoS Biol 2024; 22:e3002789. PMID: 39352912; PMCID: PMC11444399; DOI: 10.1371/journal.pbio.3002789.
Abstract
Within species, vocal and auditory systems presumably coevolved to converge on a critical temporal acoustic structure that can be best produced and perceived. While dogs cannot produce articulated sounds, they respond to speech, raising the question of whether this heterospecific receptive ability could be shaped by exposure to speech or remains bounded by their own sensorimotor capacity. Using acoustic analyses of dog vocalisations, we show that their main production rhythm is slower than the dominant (syllabic) speech rate, and that human dog-directed speech falls halfway in between. Comparative exploration of neural (electroencephalography) and behavioural responses to speech reveals that comprehension in dogs relies on a slower speech rhythm tracking (delta) than humans' (theta), even though dogs are equally sensitive to speech content and prosody. Thus, dog audio-motor tuning differs from humans', and we hypothesise that humans may adjust their speech rate to this shared temporal channel as a means to improve communication efficacy.
Affiliation(s)
- Eloïse C. Déaux
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Théophane Piette
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Florence Gaunet
- Aix-Marseille University and CNRS, Laboratoire de Psychologie Cognitive (UMR 7290), Marseille, France
- Thierry Legou
- Aix Marseille University and CNRS, Laboratoire Parole et Langage (UMR 6057), Aix-en-Provence, France
- Luc Arnal
- Université Paris Cité, Institut Pasteur, AP-HP, Inserm, Fondation Pour l’Audition, Institut de l’Audition, IHU reConnect, F-75012 Paris, France
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Université Paris Cité, Institut Pasteur, AP-HP, Inserm, Fondation Pour l’Audition, Institut de l’Audition, IHU reConnect, F-75012 Paris, France
2. McGrath N, Phillips CJC, Burman OHP, Dwyer CM, Henning J. Humans can identify reward-related call types of chickens. R Soc Open Sci 2024; 11:231284. PMID: 38179075; PMCID: PMC10762433; DOI: 10.1098/rsos.231284.
Abstract
Humans can decode emotional information from the vocalizations of animals. However, little is known about whether these interpretations relate to humans' ability to identify whether calls were made in a rewarded or non-rewarded context. We tested whether humans could identify calls made by chickens (Gallus gallus) in these contexts, and whether demographic factors or experience with chickens affected correct identification of context and ratings of the chickens' perceived positive and negative emotions (valence) and excitement (arousal). Participants (n = 194) listened to eight calls made when chickens were anticipating a reward, and eight calls made in non-rewarded contexts, and indicated whether the vocalizing chicken was experiencing pleasure/displeasure and high/low excitement, using visual analogue scales. Sixty-nine per cent of participants correctly assigned reward and non-reward calls to their respective categories. Participants performed better at categorizing reward-related calls, with 71% of reward calls classified correctly, compared with 67% of non-reward calls. Older people were less accurate in context identification. Older people's ratings of the excitement or arousal levels of reward-related calls were higher than younger people's ratings, while older people rated non-reward calls as representing higher positive emotions or pleasure (higher valence) than younger people did. Our study strengthens evidence that humans perceive emotions across different taxa, and that specific acoustic cues may embody a homologous signalling system among vertebrates. Importantly, humans could identify reward-related calls, and this ability could enhance the management of farmed chickens to improve their welfare.
Affiliation(s)
- Nicky McGrath
- School of Veterinary Sciences, University of Queensland, Gatton, Queensland 4343, Australia
- Clive J. C. Phillips
- Institute of Veterinary Medicine and Animal Science, Estonia University of Life Sciences, Tartu, Estonia
- Curtin University Sustainable Policy (CUSP) Institute, Kent Street, Bentley, Western Australia 6102, Australia
- Oliver H. P. Burman
- School of Life Sciences, University of Lincoln, Brayford Pool, Lincoln, Lincolnshire LN6 7TS, UK
- Cathy M. Dwyer
- Scotland's Rural College (SRUC), Peter Wilson Building, Kings Buildings, West Mains Road, Edinburgh EH9 3JG, UK
- Joerg Henning
- School of Veterinary Sciences, University of Queensland, Gatton, Queensland 4343, Australia
3. Ceravolo L, Debracque C, Pool E, Gruber T, Grandjean D. Frontal mechanisms underlying primate calls recognition by humans. Cereb Cortex Commun 2023; 4:tgad019. PMID: 38025828; PMCID: PMC10661312; DOI: 10.1093/texcom/tgad019.
Abstract
Introduction: The ability to process verbal language seems unique to humans and relies not only on semantics but also on other forms of communication, such as affective vocalizations, that we share with other primate species, particularly great apes (Hominidae).
Methods: To better understand these processes at the behavioural and brain levels, we asked human participants to categorize vocalizations of four primate species, including human, great apes (chimpanzee and bonobo), and monkey (rhesus macaque), during MRI acquisition.
Results: Classification was above chance level for all species except bonobo vocalizations. Imaging analyses were computed using a participant-specific, trial-by-trial fitted probability categorization value in a model-based style of data analysis. Model-based analyses revealed the involvement of the bilateral orbitofrontal cortex and the inferior frontal gyrus pars triangularis (IFGtri), which respectively correlated and anti-correlated with the fitted probability of accurate species classification. Further conjunction analyses revealed enhanced activity in a sub-area of the left IFGtri specifically for the accurate classification of chimpanzee calls compared to human voices.
Discussion: Our data, which are controlled for acoustic variability between species, therefore reveal distinct frontal mechanisms that shed light on how the human brain evolved to process vocal signals.
Affiliation(s)
- Leonardo Ceravolo
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40CH-1205, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
- Coralie Debracque
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40CH-1205, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
- Eva Pool
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
- E3 Lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40CH-1205, Geneva, Switzerland
- Thibaud Gruber
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40CH-1205, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
- eccePAN lab, Department of Psychology and Educational Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40CH-1205, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9CH-1202, Geneva, Switzerland
4. Thévenet J, Papet L, Coureaud G, Boyer N, Levréro F, Grimault N, Mathevon N. Crocodile perception of distress in hominid baby cries. Proc Biol Sci 2023; 290:20230201. PMID: 37554035; PMCID: PMC10410202; DOI: 10.1098/rspb.2023.0201.
Abstract
It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species.
Affiliation(s)
- Julie Thévenet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Léo Papet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Gérard Coureaud
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Boyer
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Grimault
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Institut universitaire de France, Paris, Île-de-France, France
5. Debracque C, Slocombe KE, Clay Z, Grandjean D, Gruber T. Humans recognize affective cues in primate vocalizations: acoustic and phylogenetic perspectives. Sci Rep 2023; 13:10900. PMID: 37407601; DOI: 10.1038/s41598-023-37558-3.
Abstract
Humans are adept at extracting affective information from the vocalizations of humans and other animals. However, the extent to which human recognition of vocal affective cues of other species is due to cross-taxa similarities in acoustic parameters or to the phylogenetic closeness between species is currently unclear. To address this, we first analyzed acoustic variation in 96 affective vocalizations, taken from agonistic and affiliative contexts, of humans and three other primates: rhesus macaques (Macaca mulatta), chimpanzees and bonobos (Pan troglodytes and Pan paniscus). Acoustic analyses revealed that agonistic chimpanzee and bonobo vocalizations were similarly distant from agonistic human voices, but chimpanzee affiliative vocalizations were significantly closer to human affiliative vocalizations than those of bonobos, indicating a potentially derived vocal evolution in the bonobo lineage. Second, we asked 68 human participants to categorize and discriminate vocalizations based on their presumed affective content. Results showed that participants reliably categorized human and chimpanzee vocalizations according to affective content, but not bonobo threat vocalizations nor any macaque vocalizations. Participants discriminated all species' calls above chance level except for threat calls by bonobos and macaques. Our results highlight the importance of explanations at both the phylogenetic and the acoustic-parameter level in cross-species affective perception, drawing a more complex picture of the origin of vocal emotions.
Affiliation(s)
- C Debracque
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland.
- K E Slocombe
- Department of Psychology, University of York, York, UK
- Z Clay
- Department of Psychology, Durham University, Durham, UK
- D Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland
- T Gruber
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland
6. Schwartz JW, Gouzoules H. Humans read emotional arousal in monkey vocalizations: evidence for evolutionary continuities in communication. PeerJ 2022; 10:e14471. PMID: 36518288; PMCID: PMC9744152; DOI: 10.7717/peerj.14471.
Abstract
Humans and other mammalian species communicate emotions in ways that reflect evolutionary conservation and continuity, an observation first made by Darwin. One approach to testing this hypothesis has been to assess the capacity to perceive the emotional content of the vocalizations of other species. Using a binary forced choice task, we tested perception of the emotional intensity represented in coos and screams of infant and juvenile female rhesus macaques (Macaca mulatta) by 113 human listeners without, and 12 listeners with, experience (as researchers or care technicians) with this species. Each stimulus pair contained one high- and one low-arousal vocalization, as measured at the time of recording by stress hormone levels for coos and the degree of intensity of aggression for screams. For coos as well as screams, both inexperienced and experienced participants accurately identified the high-arousal vocalization at significantly above-chance rates. Experience was associated with significantly greater accuracy with scream stimuli but not coo stimuli, and with a tendency to indicate screams as reflecting greater emotional intensity than coos. Neither measures of empathy, human emotion recognition, nor attitudes toward animal welfare showed any relationship with responses. Participants were sensitive to the fundamental frequency, noisiness, and duration of vocalizations; some of these tendencies likely facilitated accurate perceptions, perhaps due to evolutionary homologies in the physiology of arousal and vocal production between humans and macaques. Overall, our findings support a view of evolutionary continuity in emotional vocal communication. We discuss hypotheses about how distinctive dimensions of human nonverbal communication, like the expansion of scream usage across a range of contexts, might influence perceptions of other species' vocalizations.
Affiliation(s)
- Jay W. Schwartz
- Department of Psychology, Emory University, Atlanta, GA, United States
- Psychological Sciences Department, Western Oregon University, Monmouth, OR, United States
- Harold Gouzoules
- Department of Psychology, Emory University, Atlanta, GA, United States
7. Greenall JS, Cornu L, Maigrot AL, de la Torre MP, Briefer EF. Age, empathy, familiarity, domestication and call features enhance human perception of animal emotion expressions. R Soc Open Sci 2022; 9:221138. PMID: 36483756; PMCID: PMC9727503; DOI: 10.1098/rsos.221138.
Abstract
Vocalizations constitute an effective way to communicate both emotional arousal (bodily activation) and valence (negative/positive). There is strong evidence that vocal expression of emotional arousal converges among animal species, enabling cross-species perception of arousal, but it is not clear whether the same is true for emotional valence. Here, we conducted a large online survey to test the ability of humans to perceive emotions in the contact calls of several wild and domestic ungulates produced in situations of known emotional arousal (previously validated using either heart rate or locomotion) and valence (validated based on the context of production and behavioural indicators of emotions). Participants (1024 respondents from 48 countries) were able to rate, above chance levels, the arousal level of vocalizations of three of the six ungulate species and the valence of four of them. Percentages of correct ratings varied little across species for arousal (49-59%) but showed much more variation for valence (33-68%). Interestingly, several factors such as age, empathy, familiarity and specific features of the calls enhanced these scores. These findings suggest the existence of a shared emotional system across mammalian species, which is much more pronounced for arousal than for valence.
Affiliation(s)
- Jasmin Sowerby Greenall
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Lydia Cornu
- Behavioural Ecology Group, Section for Ecology & Evolution, Department of Biology, University of Copenhagen, 2100 Copenhagen Ø, Denmark
- Wildlife Ecology & Conservation Group, Wageningen University and Research, 6708PB Wageningen, The Netherlands
- Anne-Laure Maigrot
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Swiss National Stud Farm, Agroscope, Les Longs-Prés, 1580 Avenches, Switzerland
- Elodie F. Briefer
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Behavioural Ecology Group, Section for Ecology & Evolution, Department of Biology, University of Copenhagen, 2100 Copenhagen Ø, Denmark
8. Radespiel U, Scheumann M. Introduction to the Special Issue Celebrating the Life and Work of Elke Zimmermann. Int J Primatol 2022. DOI: 10.1007/s10764-022-00307-w.
9. Maigrot AL, Hillmann E, Briefer EF. Cross-species discrimination of vocal expression of emotional valence by Equidae and Suidae. BMC Biol 2022; 20:106. PMID: 35606806; PMCID: PMC9128205; DOI: 10.1186/s12915-022-01311-5.
Abstract
Background: Discrimination and perception of emotion expression regulate interactions between conspecifics and can lead to emotional contagion (state matching between producer and receiver) or to more complex forms of empathy (e.g., sympathetic concern). Empathy processes are enhanced by familiarity and physical similarity between partners. Since heterospecifics can also be familiar with each other to some extent, discrimination/perception of emotions and, as a result, emotional contagion could also occur between species.
Results: Here, we investigated whether four species belonging to two ungulate families, Equidae (domestic and Przewalski's horses) and Suidae (pigs and wild boars), can discriminate between vocalizations of opposite emotional valence (positive or negative) produced not only by conspecifics, but also by closely related heterospecifics and humans. To this end, we played back to individuals of these four species, which were all habituated to humans, vocalizations from a unique set of recordings for which the valence associated with vocal production was known. We found that domestic and Przewalski's horses, as well as pigs, but not wild boars, reacted more strongly when the first vocalization played was negative rather than positive, regardless of the species broadcast.
Conclusions: Domestic horses, Przewalski's horses and pigs thus seem to discriminate between positive and negative vocalizations produced not only by conspecifics, but also by heterospecifics, including humans. In addition, we found no difference between the strength of the four species' reactions to the calls of conspecifics and of closely related heterospecifics, which could be related to similarities in the general structure of their vocalizations. Overall, our results suggest that phylogeny and domestication have played a role in cross-species discrimination/perception of emotions.
Affiliation(s)
- Anne-Laure Maigrot
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Division of Animal Welfare, Veterinary Public Health Institute, Vetsuisse Faculty, University of Bern, Länggassstrasse 120, 3012 Bern, Switzerland
- Swiss National Stud Farm, Agroscope, Les Longs-Prés, 1580 Avenches, Switzerland
- Edna Hillmann
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Animal Husbandry and Ethology, Albrecht Daniel Thaer-Institut, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Philippstrasse 13, 10115 Berlin, Germany
- Elodie F Briefer
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Centre for Proper Housing of Ruminants and Pigs, Federal Food Safety and Veterinary Office, Agroscope, Tänikon, 8356 Ettenhausen, Switzerland
- Department of Biology, Behavioral Ecology Group, Section for Ecology & Evolution, University of Copenhagen, 2100 Copenhagen Ø, Denmark
10. Effect of pitch range on dogs' response to conspecific vs. heterospecific distress cries. Sci Rep 2021; 11:19723. PMID: 34611191; PMCID: PMC8492669; DOI: 10.1038/s41598-021-98967-w.
Abstract
Distress cries are emitted by many mammal species to elicit caregiving attention. Across taxa, these calls tend to share similar acoustic structures, but not necessarily frequency range, raising the question of their interspecific communicative potential. As domestic dogs are highly responsive to human emotional cues and experience stress when hearing human cries, we explore whether their responses to distress cries from human infants and puppies depend upon sharing conspecific frequency range or species-specific call characteristics. We recorded adult dogs' responses to distress cries from puppies and human babies, emitted from a loudspeaker in a basket. The frequency of the cries was presented in both their natural range and also shifted to match the other species. Crucially, regardless of species origin, calls falling into the dog call-frequency range elicited more attention. Thus, domestic dogs' responses depended strongly on the frequency range. Females responded both faster and more strongly than males, potentially reflecting asymmetries in parental care investment. Our results suggest that, despite domestication leading to an increased overall responsiveness to human cues, dogs still respond considerably less to calls in the natural human infant range than puppy range. Dogs appear to use a fast but inaccurate decision-making process to determine their response to distress-like vocalisations.
11. Root-Gutteridge H, Brown LP, Forman J, Korzeniowska AT, Simner J, Reby D. Using a new video rating tool to crowd-source analysis of behavioural reaction to stimuli. Anim Cogn 2021; 24:947-956. PMID: 33751273; PMCID: PMC8360862; DOI: 10.1007/s10071-021-01490-8.
Abstract
Quantifying the intensity of animals' reactions to stimuli is notoriously difficult, as classic unidimensional measures of responses such as latency or duration of looking can fail to capture the overall strength of behavioural responses. More holistic ratings can be useful but carry the inherent risks of subjective bias and lack of repeatability. Here, we explored whether crowdsourcing could be used to efficiently and reliably overcome these potential flaws. A total of 396 participants watched online videos of dogs reacting to auditory stimuli and provided 23,248 ratings of the strength of the dogs' responses from zero (default) to 100 using an online survey form. We found that raters achieved very high inter-rater reliability across multiple datasets (although their responses were affected by their sex, age, and attitude towards animals) and that as few as 10 raters could be used to achieve a reliable result. A linear mixed model applied to PCA components of behaviours found that the dogs' facial expressions and head orientation influenced the strength-of-behaviour ratings the most. Further linear mixed models showed that the strength-of-behaviour ratings were moderately correlated with the duration of dogs' reactions but not with dogs' reaction latency (from stimulus onset). This suggests that observers' ratings captured consistent dimensions of animals' responses that are not fully represented by more classic unidimensional metrics. Finally, we report that participants strongly enjoyed the experience overall. We therefore suggest that crowdsourcing can offer a useful, repeatable tool to assess behavioural intensity in experimental or observational studies where unidimensional coding may miss nuance, or where coding multiple dimensions may be too time-consuming.
Affiliation(s)
- Holly Root-Gutteridge
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK.
- School of Life Sciences, Joseph Banks Laboratories, University of Lincoln, Beevor Street, Lincoln, LN6 7DL, UK.
- Louise P Brown
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK
- Jemma Forman
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK
- Anna T Korzeniowska
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK
- Julia Simner
- MULTISENSE Lab, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK
- David Reby
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, BN1 9RH, UK
- Equipe Neuro-Ethologie Sensorielle, ENES, CRNL, CNRS UMR5292, INSERM UMR_S 1028, University of Lyon, Saint-Etienne, France
12. Podlipniak P. The Role of Canalization and Plasticity in the Evolution of Musical Creativity. Front Neurosci 2021; 15:607887. PMID: 33796005; PMCID: PMC8007929; DOI: 10.3389/fnins.2021.607887.
Abstract
Creativity is defined as the ability to generate something new and valuable. From a biological point of view, this can be seen as an adaptation in response to environmental challenges. Although music is a highly diverse phenomenon, all people possess a set of abilities claimed to be the products of biological evolution, which allow us to produce and listen to music according to both universal and culture-specific rules. On the one hand, musical creativity is restricted by tacit rules that reflect the developmental interplay between genetic, epigenetic and cultural information. On the other hand, musical innovations seem to be desirable elements present in every musical culture, which suggests some biological importance. If our musical activity is driven by biological needs, then it is important to understand the function of musical creativity in satisfying those needs, and how human beings have become so creative in the domain of music. The aim of this paper is to propose that musical creativity has become an indispensable part of the gene-culture coevolution of our musicality. It is suggested that two main forces, canalization and plasticity, have been crucial in this process. Canalization is an evolutionary process in which phenotypes take relatively constant forms regardless of environmental and genetic perturbations. Plasticity is defined as the ability of a phenotype to generate an adaptive response to environmental challenges. It is proposed that human musicality is composed of evolutionary innovations generated by the gradual canalization of developmental pathways leading to musical behavior. Within this process, the unstable cultural environment serves as the selective pressure for musical creativity. It is hypothesized that the connections between cortical and subcortical areas, which constitute the cortico-subcortical circuits involved in music processing, are the products of canalization, whereas plasticity is achieved by means of neurological variability. This variability is present both in the enlargement of individual structures in response to practice (e.g., the planum temporale) and in the involvement of neurological structures that are not music-specific (e.g., the default mode network) in music processing.
Affiliation(s)
- Piotr Podlipniak
- Department of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
13
Prato-Previde E, Cannas S, Palestrini C, Ingraffia S, Battini M, Ludovico LA, Ntalampiras S, Presti G, Mattiello S. What's in a Meow? A Study on Human Classification and Interpretation of Domestic Cat Vocalizations. Animals (Basel) 2020; 10:2390. PMID: 33327613; PMCID: PMC7765146; DOI: 10.3390/ani10122390.
Abstract
Simple Summary: Cat–human communication is a core aspect of the cat–human relationship and affects domestic cats' welfare. Meows are cats' most common human-directed vocalizations and are used in different everyday contexts to convey emotional states. This work investigates adult humans' capacity to recognize meows emitted by cats in three contexts: waiting for food, isolation, and brushing. We also assessed whether participants' gender and level of empathy toward animals in general, and toward cats in particular, positively affect the recognition of cat meows. Participants completed an online questionnaire designed to assess their knowledge of cats and their empathy toward animals. In addition, they listened to cat meows recorded in different situations and tried to identify the context in which they were emitted and their emotional valence. Overall, we found that, although meowing is mainly a human-directed vocalization and should represent a useful tool for cats to communicate emotional states to their owners, humans are not good at extracting precise information from cats' vocalizations; they show a limited capacity for discrimination that rests mainly on their experience with cats and is influenced by gender and empathy toward them.
Abstract: Although the domestic cat (Felis catus) is probably the most widespread companion animal in the world and interacts with humans in a complex and multifaceted way, the human–cat relationship and reciprocal communication have received far less attention than, for example, the human–dog relationship. Only a limited number of studies have considered what people understand of cats' human-directed vocal signals during daily cat–owner interactions. The aim of the current study was to investigate to what extent adult humans recognize cat vocalizations, namely meows, emitted in three different contexts: waiting for food, isolation, and brushing. A second aim was to evaluate whether the level of human empathy toward animals and cats and the participant's gender would positively influence the recognition of cat vocalizations. Finally, some insights into which acoustic features are relevant to the main investigation are provided as a serendipitous result. Two hundred twenty-five adult participants completed an online questionnaire designed to assess their knowledge of cats and their empathy toward animals (Animal Empathy Scale). In addition, participants listened to six cat meows recorded in three different contexts and specified the context in which they were emitted and their emotional valence. Fewer than half of the participants were able to associate cats' vocalizations with the correct context of emission; the best recognized meow was that emitted while waiting for food. Female participants and cat owners showed a higher ability to correctly classify the vocalizations emitted during brushing and isolation. A high level of empathy toward cats was significantly associated with better recognition of meows emitted during isolation. Regarding emotional valence, cat vocalizations emitted during isolation were perceived as the most negative, whereas those emitted during brushing were perceived as the most positive. Overall, although meowing is mainly a human-directed vocalization and in principle represents a useful tool for cats to communicate emotional states to their owners, humans are not particularly able to extract precise information from cats' vocalizations and show a limited capacity for discrimination, based mainly on their experience with cats and influenced by empathy toward them.
Affiliation(s)
- Emanuela Prato-Previde
- Department of Pathophysiology and Transplantation, University of Milan, 20133 Milan, Italy
- Simona Cannas
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy
- Clara Palestrini
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy
- Sara Ingraffia
- Department of Veterinary Medicine, University of Milan, 20133 Milan, Italy
- Monica Battini
- Department of Agricultural and Environmental Science, University of Milan, 20133 Milan, Italy
- Luca Andrea Ludovico
- Department of Computer Science, University of Milan, 20133 Milan, Italy
- Stavros Ntalampiras
- Department of Computer Science, University of Milan, 20133 Milan, Italy
- Giorgio Presti
- Department of Computer Science, University of Milan, 20133 Milan, Italy
- Silvana Mattiello
- Department of Agricultural and Environmental Science, University of Milan, 20133 Milan, Italy
14
Donnier S, Kovács G, Oña LS, Bräuer J, Amici F. Experience has a limited effect on humans' ability to predict the outcome of social interactions in children, dogs and macaques. Sci Rep 2020; 10:21240. PMID: 33277580; PMCID: PMC7718882; DOI: 10.1038/s41598-020-78275-5.
Abstract
The ability to predict others' behaviour is a crucial mechanism that allows individuals to react faster and more appropriately. To date, several studies have investigated humans' ability to predict conspecifics' behaviour, but little is known about our ability to predict behaviour in other species. Here, we aimed to test humans' ability to predict social behaviour in dogs, macaques and humans, and to assess the roles played by experience and evolution in the emergence of this ability. For this purpose, we presented participants with short videoclips of real-life social interactions in dog, child and macaque dyads, and then asked them to predict the outcome of the observed interactions (i.e. aggressive, neutral or playful). Participants were selected according to their previous species-specific experience with dogs, children and non-human primates. Our results showed a limited effect of experience on the ability to predict the outcome of social interactions, mainly restricted to macaques. Moreover, we found no support for the co-domestication hypothesis, in that participants were not especially skilled at predicting dog behaviour. Finally, aggressive outcomes in dogs were predicted significantly worse than playful or neutral ones. Based on our findings, we suggest possible lines for future research, such as the inclusion of other primate species and the assessment of cultural influences on the ability to predict behaviour across species.
Affiliation(s)
- Sasha Donnier
- Fundació UdG: Innovació I Formació, Universitat de Girona, Carrer Pic de Peguera 11, 17003, Girona, Spain
- Gyula Kovács
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743 Jena, Germany
- Linda S Oña
- Max Planck Research Group 'Naturalistic Social Cognition', Max Planck Institute for Human Development, Berlin, Germany
- Juliane Bräuer
- Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743 Jena, Germany
- Max Planck Institute for the Science of Human History, Jena, Germany
- Federica Amici
- Department of Human Behavior, Ecology and Culture, Research Group "Primate Behavioural Ecology", Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Institute of Biology, Behavioral Ecology Research Group, University of Leipzig Faculty of Life Science, Leipzig, Germany
15
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. PMID: 33189782; DOI: 10.1016/j.pneurobio.2020.101948.
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
16
Kamiloğlu RG, Slocombe KE, Haun DBM, Sauter DA. Human listeners' perception of behavioural context and core affect dimensions in chimpanzee vocalizations. Proc Biol Sci 2020; 287:20201148. PMID: 32546102; PMCID: PMC7329049; DOI: 10.1098/rspb.2020.1148.
Abstract
Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations to most contexts, in addition to inferring arousal and valence. Judgments were more accurate for negative than for positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relied on noisiness in making context judgements, and used pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
Affiliation(s)
- Roza G Kamiloğlu
- Department of Psychology, University of Amsterdam, REC G, Nieuwe Achtergracht 129B, 1001 NK, Amsterdam, The Netherlands
- Daniel B M Haun
- Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Disa A Sauter
- Department of Psychology, University of Amsterdam, REC G, Nieuwe Achtergracht 129B, 1001 NK, Amsterdam, The Netherlands
17
Amici F, Waterman J, Kellermann CM, Karimullah K, Bräuer J. The ability to recognize dog emotions depends on the cultural milieu in which we grow up. Sci Rep 2019; 9:16414. PMID: 31712680; PMCID: PMC6848084; DOI: 10.1038/s41598-019-52938-4.
Abstract
Inter-specific emotion recognition is especially adaptive when species spend a long time in close association, like dogs and humans. Here, we comprehensively studied the human ability to recognize facial expressions associated with dog emotions (hereafter, emotions). Participants were presented with pictures of dogs, humans and chimpanzees, showing angry, fearful, happy, neutral and sad emotions, and had to assess which emotion was shown, and the context in which the picture had been taken. Participants were recruited among children and adults with different levels of general experience with dogs, resulting from different personal (i.e. dog ownership) and cultural experiences (i.e. growing up or being exposed to a cultural milieu in which dogs are highly valued and integrated in human lives). Our results showed that some dog emotions such as anger and happiness are recognized from early on, independently of experience. However, the ability to recognize dog emotions is mainly acquired through experience. In adults, the probability of recognizing dog emotions was higher for participants grown up in a cultural milieu with a positive attitude toward dogs, which may result in different passive exposure, interest or inclination toward this species.
Affiliation(s)
- Federica Amici
- Research Group "Primate Behavioural Ecology", Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Behavioral Ecology Research Group, Institute of Biology, Faculty of Life Science, University of Leipzig, Leipzig, Germany
- Leipzig Research Center for Early Child Development, University of Leipzig, Leipzig, Germany
- James Waterman
- School of Psychology, University of Lincoln, Lincoln, UK
- Christina Maria Kellermann
- Leipzig Research Center for Early Child Development, University of Leipzig, Leipzig, Germany
- Faculty of Social and Behavioral Sciences, Friedrich Schiller University, Jena, Germany
- Karimullah Karimullah
- Behavioral Ecology Research Group, Institute of Biology, Faculty of Life Science, University of Leipzig, Leipzig, Germany
- Juliane Bräuer
- Department of Linguistic and Cultural Evolution, Max Planck Institute for the Science of Human History, Jena, Germany
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
18
Sorcinelli A, Vouloumanos A. Is Visual Perceptual Narrowing an Obligatory Developmental Process? Front Psychol 2018; 9:2326. PMID: 30532728; PMCID: PMC6265369; DOI: 10.3389/fpsyg.2018.02326.
Abstract
Perceptual narrowing, a diminished perceptual sensitivity to infrequently encountered stimuli, sometimes accompanied by an increased sensitivity to frequently encountered stimuli, has been observed in unimodal speech and visual perception, as well as in multimodal perception, leading to the suggestion that it is a fundamental feature of perceptual development. However, recent findings in unimodal face perception suggest that perceptual abilities remain flexible in development. Similarly, in multimodal perception, new paradigms examining temporal dynamics, rather than standard overall looking time, also suggest that perceptual narrowing might not be obligatory. Across two experiments, we assess perceptual narrowing in unimodal visual perception using remote eye-tracking. We compare adults' looking at human faces and at monkey faces of different species, and present analyses of both standard overall looking time and temporal dynamics. As expected, adults discriminated between different human faces, but, unlike in previous studies, they also discriminated between different monkey faces. Temporal dynamics revealed that adults more readily discriminated human than monkey faces, suggesting a processing advantage for conspecifics over other animals. Adults' success in discriminating between faces of two unfamiliar monkey species calls into question whether perceptual narrowing is an obligatory developmental process. Humans undoubtedly become less able to perceive distinctions between infrequently encountered stimuli than between frequently encountered stimuli; however, consistent with recent findings, this narrowing should be conceptualized as a refinement, not a loss, of abilities. Perceptual abilities for infrequently encountered stimuli may remain detectable, though weaker than perception of frequently encountered stimuli. Consistent with several other accounts, we suggest that perceptual development must be more flexible than a perceptual narrowing account posits.
19
Filippi P, Congdon JV, Hoang J, Bowling DL, Reber SA, Pašukonis A, Hoeschele M, Ocklenburg S, de Boer B, Sturdy CB, Newen A, Güntürkün O. Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals. Proc Biol Sci 2017; 284:20170990. PMID: 28747478; DOI: 10.1098/rspb.2017.0990.
Abstract
Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes-Amphibia, Reptilia (non-aves and aves) and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
Affiliation(s)
- Piera Filippi
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
- Center for Mind, Brain and Cognitive Evolution, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Brain and Language Research Institute, Aix-Marseille University, Avenue Pasteur 5, 13604 Aix-en-Provence, France
- Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD, Nijmegen, The Netherlands
- Jenna V Congdon
- Department of Psychology, University of Alberta, P217 Biological Sciences Building, Edmonton, Alberta, Canada T6G 2E9
- John Hoang
- Department of Psychology, University of Alberta, P217 Biological Sciences Building, Edmonton, Alberta, Canada T6G 2E9
- Daniel L Bowling
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Stephan A Reber
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Andrius Pašukonis
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria
- Sebastian Ocklenburg
- Department of Biopsychology, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Bart de Boer
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
- Christopher B Sturdy
- Department of Psychology, University of Alberta, P217 Biological Sciences Building, Edmonton, Alberta, Canada T6G 2E9
- Neuroscience and Mental Health Institute, University of Alberta, 4-120 Katz Group Center, Edmonton, Alberta, Canada T6G 2E1
- Albert Newen
- Center for Mind, Brain and Cognitive Evolution, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Department of Philosophy II, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Onur Güntürkün
- Center for Mind, Brain and Cognitive Evolution, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
- Department of Biopsychology, Ruhr-Universität Bochum, Universitätsstr. 150, 44801 Bochum, Germany
20
Ben-Aderet T, Gallego-Abenza M, Reby D, Mathevon N. Dog-directed speech: why do we use it and do dogs pay attention to it? Proc Biol Sci 2017; 284:20162429. PMID: 28077769; DOI: 10.1098/rspb.2016.2429.
Abstract
Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult dogs and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reactions to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages, and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for pitch, which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech, and that pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners.
Affiliation(s)
- Tobey Ben-Aderet
- Department of Psychology, City University of New York, Hunter College, New York, NY, USA
- Mario Gallego-Abenza
- Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR9197, University of Lyon/Saint-Etienne, Saint-Etienne, France
- David Reby
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK
- Nicolas Mathevon
- Department of Psychology, City University of New York, Hunter College, New York, NY, USA
- Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR9197, University of Lyon/Saint-Etienne, Saint-Etienne, France
21
Scheumann M, Hasting AS, Zimmermann E, Kotz SA. Human Novelty Response to Emotional Animal Vocalizations: Effects of Phylogeny and Familiarity. Front Behav Neurosci 2017; 11:204. PMID: 29114210; PMCID: PMC5660701; DOI: 10.3389/fnbeh.2017.00204.
Abstract
Darwin (1872) postulated that emotional expressions contain universals that are retained across species. We recently showed that human rating responses were strongly affected by a listener's familiarity with vocalization types, whereas evidence for universal cross-taxa emotion recognition was limited. To disentangle the impact of evolutionarily retained mechanisms (phylogeny) and experience-driven cognitive processes (familiarity), we compared the temporal unfolding of event-related potentials (ERPs) in response to agonistic and affiliative vocalizations expressed by humans and three animal species. Using an auditory oddball novelty paradigm, ERPs were recorded in response to task-irrelevant novel sounds, comprising vocalizations varying in their degree of phylogenetic relationship and familiarity to humans. Vocalizations were recorded in affiliative and agonistic contexts. Offline, participants rated the vocalizations for valence, arousal, and familiarity. Correlation analyses revealed a significant correlation between a posteriorly distributed early negativity and arousal ratings. More specifically, a contextual category effect of this negativity was observed for human infant and chimpanzee vocalizations but was absent for the other species' vocalizations. Further, a significant correlation between the later, more posteriorly distributed P3a and P3b responses and familiarity ratings indicates a link between familiarity and attentional processing. A contextual category effect of the P3b was observed for the less familiar chimpanzee and tree shrew vocalizations. Taken together, these findings suggest that early negative ERP responses to agonistic and affiliative vocalizations may be influenced by evolutionarily retained mechanisms, whereas the later orienting of attention (positive ERPs) may mainly be modulated by prior experience.
Affiliation(s)
- Marina Scheumann
- Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Anna S. Hasting
- Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Day Clinic for Cognitive Neurology, University Hospital Leipzig, Leipzig, Germany
- Elke Zimmermann
- Institute of Zoology, University of Veterinary Medicine Hannover, Hannover, Germany
- Sonja A. Kotz
- Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
22
Anikin A, Bååth R, Persson T. Human Non-linguistic Vocal Repertoire: Call Types and Their Meaning. J Nonverbal Behav 2017; 42:53-80. PMID: 29497221; PMCID: PMC5816134; DOI: 10.1007/s10919-017-0267-y.
Abstract
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller’s emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former’s greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Department of Philosophy, Lund University, Box 192, 221 00 Lund, Sweden
- Rasmus Bååth
- Division of Cognitive Science, Department of Philosophy, Lund University, Box 192, 221 00 Lund, Sweden
- Tomas Persson
- Division of Cognitive Science, Department of Philosophy, Lund University, Box 192, 221 00 Lund, Sweden
23
Filippi P, Gogoleva SS, Volodina EV, Volodin IA, de Boer B. Humans identify negative (but not positive) arousal in silver fox vocalizations: implications for the adaptive value of interspecific eavesdropping. Curr Zool 2017; 63:445-456. PMID: 29492004; PMCID: PMC5804197; DOI: 10.1093/cz/zox035.
Abstract
The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans’ ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox—Tame, Aggressive, and Unselected—in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans’ ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans’ absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.
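The spectral predictors named in this abstract (spectral center of gravity and F0) are well-defined acoustic measures. The sketch below illustrates both on a synthetic tone; the signal, sampling rate, and the naive autocorrelation pitch tracker are assumptions chosen for demonstration, not the study's actual analysis pipeline.

```python
import numpy as np

def spectral_center_of_gravity(signal, sr):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def estimate_f0(signal, sr, fmin=50.0, fmax=2000.0):
    """Naive autocorrelation-based F0 estimate (Hz): pick the lag with the
    strongest self-similarity inside the plausible pitch range."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthetic 440 Hz tone standing in for a recorded call
sr = 8000
t = np.arange(sr) / sr  # 1 s of audio
tone = np.sin(2 * np.pi * 440 * t)
print(round(spectral_center_of_gravity(tone, sr)))  # ~440
print(round(estimate_f0(tone, sr)))                 # ~444 (integer-lag quantisation)
```

For a pure tone both measures collapse onto the tone frequency; for real calls the center of gravity rises with noisy, high-frequency energy while F0 tracks the voiced pitch, which is why the two can carry independent information about arousal.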
Affiliation(s)
- Piera Filippi, Artificial Intelligence Laboratory, Department of Computer Science, Faculty of Science, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium; Brain and Language Research Institute, Aix-Marseille University, Avenue Pasteur 5, 13604 Aix-en-Provence, France; Max Planck Institute for Psycholinguistics, Department of Language and Cognition, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Svetlana S Gogoleva, Department of Vertebrate Zoology, Faculty of Biology, Lomonosov Moscow State University, Vorobievy Gory 1/12, 119991 Moscow, Russia
- Elena V Volodina, Scientific Research Department, Moscow Zoo, B. Gruzinskaya 1, 123242 Moscow, Russia
- Ilya A Volodin, Department of Vertebrate Zoology, Faculty of Biology, Lomonosov Moscow State University, Vorobievy Gory 1/12, 119991 Moscow, Russia; Scientific Research Department, Moscow Zoo, B. Gruzinskaya 1, 123242 Moscow, Russia
- Bart de Boer, Artificial Intelligence Laboratory, Department of Computer Science, Faculty of Science, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
24
Faragó T, Takács N, Miklósi Á, Pongrácz P. Dog growls express various contextual and affective content for human listeners. R Soc Open Sci 2017; 4:170134. PMID: 28573021; PMCID: PMC5451822; DOI: 10.1098/rsos.170134.
Abstract
Vocal expressions of emotions follow simple rules to encode the inner state of the caller into acoustic parameters, not just within species, but also in cross-species communication. Humans use these structural rules to attribute emotions to dog vocalizations, especially to barks, which match with their contexts. In contrast, humans were found to be unable to differentiate between playful and threatening growls, probably because single growls' aggression level was assessed based on acoustic size cues. To resolve this contradiction, we played back natural growl bouts from three social contexts (food guarding, threatening and playing) to humans, who had to rate the emotional load and guess the context of the playbacks. Listeners attributed emotions to growls according to their social contexts. Within threatening and playful contexts, bouts with shorter, slower pulsing growls and showing smaller apparent body size were rated to be less aggressive and fearful, but more playful and happy. Participants associated the correct contexts with the growls above chance. Moreover, women and participants experienced with dogs scored higher in this task. Our results indicate that dogs may communicate honestly their size and inner state in a serious contest situation, while manipulatively in more uncertain defensive and playful contexts.
Affiliation(s)
- T. Faragó, Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
- N. Takács, Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
- Á. Miklósi, Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary; MTA-ELTE Comparative Ethology Research Group, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
- P. Pongrácz, Department of Ethology, Biology Institute, Eötvös Loránd University, Pázmány Péter stny. 1/C, Budapest, H-1117, Hungary
25
Prosody Predicts Contest Outcome in Non-Verbal Dialogs. PLoS One 2016; 11:e0166953. PMID: 27907039; PMCID: PMC5132166; DOI: 10.1371/journal.pone.0166953.
Abstract
Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.
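The key measurement in this abstract is the accentuation of the first whistle in a reply relative to the following whistles. One plausible proxy for such an accent is relative intensity; the sketch below computes a first-whistle accent ratio on a toy bout. The function name, the RMS-based accent measure, and the synthetic signals are illustrative assumptions, not the paper's actual prosodic analysis.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of one whistle."""
    return float(np.sqrt(np.mean(np.square(x))))

def first_whistle_accent(whistles):
    """Ratio of the first whistle's RMS level to the mean RMS of the rest.
    Values > 1 indicate the reply opens on an accentuated whistle."""
    levels = [rms(w) for w in whistles]
    return levels[0] / np.mean(levels[1:])

# Toy bout: the opening whistle is louder than the two that follow
t = np.linspace(0, 0.2, 1600)
bout = [0.9 * np.sin(2 * np.pi * 1000 * t),
        0.5 * np.sin(2 * np.pi * 1000 * t),
        0.5 * np.sin(2 * np.pi * 1000 * t)]
print(first_whistle_accent(bout) > 1.0)  # True: first whistle accentuated
```

A per-bout ratio like this would let winners and losers be compared on a single scalar, which matches the abstract's framing of accentuation as a subtle feature rather than sheer whistle quantity.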
26
Konerding WS, Zimmermann E, Bleich E, Hedrich HJ, Scheumann M. Female cats, but not males, adjust responsiveness to arousal in the voice of kittens. BMC Evol Biol 2016; 16:157. PMID: 27514377; PMCID: PMC4982004; DOI: 10.1186/s12862-016-0718-9.
Abstract
BACKGROUND The infant cry is the most important communicative tool to elicit adaptive parental behaviour. Sex-specific adaptation, linked to parental investment, may have evolutionarily shaped the responsiveness to changes in the voice of infant cries. The emotional content of infant cries may trigger distinctive responsiveness either based on their general arousing properties, being part of a general affect encoding rule, or based on affective perception, linked to parental investment, differing between species. To address this question, we performed playback experiments using infant isolation calls in a species without paternal care, the domestic cat. We used kitten calls recorded in isolation contexts inducing either Low arousal (i.e., isolation only) or High arousal (i.e., additional handling), leading to respective differences in escape response of the kittens. We predicted that only females respond differently to playbacks of Low versus High arousal kitten isolation calls, based on sex differences in parental investment. RESULTS Findings showed sex-specific responsiveness of adult cats listening to kitten isolation calls of different arousal conditions, with only females responding faster towards calls of the High versus the Low arousal condition. Breeding experience of females did not affect the result. Furthermore, female responsiveness correlated with acoustic parameters related to spectral characteristics of the fundamental frequency (F0): females responded faster to kitten calls with lower F0 at call onset, lower minimum F0 and a steeper slope of the F0. CONCLUSIONS Our study revealed sex-specific differences in the responsiveness to kitten isolation calls of different arousal conditions, independent of female breeding experience. The findings indicated that features of F0 are important to convey the arousal state of an infant. Taken together, the results suggest that differences in parental investment evolutionarily shaped responsiveness (auditory sensitivity/motivation) to infant calls in a sex-specific manner in the domestic cat.
Affiliation(s)
- Wiebke S Konerding, Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Stadtfelddamm 34, 30625 Hannover, Germany; Institute of Zoology, University of Veterinary Medicine Hannover, Bünteweg 17, 30559 Hannover, Germany
- Elke Zimmermann, Institute of Zoology, University of Veterinary Medicine Hannover, Bünteweg 17, 30559 Hannover, Germany
- Eva Bleich, Institute for Laboratory Animal Science and Central Animal Facility, Hannover Medical School, Carl-Neuberg-Straße 1, 30625 Hannover, Germany
- Hans-Jürgen Hedrich, Institute for Laboratory Animal Science and Central Animal Facility, Hannover Medical School, Carl-Neuberg-Straße 1, 30625 Hannover, Germany
- Marina Scheumann, Institute of Zoology, University of Veterinary Medicine Hannover, Bünteweg 17, 30559 Hannover, Germany
27
De Dreu CKW, Kret ME, Sauter DA. Assessing Emotional Vocalizations From Cultural In-Group and Out-Group Depends on Oxytocin. Soc Psychol Personal Sci 2016. DOI: 10.1177/1948550616657596.
Abstract
Group-living animals, humans included, produce vocalizations like screams, growls, laughs, and victory calls. Accurately decoding such emotional vocalizations serves both individual and group functioning, suggesting that (i) vocalizations from in-group members may be privileged, in terms of speed and accuracy of processing, and (ii) such processing may depend on evolutionary ancient neural circuitries that sustain and enable cooperation with and protection of the in-group against outside threat. Here, we examined this possibility and focused on the neuropeptide oxytocin. Dutch participants self-administered oxytocin or placebo (double-blind, placebo-controlled study design) and responded to emotional vocalizations produced by cultural in-group members (Native Dutch) and cultural out-group members (Namibian Himba). In-group vocalizations were recognized faster and more accurately than out-group vocalizations, and oxytocin enhanced accurate decoding of specific vocalizations from one’s cultural out-group—triumph and anger. We discuss possible explanations and suggest avenues for new research.
Affiliation(s)
- Carsten K. W. De Dreu, Institute of Psychology, Leiden University, Leiden, the Netherlands; Center for Experimental Economics and Political Decision Making (CREED), University of Amsterdam, the Netherlands
- Mariska E. Kret, Institute of Psychology, Leiden University, Leiden, the Netherlands
- Disa A. Sauter, Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
28

29
Snowdon CT, Zimmermann E, Altenmüller E. Music evolution and neuroscience. Prog Brain Res 2015; 217:17-34. DOI: 10.1016/bs.pbr.2014.11.019.