1
Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024;38:277-295. PMID: 37997898; PMCID: PMC11057848; DOI: 10.1080/02699931.2023.2285854.
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
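The two accuracy measures named here (signal detection analysis for the yes/no matching task, unbiased hit rates for the forced-choice task) are standard and easy to illustrate. The minimal Python sketch below uses hypothetical trial counts rather than the study's data; the log-linear correction and Wagner's (1993) unbiased hit rate are conventional formulations, and the authors' exact analysis may differ.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity for a yes/no matching task.
    A log-linear correction avoids infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def unbiased_hit_rates(confusion):
    """Wagner's (1993) unbiased hit rate per category, computed from a
    stimulus-by-response confusion matrix (rows = true context)."""
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)
    return correct ** 2 / (confusion.sum(axis=1) * confusion.sum(axis=0))

# Toy example: 3 contexts, 60 forced-choice trials per context (hypothetical numbers)
conf_mat = [[40, 12, 8],
            [10, 45, 5],
            [6, 9, 45]]
print(d_prime(hits=40, misses=20, false_alarms=16, correct_rejections=104))
print(unbiased_hit_rates(conf_mat))  # compare each value to chance = row_total * col_total / N**2
```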
Affiliation(s)
- Roza G. Kamiloğlu
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
2
Kreiman J. Information conveyed by voice quality. J Acoust Soc Am 2024;155:1264-1271. PMID: 38345424; DOI: 10.1121/10.0024609.
Abstract
The problem of characterizing voice quality has long caused debate and frustration. The richness of the available descriptive vocabulary is overwhelming, but the density and complexity of the information voices convey lead some to conclude that language can never adequately specify what we hear. Others argue that terminology lacks an empirical basis, so that language-based scales are inadequate a priori. Efforts to provide meaningful instrumental characterizations have also had limited success. Such measures may capture sound patterns but cannot at present explain what characteristics, intentions, or identity listeners attribute to the speaker based on those patterns. However, some terms continually reappear across studies. These terms align with acoustic dimensions accounting for variance across speakers and languages and correlate with size and arousal across species. This suggests that labels for quality rest on a bedrock of biology: We have evolved to perceive voices in terms of size/arousal, and these factors structure both voice acoustics and descriptive language. Such linkages could help integrate studies of signals and their meaning, producing a truly interdisciplinary approach to the study of voice.
Affiliation(s)
- Jody Kreiman
- Departments of Head and Neck Surgery and Linguistics, University of California, Los Angeles, Los Angeles, California 90095-1794, USA
3
McGrath N, Phillips CJC, Burman OHP, Dwyer CM, Henning J. Humans can identify reward-related call types of chickens. R Soc Open Sci 2024;11:231284. PMID: 38179075; PMCID: PMC10762433; DOI: 10.1098/rsos.231284.
Abstract
Humans can decode emotional information from vocalizations of animals. However, little is known about whether these interpretations relate to the ability of humans to identify if calls were made in a rewarded or non-rewarded context. We tested whether humans could identify calls made by chickens (Gallus gallus) in these contexts, and whether demographic factors or experience with chickens affected their correct identification of context and their ratings of the chickens' perceived positive and negative emotions (valence) and excitement (arousal). Participants (n = 194) listened to eight calls made when chickens were anticipating a reward, and eight calls made in non-rewarded contexts, and indicated whether the vocalizing chicken was experiencing pleasure/displeasure, and high/low excitement, using visual analogue scales. Sixty-nine per cent of participants correctly assigned reward and non-reward calls to their respective categories. Participants performed better at categorizing reward-related calls, with 71% of reward calls classified correctly, compared with 67% of non-reward calls. Older people were less accurate in context identification. Older people's ratings of the excitement or arousal levels of reward-related calls were higher than younger people's ratings, while older people rated non-reward calls as representing higher positive emotions or pleasure (higher valence) compared to ratings made by younger people. Our study strengthens evidence that humans perceive emotions across different taxa, and that specific acoustic cues may embody a homologous signalling system among vertebrates. Importantly, humans could identify reward-related calls, and this ability could enhance the management of farmed chickens to improve their welfare.
Affiliation(s)
- Nicky McGrath
- School of Veterinary Sciences, University of Queensland, Gatton, Queensland 4343, Australia
- Clive J. C. Phillips
- Institute of Veterinary Medicine and Animal Science, Estonian University of Life Sciences, Tartu, Estonia
- Curtin University Sustainable Policy (CUSP) Institute, Kent Street, Bentley, Western Australia 6102, Australia
- Oliver H. P. Burman
- School of Life Sciences, University of Lincoln, Brayford Pool, Lincoln, Lincolnshire LN6 7TS, UK
- Cathy M. Dwyer
- Scotland's Rural College (SRUC), Peter Wilson Building, Kings Buildings, West Mains Road, Edinburgh EH9 3JG, UK
- Joerg Henning
- School of Veterinary Sciences, University of Queensland, Gatton, Queensland 4343, Australia
4
Bowling DL. Biological principles for music and mental health. Transl Psychiatry 2023;13:374. PMID: 38049408; PMCID: PMC10695969; DOI: 10.1038/s41398-023-02671-4.
Abstract
Efforts to integrate music into healthcare systems and wellness practices are accelerating but the biological foundations supporting these initiatives remain underappreciated. As a result, music-based interventions are often sidelined in medicine. Here, I bring together advances in music research from neuroscience, psychology, and psychiatry to bridge music's specific foundations in human biology with its specific therapeutic applications. The framework I propose organizes the neurophysiological effects of music around four core elements of human musicality: tonality, rhythm, reward, and sociality. For each, I review key concepts, biological bases, and evidence of clinical benefits. Within this framework, I outline a strategy to increase music's impact on health based on standardizing treatments and their alignment with individual differences in responsivity to these musical elements. I propose that an integrated biological understanding of human musicality-describing each element's functional origins, development, phylogeny, and neural bases-is critical to advancing rational applications of music in mental health and wellness.
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University, School of Medicine, Stanford, CA, USA.
- Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, School of Humanities and Sciences, Stanford, CA, USA.
5
Ceravolo L, Debracque C, Pool E, Gruber T, Grandjean D. Frontal mechanisms underlying primate calls recognition by humans. Cereb Cortex Commun 2023;4:tgad019. PMID: 38025828; PMCID: PMC10661312; DOI: 10.1093/texcom/tgad019.
Abstract
Introduction: The ability to process verbal language seems unique to humans and relies not only on semantics but also on other forms of communication, such as affective vocalizations that we share with other primate species, particularly great apes (Hominidae). Methods: To better understand these processes at the behavioral and brain level, we asked human participants to categorize vocalizations of four primate species, including human, great apes (chimpanzee and bonobo), and monkey (rhesus macaque), during MRI acquisition. Results: Classification was above chance level for all species but bonobo vocalizations. Imaging analyses were computed using a participant-specific, trial-by-trial fitted probability categorization value in a model-based style of data analysis. Model-based analyses revealed the involvement of the bilateral orbitofrontal cortex and inferior frontal gyrus pars triangularis (IFGtri), respectively correlating and anti-correlating with the fitted probability of accurate species classification. Further conjunction analyses revealed enhanced activity in a sub-area of the left IFGtri specifically for the accurate classification of chimpanzee calls compared to human voices. Discussion: Our data, which are controlled for acoustic variability between species, therefore reveal distinct frontal mechanisms that shed light on how the human brain evolved to process vocal signals.
Affiliation(s)
- Leonardo Ceravolo
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40, CH-1205 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Coralie Debracque
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40, CH-1205 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Eva Pool
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- E3 Lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40, CH-1205 Geneva, Switzerland
- Thibaud Gruber
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40, CH-1205 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- eccePAN lab, Department of Psychology and Educational Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotions and Affective Dynamics lab, Department of Psychology and Educational Sciences, University of Geneva, Unimail building, Boulevard Pont-d’Arve 40, CH-1205 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech building, Chemin des Mines 9, CH-1202 Geneva, Switzerland
6
Thévenet J, Papet L, Coureaud G, Boyer N, Levréro F, Grimault N, Mathevon N. Crocodile perception of distress in hominid baby cries. Proc Biol Sci 2023;290:20230201. PMID: 37554035; PMCID: PMC10410202; DOI: 10.1098/rspb.2023.0201.
Abstract
It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species.
Affiliation(s)
- Julie Thévenet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Léo Papet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Gérard Coureaud
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Boyer
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Grimault
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Institut universitaire de France, Paris, Île-de-France, France
7
Debracque C, Slocombe KE, Clay Z, Grandjean D, Gruber T. Humans recognize affective cues in primate vocalizations: acoustic and phylogenetic perspectives. Sci Rep 2023;13:10900. PMID: 37407601; DOI: 10.1038/s41598-023-37558-3.
Abstract
Humans are adept at extracting affective information from vocalizations of humans and other animals. However, the extent to which human recognition of vocal affective cues of other species is due to cross-taxa similarities in acoustic parameters or the phylogenetic closeness between species is currently unclear. To address this, we first analyzed acoustic variation in 96 affective vocalizations, taken from agonistic and affiliative contexts, of humans and three other primates: rhesus macaques (Macaca mulatta), chimpanzees and bonobos (Pan troglodytes and Pan paniscus). Acoustic analyses revealed that agonistic chimpanzee and bonobo vocalizations were similarly distant from agonistic human voices, but chimpanzee affiliative vocalizations were significantly closer to human affiliative vocalizations than those of bonobos were, indicating a potential derived vocal evolution in the bonobo lineage. Second, we asked 68 human participants to categorize and also discriminate vocalizations based on their presumed affective content. Results showed that participants reliably categorized human and chimpanzee vocalizations according to affective content, but not bonobo threat vocalizations nor any macaque vocalizations. Participants discriminated the calls of all species above chance level except for threat calls by bonobos and macaques. Our results highlight the importance of both phylogenetic and acoustic parameter level explanations in cross-species affective perception, drawing a more complex picture of the origin of vocal emotions.
Affiliation(s)
- C Debracque
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland.
- K E Slocombe
- Department of Psychology, University of York, York, UK
- Z Clay
- Department of Psychology, Durham University, Durham, UK
- D Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland
- T Gruber
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences (CISA), Campus Biotech, University of Geneva, Chemin des Mines 9, 1202, Geneva, Switzerland
8
Abu Salih M, Abargil M, Badarneh S, Klein Selle N, Irani M, Atzil S. Evidence for cultural differences in affect during mother-infant interactions. Sci Rep 2023;13:4831. PMID: 36964204; PMCID: PMC10039016; DOI: 10.1038/s41598-023-31907-y.
Abstract
Maternal care is considered a universal and even cross-species set of typical behaviors, which are necessary to determine the social development of children. In humans, most research on mother-infant bonding is based on Western cultures and conducted in European and American countries. Thus, it is still unknown which aspects of mother-infant behaviors are universal and which vary with culture. Here we test whether typical mother-infant behaviors of affect-communication and affect-regulation are equally represented during spontaneous interaction in Palestinian-Arab and Jewish cultures. In total, 30 Palestinian-Arab and 43 Jewish mother-infant dyads were recruited and videotaped. Using the AffectRegulation Coding System (ARCS), we behaviorally analyzed the second-by-second display of valence and arousal in each participant and calculated the dynamic patterns of affect co-regulation. The results show that Palestinian-Arab infants express more positive valence than Jewish infants and that Palestinian-Arab mothers express higher arousal compared to Jewish mothers. Moreover, we found culturally-distinct strategies to regulate the infant: increased arousal in Palestinian-Arab dyads and increased mutual affective match in Jewish dyads. Such cross-cultural differences in affect indicate that basic features of emotion that are often considered universal are differentially represented in different cultures. Affect communication and regulation patterns can be transmitted across generations in early-life socialization with caregivers.
Affiliation(s)
- Miada Abu Salih
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
- Maayan Abargil
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
- Saja Badarneh
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
- Merav Irani
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
- Shir Atzil
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel.
9
Ntalampiras S, Ludovico LA, Presti G, Vena MV, Fantini D, Ogel T, Celozzi S, Battini M, Mattiello S. An integrated system for the acoustic monitoring of goat farms. Ecol Inform 2023. DOI: 10.1016/j.ecoinf.2023.102043.
10
Fukui H, Toyoshima K. Testosterone, oxytocin and co-operation: A hypothesis for the origin and function of music. Front Psychol 2023;14:1055827. PMID: 36860786; PMCID: PMC9968751; DOI: 10.3389/fpsyg.2023.1055827.
Abstract
Since the time of Darwin, theories have been proposed on the origin and functions of music; however, the subject remains enigmatic. The literature shows that music is closely related to important human behaviours and abilities, namely, cognition, emotion, reward and sociality (co-operation, entrainment, empathy and altruism). Notably, studies have deduced that these behaviours are closely related to testosterone (T) and oxytocin (OXT). How music came to be associated with these important human behaviours and neurochemicals remains unclear, as does much of our understanding of reproductive and social behaviours. In this paper, we describe the endocrinological functions of human social and musical behaviour and demonstrate its relationship to T and OXT. We then hypothesise that music is associated with behavioural adaptations and emerged as humans socialised to ensure survival. Moreover, the proximal factor in the emergence of music is behavioural control (social tolerance) through the regulation of T and OXT, and the ultimate factor is group survival through co-operation. The "survival value" of music has rarely been approached from the perspective of musical behavioural endocrinology. This paper provides a new perspective on the origin and functions of music.
Affiliation(s)
- Hajime Fukui
- Nara University of Education, Nara, Japan
11
Bálint A, Szabó Á, Andics A, Gácsi M. Dog and human neural sensitivity to voicelikeness: A comparative fMRI study. Neuroimage 2023;265:119791. PMID: 36476565; DOI: 10.1016/j.neuroimage.2022.119791.
Abstract
Voice-sensitivity in the auditory cortex of a range of mammals has been proposed to be determined primarily by tuning to conspecific auditory stimuli, but recent human findings indicate a role for a more general tuning to voicelikeness. Vocal emotional valence, a central characteristic of vocalisations, has been linked to the same basic acoustic parameters across species. Comparative neuroimaging revealed that during voice perception, such acoustic parameters modulate emotional valence-sensitivity in auditory cortical regions in both family dogs and humans. To explore the role of voicelikeness in auditory emotional valence-sensitivity across species, here we constructed artificial emotional sounds in two sound categories: voice-like vs. sine-wave sounds, parametrically modulating two main acoustic parameters, f0 and call length. We hypothesised that if mammalian auditory systems are characterised by a general tuning to voicelikeness, voice-like sounds will be processed preferentially, and acoustic parameters for voice-like sounds will be processed differently than for sine-wave sounds - both in dogs and humans. We found cortical areas in both species that responded more strongly to voice-like than to sine-wave stimuli, while there were no regions responding more strongly to sine-wave sounds in either species. Additionally, we found that in bilateral primary and emotional valence-sensitive auditory regions of both species, the processing of voice-like and sine-wave sounds is modulated by f0 in opposite ways. These results reveal functional similarities between evolutionarily distant mammals for processing voicelikeness and its effect on processing basic acoustic cues of vocal emotions.
Affiliation(s)
- Anna Bálint
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary.
- Ádám Szabó
- Department of Neuroradiology at the Medical Imaging Centre of the Semmelweis University, H-1082 Budapest, Üllői út 78a, Hungary
- Attila Andics
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; ELTE NAP Canine Brain Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Márta Gácsi
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
12
Lessons learned in animal acoustic cognition through comparisons with humans. Anim Cogn 2023;26:97-116. PMID: 36574158; PMCID: PMC9877085; DOI: 10.1007/s10071-022-01735-0.
Abstract
Humans are an interesting subject of study in comparative cognition. While humans have a lot of anecdotal and subjective knowledge about their own minds and behaviors, researchers tend not to study humans the way they study other species. Instead, comparisons between humans and other animals tend to be based on either assumptions about human behavior and cognition, or very different testing methods. Here we emphasize the importance of using insider knowledge about humans to form interesting research questions about animal cognition while simultaneously stepping back and treating humans like just another species, as if one were an alien researcher. This perspective is extremely helpful to identify what aspects of cognitive processes may be interesting and relevant across the animal kingdom. Here we outline some examples of how this objective human-centric approach has helped us to advance knowledge in several areas of animal acoustic cognition (rhythm, harmonicity, and vocal units). We describe how this approach works, what kind of benefits we obtain, and how it can be applied to other areas of animal cognition. While an objective human-centric approach is not useful when studying traits that do not occur in humans (e.g., magnetic spatial navigation), it can be extremely helpful when studying traits that are relevant to humans (e.g., communication). Overall, we hope to entice more people working in animal cognition to use a similar approach to maximize the benefits of being part of the animal kingdom while maintaining a detached and scientific perspective on the human species.
13
Watson SK, Filippi P, Gasparri L, Falk N, Tamer N, Widmer P, Manser M, Glock H. Optionality in animal communication: a novel framework for examining the evolution of arbitrariness. Biol Rev Camb Philos Soc 2022;97:2057-2075. PMID: 35818133; PMCID: PMC9795909; DOI: 10.1111/brv.12882.
Abstract
A critical feature of language is that the form of words need not bear any perceptual similarity to their function - these relationships can be 'arbitrary'. The capacity to process these arbitrary form-function associations facilitates the enormous expressive power of language. However, the evolutionary roots of our capacity for arbitrariness, i.e. the extent to which related abilities may be shared with animals, are largely unexamined. We argue this is due to the challenges of applying such an intrinsically linguistic concept to animal communication, and address this by proposing a novel conceptual framework highlighting a key underpinning of linguistic arbitrariness, which is nevertheless applicable to non-human species. Specifically, we focus on the capacity to associate alternative functions with a signal, or alternative signals with a function, a feature we refer to as optionality. We apply this framework to a broad survey of findings from animal communication studies and identify five key dimensions of communicative optionality: signal production, signal adjustment, signal usage, signal combinatoriality and signal perception. We find that optionality is widespread in non-human animals across each of these dimensions, although only humans demonstrate it in all five. Finally, we discuss the relevance of optionality to behavioural and cognitive domains outside of communication. This investigation provides a powerful new conceptual framework for the cross-species investigation of the origins of arbitrariness, and promises to generate original insights into animal communication and language evolution more generally.
Affiliation(s)
- Stuart K. Watson
- Department of Comparative Language Science, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Piera Filippi
- Department of Comparative Language Science, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Philosophy, University of Zurich, Zurichbergstrasse 43, 8044 Zürich, Switzerland
- Luca Gasparri
- Department of Philosophy, University of Zurich, Zurichbergstrasse 43, 8044 Zürich, Switzerland
- Univ. Lille, CNRS, UMR 8163 – STL – Savoirs Textes Langage, F-59000 Lille, France
- Nikola Falk
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Nicole Tamer
- Department of Comparative Language Science, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Paul Widmer
- Department of Comparative Language Science, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Marta Manser
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Hans-Johann Glock
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
- Department of Philosophy, University of Zurich, Zurichbergstrasse 43, 8044 Zürich, Switzerland
14
Schwartz JW, Gouzoules H. Humans read emotional arousal in monkey vocalizations: evidence for evolutionary continuities in communication. PeerJ 2022;10:e14471. PMID: 36518288; PMCID: PMC9744152; DOI: 10.7717/peerj.14471.
Abstract
Humans and other mammalian species communicate emotions in ways that reflect evolutionary conservation and continuity, an observation first made by Darwin. One approach to testing this hypothesis has been to assess the capacity to perceive the emotional content of the vocalizations of other species. Using a binary forced choice task, we tested perception of the emotional intensity represented in coos and screams of infant and juvenile female rhesus macaques (Macaca mulatta) by 113 human listeners without, and 12 listeners with, experience (as researchers or care technicians) with this species. Each stimulus pair contained one high- and one low-arousal vocalization, as measured at the time of recording by stress hormone levels for coos and the degree of intensity of aggression for screams. For coos as well as screams, both inexperienced and experienced participants accurately identified the high-arousal vocalization at significantly above-chance rates. Experience was associated with significantly greater accuracy with scream stimuli but not coo stimuli, and with a tendency to indicate screams as reflecting greater emotional intensity than coos. Neither measures of empathy, human emotion recognition, nor attitudes toward animal welfare showed any relationship with responses. Participants were sensitive to the fundamental frequency, noisiness, and duration of vocalizations; some of these tendencies likely facilitated accurate perceptions, perhaps due to evolutionary homologies in the physiology of arousal and vocal production between humans and macaques. Overall, our findings support a view of evolutionary continuity in emotional vocal communication. We discuss hypotheses about how distinctive dimensions of human nonverbal communication, like the expansion of scream usage across a range of contexts, might influence perceptions of other species' vocalizations.
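Above-chance accuracy in a binary forced-choice task like this one is commonly tested against the 50% guessing level with an exact binomial test. The sketch below is a minimal illustration with hypothetical counts, not the authors' data or analysis.

```python
from scipy.stats import binomtest

# Hypothetical numbers: a listener judges 60 coo pairs and picks the truly
# high-arousal call in 41 of them; chance in this two-alternative task is 0.5.
result = binomtest(k=41, n=60, p=0.5, alternative="greater")
print(f"accuracy = {41 / 60:.2f}, one-sided p = {result.pvalue:.4f}")
```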
Affiliation(s)
- Jay W. Schwartz
- Department of Psychology, Emory University, Atlanta, GA, United States
- Psychological Sciences Department, Western Oregon University, Monmouth, OR, United States
- Harold Gouzoules
- Department of Psychology, Emory University, Atlanta, GA, United States
15
Greenall JS, Cornu L, Maigrot AL, de la Torre MP, Briefer EF. Age, empathy, familiarity, domestication and call features enhance human perception of animal emotion expressions. R Soc Open Sci 2022;9:221138. PMID: 36483756; PMCID: PMC9727503; DOI: 10.1098/rsos.221138.
Abstract
Vocalizations constitute an effective way to communicate both emotional arousal (bodily activation) and valence (negative/positive). There is strong evidence suggesting that the convergence of vocal expression of emotional arousal among animal species occurs, hence enabling cross-species perception of arousal, but it is not clear if the same is true for emotional valence. Here, we conducted a large online survey to test the ability of humans to perceive emotions in the contact calls of several wild and domestic ungulates produced in situations of known emotional arousal (previously validated using either heart rate or locomotion) and valence (validated based on the context of production and behavioural indicators of emotions). Participants (1024 respondents from 48 countries) were able to rate above chance levels the arousal level of vocalizations of three of the six ungulate species and the valence of four of them. Percentages of correct ratings did not differ a lot across species for arousal (49-59%), while they showed much more variation for valence (33-68%). Interestingly, several factors such as age, empathy, familiarity and specific features of the calls enhanced these scores. These findings suggest the existence of a shared emotional system across mammalian species, which is much more pronounced for arousal than valence.
Affiliation(s)
- Jasmin Sowerby Greenall
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Lydia Cornu
- Behavioural Ecology Group, Section for Ecology & Evolution, Department of Biology, University of Copenhagen, 2100 Copenhagen Ø, Denmark
- Wildlife Ecology & Conservation Group, Wageningen University and Research, 6708PB Wageningen, The Netherlands
- Anne-Laure Maigrot
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Swiss National Stud Farm, Agroscope, Les Longs-Prés, 1580 Avenches, Switzerland
- Elodie F. Briefer
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Behavioural Ecology Group, Section for Ecology & Evolution, Department of Biology, University of Copenhagen, 2100 Copenhagen Ø, Denmark
16
Acoustic regularities in infant-directed speech and song across cultures. Nat Hum Behav 2022;6:1545-1556. PMID: 35851843; PMCID: PMC10101735; DOI: 10.1038/s41562-022-01410-x.
Abstract
When interacting with infants, humans often alter their speech and song in ways thought to support communication. Theories of human child-rearing, informed by data on vocal signalling across species, predict that such alterations should appear globally. Here, we show acoustic differences between infant-directed and adult-directed vocalizations across cultures. We collected 1,615 recordings of infant- and adult-directed speech and song produced by 410 people in 21 urban, rural and small-scale societies. Infant-directedness was reliably classified from acoustic features only, with acoustic profiles of infant-directedness differing across language and music but in consistent fashions. We then studied listener sensitivity to these acoustic features. We played the recordings to 51,065 people from 187 countries, recruited via an English-language website, who guessed whether each vocalization was infant-directed. Their intuitions were more accurate than chance, predictable in part by common sets of acoustic features and robust to the effects of linguistic relatedness between vocalizer and listener. These findings inform hypotheses of the psychological functions and evolution of human communication.
17
Grollero D, Petrolini V, Viola M, Morese R, Lettieri G, Cecchetti L. The structure underlying core affect and perceived affective qualities of human vocal bursts. Cogn Emot 2022;37:1-17. PMID: 36300588; DOI: 10.1080/02699931.2022.2139661.
Abstract
Vocal bursts are non-linguistic, affectively-laden sounds with a crucial function in human communication, yet their affective structure is still debated. Studies have shown that ratings of valence and arousal follow a V-shaped relationship in several kinds of stimuli: high arousal ratings are more likely to accompany very negative or very positive valence. Across two studies, we asked participants to listen to 1,008 vocal bursts and judge both how they felt when listening to the sound (i.e. core affect condition), and how the speaker felt when producing it (i.e. perception of affective quality condition). We show that a V-shaped fit outperforms a linear model in explaining the valence-arousal relationship across conditions and studies, even after equating the number of exemplars across emotion categories. Also, although subjective experience can be significantly predicted using affective quality ratings, core affect scores are significantly lower in arousal, less extreme in valence, more variable between individuals, and less reproducible between studies. Nonetheless, the proportion of stimuli rated with opposite valence between conditions ranges from 11% (study 1) to 17% (study 2). Lastly, we demonstrate that ambiguity in valence (i.e. high between-participants variability) explains violations of the V-shape and relates to higher arousal.
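The linear-versus-V-shape comparison described above can be illustrated with a small simulation. The sketch below fits both models to toy valence-arousal ratings and compares them by AIC; the simulated data, the pivot-based model form and the fitting choices are assumptions made for illustration, not the paper's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Toy ratings: valence in [-1, 1]; arousal generated from a V-shaped
# ground truth plus noise (illustrative data only).
valence = rng.uniform(-1, 1, 500)
arousal = 0.2 + 0.6 * np.abs(valence - 0.1) + rng.normal(0, 0.1, 500)

def linear(v, a, b):
    return a + b * v

def v_shape(v, a, b, c):
    # Arousal increases with distance from a pivot valence c.
    return a + b * np.abs(v - c)

def aic(y, y_hat, k):
    # Gaussian AIC up to a constant: n * log(RSS / n) + 2k.
    n, rss = len(y), float(np.sum((y - y_hat) ** 2))
    return n * np.log(rss / n) + 2 * k

lin_params, _ = curve_fit(linear, valence, arousal)
v_params, _ = curve_fit(v_shape, valence, arousal, p0=[0.2, 0.5, 0.0])

print("AIC, linear fit :", aic(arousal, linear(valence, *lin_params), k=2))
print("AIC, V-shape fit:", aic(arousal, v_shape(valence, *v_params), k=3))
```

On data with a genuinely V-shaped valence-arousal relationship, the V-shape model yields the lower (better) AIC despite its extra parameter.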
Affiliation(s)
- Demetrio Grollero
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Valentina Petrolini
- Lindy Lab - Language in Neurodiversity, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain
- Marco Viola
- Department of Philosophy and Education, University of Turin, Turin, Italy
- Rosalba Morese
- Faculty of Communication, Culture and Society, Università della Svizzera Italiana, Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Giada Lettieri
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Crossmodal Perception and Plasticity Laboratory, IPSY, University of Louvain, Louvain-la-Neuve, Belgium
- Luca Cecchetti
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
18
Verga L, Sroka MGU, Varola M, Villanueva S, Ravignani A. Spontaneous rhythm discrimination in a mammalian vocal learner. Biol Lett 2022;18:20220316. PMID: 36285461; PMCID: PMC9597408; DOI: 10.1098/rsbl.2022.0316.
Abstract
Rhythm and vocal production learning are building blocks of human music and speech. Vocal learning has been hypothesized as a prerequisite for rhythmic capacities. Yet, no mammalian vocal learner but humans has shown the capacity to flexibly and spontaneously discriminate rhythmic patterns. Here we tested untrained rhythm discrimination in a mammalian vocal learning species, the harbour seal (Phoca vitulina). Twenty wild-born seals were exposed to music-like playbacks of conspecific call sequences varying in basic rhythmic properties. These properties were call length, sequence regularity, and overall tempo. All three features significantly influenced seals' reaction (number of looks and their duration), demonstrating spontaneous rhythm discrimination in a vocal learning mammal. This finding supports the rhythm–vocal learning hypothesis and showcases pinnipeds as promising models for comparative research on rhythmic phylogenies.
Affiliation(s)
- Laura Verga
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Marlene G. U. Sroka
- Department of Behavioural Biology, University of Münster, Münster, Germany
- Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Mila Varola
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Stella Villanueva
- Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Andrea Ravignani
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
19
Leconstant C, Spitz E. Integrative Model of Human-Animal Interactions: A One Health-One Welfare Systemic Approach to Studying HAI. Front Vet Sci 2022;9:656833. PMID: 35968006; PMCID: PMC9372562; DOI: 10.3389/fvets.2022.656833.
Abstract
The Integrative Model of Human-Animal Interactions (IMHAI) described herewith provides a conceptual framework for the study of interspecies interactions and aims to model the primary emotional processes involved in human-animal interactions. This model was developed from theoretical inputs from three fundamental disciplines for understanding interspecies interactions: neuroscience, psychology and ethology, with the objective of providing a transdisciplinary approach on which field professionals and researchers can build and collaborate. Seminal works in affective neuroscience offer a common basis between humans and animals and, as such, can be applied to the study of interspecies interactions from a One Health-One Welfare perspective. On the one hand, Jaak Panksepp's research revealed that primary/basic emotions originate in the deep subcortical regions of the brain and are shared by all mammals, including humans. On the other hand, several works in the field of neuroscience show that the basic physiological state is largely determined by the perception of safety. Thus, emotional expression reflects the state of an individual's permanent adaptation to ever-changing environmental demands. Based on this evidence and over 5 years of action research using grounded theory, alternating between research and practice, the IMHAI proposes a systemic approach to the study of primary-process emotional affects during interspecies social interactions, through the processes of emotional transfer, embodied communication and interactive emotional regulation. IMHAI aims to generate new hypotheses and predictions on affective behavior and interspecies communication. Application of such a model should promote risk prevention and the establishment of positive links between humans and animals thereby contributing to their respective wellbeing.
20
Urquiza-Haas EG, Kotrschal K. Human-Animal Similarity and the Imageability of Mental State Concepts for Mentalizing Animals. J Cogn Cult 2022. DOI: 10.1163/15685373-12340133.
Abstract
The attribution of mental states (MS) to other species typically follows a scala naturae pattern. However, "simple" mental states, including emotions, sensing, and feelings, are attributed to a wider range of animals as compared to the so-called "higher" cognitive abilities. We propose that such attributions are based on the perceptual quality (i.e. imageability) of mental representations related to MS concepts. We hypothesized that the attribution of highly imageable MS is more dependent on the familiarity of participants with animals than the attribution of MS low in imageability. In addition, we also assessed how animal agreeableness, familiarity with animals, and the type of human-animal interaction related to the judged similarity of animals to humans. Sixty-one participants (19 females, 42 males) with a rural (n = 20) and urban (n = 41) background rated twenty-six wild and domestic animals for their perceived similarity with humans and ability to experience a set of MS: (1) Highly imageable MS: joy, anger, and fear, and (2) MS low in imageability: capacity to plan and deceive. Results show that more agreeable and familiar animals were considered more human-like. Primates, followed by carnivores, suines, ungulates, and rodents, were rated more human-like than xenarthrans, birds, arthropods, and reptiles. Higher MS ratings were given to more similar animals, and more so if the MS attributed were high in imageability. Familiarity with animals was only relevant for the attribution of the MS high in imageability.
Affiliation(s)
- Esmeralda G. Urquiza-Haas
- PhD candidate, Department of Cognitive Biology and Department of Behavioural Biology, University of Vienna, Vienna, Austria
- Kurt Kotrschal
- Retired Professor, Department of Behavioural Biology, University of Vienna, Vienna, Austria
21
Lau JCY, Patel S, Kang X, Nayar K, Martin GE, Choy J, Wong PCM, Losh M. Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study. PLoS One 2022;17:e0269637. PMID: 35675372; PMCID: PMC9176813; DOI: 10.1371/journal.pone.0269637.
Abstract
Differences in speech prosody are a widely observed feature of Autism Spectrum Disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across different languages that demonstrate cross-linguistic variability in prosody. Using a supervised machine-learning analytic approach, we examined acoustic features relevant to rhythmic and intonational aspects of prosody derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages. Our models revealed successful classification of ASD diagnosis using rhythm-relevant features within and across both languages. Classification with intonation-relevant features was significant for English but not Cantonese. Results highlight differences in rhythm as a key prosodic feature impacted in ASD, and also demonstrate important variability in other prosodic properties that appear to be modulated by language-specific differences, such as intonation.
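The abstract does not spell out the classifier or the exact feature set, so the sketch below only illustrates the general shape of such a supervised analysis: a cross-validated linear classifier over a matrix of rhythm-relevant acoustic features. The feature matrix, labels and model choice are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Placeholder feature matrix: one row per narrative sample, columns standing in
# for rhythm-relevant acoustic measures; labels are ASD vs. non-ASD (0/1).
X = rng.normal(size=(80, 6))
y = rng.integers(0, 2, size=80)

# Standardise features, then classify; accuracy is estimated with stratified
# cross-validation so each fold preserves the class balance.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)

# With random placeholder data this hovers near chance (~0.5); real acoustic
# features would be evaluated in exactly the same way.
print("mean cross-validated accuracy:", scores.mean())
```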
Affiliation(s)
- Joseph C. Y. Lau
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Shivani Patel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Xin Kang
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Research Centre for Language, Cognition and Language Application, Chongqing University, Chongqing, China
- School of Foreign Languages and Cultures, Chongqing University, Chongqing, China
- Kritika Nayar
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Gary E. Martin
- Department of Communication Sciences and Disorders, St. John’s University, Staten Island, New York, United States of America
- Jason Choy
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Patrick C. M. Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Molly Losh
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
22
Maigrot AL, Hillmann E, Briefer EF. Cross-species discrimination of vocal expression of emotional valence by Equidae and Suidae. BMC Biol 2022;20:106. PMID: 35606806; PMCID: PMC9128205; DOI: 10.1186/s12915-022-01311-5.
Abstract
BACKGROUND: Discrimination and perception of emotion expression regulate interactions between conspecifics and can lead to emotional contagion (state matching between producer and receiver) or to more complex forms of empathy (e.g., sympathetic concern). Empathy processes are enhanced by familiarity and physical similarity between partners. Since heterospecifics can also be familiar with each other to some extent, discrimination/perception of emotions and, as a result, emotional contagion could also occur between species. RESULTS: Here, we investigated if four species belonging to two ungulate Families, Equidae (domestic and Przewalski's horses) and Suidae (pigs and wild boars), can discriminate between vocalizations of opposite emotional valence (positive or negative), produced not only by conspecifics, but also closely related heterospecifics and humans. To this aim, we played back to individuals of these four species, which were all habituated to humans, vocalizations from a unique set of recordings for which the valence associated with vocal production was known. We found that domestic and Przewalski's horses, as well as pigs, but not wild boars, reacted more strongly when the first vocalization played was negative compared to positive, regardless of the species broadcasted. CONCLUSIONS: Domestic horses, Przewalski's horses and pigs thus seem to discriminate between positive and negative vocalizations produced not only by conspecifics, but also by heterospecifics, including humans. In addition, we found an absence of difference between the strength of reaction of the four species to the calls of conspecifics and closely related heterospecifics, which could be related to similarities in the general structure of their vocalization. Overall, our results suggest that phylogeny and domestication have played a role in cross-species discrimination/perception of emotions.
Affiliation(s)
- Anne-Laure Maigrot
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Division of Animal Welfare, Veterinary Public Health Institute, Vetsuisse Faculty, University of Bern, Länggassstrasse 120, 3012 Bern, Switzerland
- Swiss National Stud Farm, Agroscope, Les Longs-Prés, 1580 Avenches, Switzerland
- Edna Hillmann
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Animal Husbandry and Ethology, Albrecht Daniel Thaer-Institut, Faculty of Life Sciences, Humboldt-Universität zu Berlin, Philippstrasse 13, 10115 Berlin, Germany
- Elodie F Briefer
- Institute of Agricultural Sciences, ETH Zürich, Universitätsstrasse 2, 8092 Zurich, Switzerland
- Centre for Proper Housing of Ruminants and Pigs, Federal Food Safety and Veterinary Office, Agroscope, Tänikon, 8356 Ettenhausen, Switzerland
- Department of Biology, Behavioral Ecology Group, Section for Ecology & Evolution, University of Copenhagen, 2100 Copenhagen Ø, Denmark
| |
Collapse
|
23
|
Reybrouck M, Eerola T. Musical Enjoyment and Reward: From Hedonic Pleasure to Eudaimonic Listening. Behav Sci (Basel) 2022; 12:bs12050154. [PMID: 35621451 PMCID: PMC9137732 DOI: 10.3390/bs12050154] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Revised: 05/11/2022] [Accepted: 05/17/2022] [Indexed: 02/04/2023] Open
Abstract
This article is a hypothesis and theory paper. It elaborates on the possible relation between music as a stimulus and its possible effects, with a focus on the question of why listeners experience pleasure and reward. Though it is tempting to seek a causal relationship, this has proven elusive given the many intermediary variables that intervene between the actual impingement on the senses and the reactions/responses of the listener. A distinction can be made, however, between three elements: (i) an objective description of the acoustic features of the music and their possible role as elicitors; (ii) a description of the possible modulating factors, both external/exogenous and internal/endogenous ones; and (iii) a continuous, real-time description of the responses of the listener, both in terms of their psychological reactions and their physiological correlates. Music listening, in this broadened view, can be considered a multivariate phenomenon of biological, psychological, and cultural factors that, together, shape the overall, full-fledged experience. In addition to an overview of current research on musical enjoyment and reward, we draw attention to some key methodological problems that still complicate a full description of the musical experience. We further elaborate on how listening may entail both adaptive and maladaptive ways of coping with the sounds, with the former allowing a gentle transition from mere hedonic pleasure to eudaimonic enjoyment.
Affiliation(s)
- Mark Reybrouck: Musicology Research Group, Faculty of Arts, KU Leuven-University of Leuven, 3000 Leuven, Belgium; Department of Art History, Musicology and Theatre Studies, Institute for Psychoacoustics and Electronic Music (IPEM), 9000 Ghent, Belgium
- Tuomas Eerola: Department of Music, Durham University, Durham DH1 3RL, UK
24
Massenet M, Anikin A, Pisanski K, Reynaud K, Mathevon N, Reby D. Nonlinear vocal phenomena affect human perceptions of distress, size and dominance in puppy whines. Proc Biol Sci 2022; 289:20220429. [PMID: 35473375 PMCID: PMC9043735 DOI: 10.1098/rspb.2022.0429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
While nonlinear phenomena (NLP) are widely reported in animal vocalizations, often causing perceptual harshness and roughness, their communicative function remains debated. Several hypotheses have been put forward: attention-grabbing, communication of distress, exaggeration of body size and dominance. Here, we use state-of-the-art sound synthesis to investigate how NLP affect the perception of puppy whines by human listeners. Listeners assessed the distress, size or dominance conveyed by synthetic puppy whines with manipulated NLP, including frequency jumps and varying proportions of subharmonics, sidebands and deterministic chaos. We found that the presence of chaos increased the puppy's perceived level of distress and that this effect held across a range of representative fundamental frequency (fo) levels. Adding sidebands and subharmonics also increased perceived distress among listeners who have extensive caregiving experience with pre-weaned puppies (e.g. breeders, veterinarians). Finally, we found that whines with added chaos, subharmonics or sidebands were associated with larger and more dominant puppies, although these biases were attenuated in experienced caregivers. Together, our results show that nonlinear phenomena in puppy whines can convey rich information to human listeners and therefore may be crucial for offspring survival during breeding of a domesticated species.
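To make the manipulation concrete, the sketch below (Python/NumPy) shows the textbook way of introducing subharmonics and sidebands into a harmonic source by amplitude modulation: modulating at half the fundamental adds a subharmonic component, while modulating at a non-harmonic rate adds sidebands. This is only an illustrative toy, not the parametric whine synthesis used in the study; the sample rate, fundamental frequency and modulation depths are arbitrary choices.
```python
import numpy as np

sr = 44100                      # sample rate (Hz)
dur = 0.6                       # whine-like call length (s), arbitrary
f0 = 600.0                      # fundamental frequency (Hz), arbitrary
t = np.arange(int(sr * dur)) / sr

# Harmonic source: a few harmonics of f0 with decaying amplitude
source = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))

# Subharmonics: amplitude modulation at f0/2 adds components at k*f0 +/- f0/2
subharmonic = source * (1.0 + 0.5 * np.sin(2 * np.pi * (f0 / 2) * t))

# Sidebands: modulation at a non-harmonic rate g adds components at k*f0 +/- g
g = 130.0
sidebands = source * (1.0 + 0.5 * np.sin(2 * np.pi * g * t))

# Normalise each variant so it could be written to a WAV file without clipping
for name, y in [("plain", source), ("subharmonics", subharmonic), ("sidebands", sidebands)]:
    y = y / np.max(np.abs(y))
    print(name, "RMS:", round(float(np.sqrt(np.mean(y ** 2))), 3))
```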
Affiliation(s)
- Mathilde Massenet: Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Andrey Anikin: Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; Division of Cognitive Science, University of Lund, 22100 Lund, Sweden
- Katarzyna Pisanski: Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; CNRS, French National Centre for Scientific Research, Laboratoire de Dynamique du Langage, University of Lyon 2, 69007 Lyon, France
- Karine Reynaud: École Nationale Vétérinaire d'Alfort, EnvA, 94700 Maisons-Alfort, France; Physiologie de la Reproduction et des Comportements, CNRS, IFCE, INRAE, University of Tours, PRC, Nouzilly, France
- Nicolas Mathevon: Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; Institut universitaire de France, Paris, France
- David Reby: Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; Institut universitaire de France, Paris, France
25
Kriengwatana BP, Mott R, ten Cate C. Music for animal welfare: a critical review & conceptual framework. Appl Anim Behav Sci 2022. [DOI: 10.1016/j.applanim.2022.105641] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
26
Carlier C, Niemeijer K, Mestdagh M, Bauwens M, Vanbrabant P, Geurts L, van Waterschoot T, Kuppens P. In Search of State and Trait Emotion Markers in Mobile-Sensed Language: Field Study. JMIR Ment Health 2022; 9:e31724. [PMID: 35147507 PMCID: PMC8881775 DOI: 10.2196/31724] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Revised: 09/21/2021] [Accepted: 10/08/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND Emotions and mood are important for overall well-being. Therefore, the search for continuous, effortless emotion prediction methods is an important field of study. Mobile sensing provides a promising tool and can capture one of the most telling signs of emotion: language. OBJECTIVE The aim of this study is to examine the separate and combined predictive value of mobile-sensed language data sources for detecting both momentary emotional experience as well as global individual differences in emotional traits and depression. METHODS In a 2-week experience sampling method study, we collected self-reported emotion ratings and voice recordings 10 times a day, continuous keyboard activity, and trait depression severity. We correlated state and trait emotions and depression and language, distinguishing between speech content (spoken words), speech form (voice acoustics), writing content (written words), and writing form (typing dynamics). We also investigated how well these features predicted state and trait emotions using cross-validation to select features and a hold-out set for validation. RESULTS Overall, the reported emotions and mobile-sensed language demonstrated weak correlations. The most significant correlations were found between speech content and state emotions and between speech form and state emotions, ranging up to 0.25. Speech content provided the best predictions for state emotions. None of the trait emotion-language correlations remained significant after correction. Among the emotions studied, valence and happiness displayed the most significant correlations and the highest predictive performance. CONCLUSIONS Although using mobile-sensed language as an emotion marker shows some promise, correlations and predictive R2 values are low.
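As a rough illustration of this kind of pipeline (feature selection tuned by cross-validation, with a held-out set for validation), the following scikit-learn sketch runs on randomly generated placeholder data; it is not the authors' analysis code, and the feature matrix and valence ratings are invented stand-ins.
```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))   # placeholder: mobile-sensed language features
y = rng.normal(size=500)         # placeholder: momentary valence ratings

# Hold out a validation set; tune the number of selected features by cross-validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
pipe = Pipeline([("select", SelectKBest(f_regression)), ("model", Ridge())])
search = GridSearchCV(pipe, {"select__k": [5, 10, 20, 40]}, cv=5, scoring="r2")
search.fit(X_train, y_train)

print("best k:", search.best_params_["select__k"])
print("held-out R^2:", round(r2_score(y_test, search.predict(X_test)), 3))
```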
Affiliation(s)
- Chiara Carlier: Department of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Koen Niemeijer: Department of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Merijn Mestdagh: Department of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Michael Bauwens: Department of Smart Organisations, University College Leuven-Limburg, Heverlee, Belgium
- Peter Vanbrabant: Department of Smart Organisations, University College Leuven-Limburg, Heverlee, Belgium
- Luc Geurts: Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium
- Toon van Waterschoot: Department of Electrical Engineering, Katholieke Universiteit Leuven, Leuven, Belgium
- Peter Kuppens: Department of Psychology and Educational Sciences, Katholieke Universiteit Leuven, Leuven, Belgium
27
Reybrouck M, Podlipniak P, Welch D. Music Listening and Homeostatic Regulation: Surviving and Flourishing in a Sonic World. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 19:ijerph19010278. [PMID: 35010538 PMCID: PMC8751057 DOI: 10.3390/ijerph19010278] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 11/10/2021] [Accepted: 12/20/2021] [Indexed: 01/01/2023]
Abstract
This paper argues for a biological conception of music listening as an evolutionary achievement that is related to a long history of cognitive and affective-emotional functions, which are grounded in basic homeostatic regulation. Starting from the three levels of description, the acoustic description of sounds, the neurological level of processing, and the psychological correlates of neural stimulation, it conceives of listeners as open systems that are in continuous interaction with the sonic world. By monitoring and altering their current state, they can try to stay within the limits of operating set points in the pursuit of a controlled state of dynamic equilibrium, which is fueled by interoceptive and exteroceptive sources of information. Listening, in this homeostatic view, can be adaptive and goal-directed with the aim of maintaining the internal physiology and directing behavior towards conditions that make it possible to thrive by seeking out stimuli that are valued as beneficial and worthy, or by attempting to avoid those that are annoying and harmful. This calls forth the mechanisms of pleasure and reward, the distinction between pleasure and enjoyment, the twin notions of valence and arousal, the affect-related consequences of music listening, the role of affective regulation and visceral reactions to the sounds, and the distinction between adaptive and maladaptive listening.
Affiliation(s)
- Mark Reybrouck: Faculty of Arts, University of Leuven, 3000 Leuven, Belgium; Department of Art History, Musicology and Theater Studies, IPEM Institute for Psychoacoustics and Electronic Music, 9000 Ghent, Belgium
- Piotr Podlipniak: Institute of Musicology, Adam Mickiewicz University in Poznań, 61-712 Poznan, Poland
- David Welch: Institute Audiology Section, School of Population Health, University of Auckland, Auckland 2011, New Zealand
28
Matzinger T, Fitch WT. Voice modulatory cues to structure across languages and species. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200393. [PMID: 34719253 PMCID: PMC8558770 DOI: 10.1098/rstb.2020.0393] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/05/2021] [Indexed: 12/21/2022] Open
Abstract
Voice modulatory cues such as variations in fundamental frequency, duration and pauses are key factors for structuring vocal signals in human speech and vocal communication in other tetrapods. Voice modulation physiology is highly similar in humans and other tetrapods due to shared ancestry and shared functional pressures for efficient communication. This has led to similarly structured vocalizations across humans and other tetrapods. Nonetheless, in their details, structural characteristics may vary across species and languages. Because data concerning voice modulation in non-human tetrapod vocal production and especially perception are relatively scarce compared to human vocal production and perception, this review focuses on voice modulatory cues used for speech segmentation across human languages, highlighting comparative data where available. Cues that are used similarly across many languages may help indicate which cues may result from physiological or basic cognitive constraints, and which cues may be employed more flexibly and are shaped by cultural evolution. This suggests promising candidates for future investigation of cues to structure in non-human tetrapod vocalizations. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Theresa Matzinger: Department of Behavioral and Cognitive Biology, University of Vienna, 1030 Vienna, Austria; Department of English, University of Vienna, 1090 Vienna, Austria
- W. Tecumseh Fitch: Department of Behavioral and Cognitive Biology, University of Vienna, 1030 Vienna, Austria; Department of English, University of Vienna, 1090 Vienna, Austria
29
Reybrouck M, Vuust P, Brattico E. Neural Correlates of Music Listening: Does the Music Matter? Brain Sci 2021; 11:1553. [PMID: 34942855 PMCID: PMC8699514 DOI: 10.3390/brainsci11121553] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 11/16/2021] [Accepted: 11/18/2021] [Indexed: 11/29/2022] Open
Abstract
The last decades have seen a proliferation of music and brain studies, with a major focus on plastic changes as the outcome of continuous and prolonged engagement with music. Thanks to the advent of neuroaesthetics, research on music cognition has broadened its scope by considering the multifarious phenomenon of listening in all its forms, including incidental listening up to the skillful attentive listening of experts, and all its possible effects. These latter range from objective and sensorial effects directly linked to the acoustic features of the music to the subjectively affective and even transformational effects for the listener. Of special importance is the finding that neural activity in the reward circuit of the brain is a key component of a conscious listening experience. We propose that the connection between music and the reward system makes music listening a gate towards not only hedonia but also eudaimonia, namely a life well lived, full of meaning that aims at realizing one's own "daimon" or true nature. It is argued, further, that music listening, even when conceptualized in this aesthetic and eudaimonic framework, remains a learnable skill that changes the way brain structures respond to sounds and how they interact with each other.
Affiliation(s)
- Mark Reybrouck: Faculty of Arts, University of Leuven, 3000 Leuven, Belgium; Department of Art History, Musicology and Theater Studies, IPEM Institute for Psychoacoustics and Electronic Music, 9000 Ghent, Belgium
- Peter Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, 8000 Aarhus, Denmark; The Royal Academy of Music Aarhus/Aalborg, 8000 Aarhus, Denmark
- Elvira Brattico: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy
30
Effect of pitch range on dogs' response to conspecific vs. heterospecific distress cries. Sci Rep 2021; 11:19723. [PMID: 34611191 PMCID: PMC8492669 DOI: 10.1038/s41598-021-98967-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 09/02/2021] [Indexed: 11/08/2022] Open
Abstract
Distress cries are emitted by many mammal species to elicit caregiving attention. Across taxa, these calls tend to share similar acoustic structures, but not necessarily frequency range, raising the question of their interspecific communicative potential. As domestic dogs are highly responsive to human emotional cues and experience stress when hearing human cries, we explore whether their responses to distress cries from human infants and puppies depend upon sharing the conspecific frequency range or on species-specific call characteristics. We recorded adult dogs' responses to distress cries from puppies and human babies, emitted from a loudspeaker in a basket. The cries were presented both in their natural frequency range and shifted to match the range of the other species. Crucially, regardless of species origin, calls falling into the dog call-frequency range elicited more attention. Thus, domestic dogs' responses depended strongly on frequency range. Females responded both faster and more strongly than males, potentially reflecting asymmetries in parental care investment. Our results suggest that, despite domestication leading to an increased overall responsiveness to human cues, dogs still respond considerably less to calls in the natural human infant range than to calls in the puppy range. Dogs appear to use a fast but inaccurate decision-making process to determine their response to distress-like vocalisations.
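A minimal sketch of the general stimulus manipulation described here, shifting a cry so that its fundamental frequency falls in another species' range, is given below using librosa; the input file name, the 400 Hz target and the pitch-shifting method are illustrative assumptions, not the procedure used in the study.
```python
import numpy as np
import librosa
import soundfile as sf

# Hypothetical input file; any mono cry recording would do
y, sr = librosa.load("infant_cry.wav", sr=None, mono=True)

# Estimate the median fundamental frequency of the original call
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=80, fmax=1200, sr=sr)
f0_median = np.nanmedian(f0)

# Shift the call so its median f0 lands near an (illustrative) 400 Hz target range
target_f0 = 400.0
n_steps = 12 * np.log2(target_f0 / f0_median)   # semitone offset
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=float(n_steps))

sf.write("cry_shifted.wav", y_shifted, sr)
print(f"median f0: {f0_median:.0f} Hz, shifted by {n_steps:+.1f} semitones")
```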
31
Probert R, Bastian A, Elwen SH, James BS, Gridley T. Vocal correlates of arousal in bottlenose dolphins (Tursiops spp.) in human care. PLoS One 2021; 16:e0250913. [PMID: 34469449 PMCID: PMC8409691 DOI: 10.1371/journal.pone.0250913] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 08/19/2021] [Indexed: 02/02/2023] Open
Abstract
Human-controlled regimes can entrain behavioural responses and may impact animal welfare. Understanding the influence of schedules on animal behaviour can therefore be a valuable tool for improving welfare; however, information on behaviour overnight and in the absence of husbandry staff remains scarce. Bottlenose dolphins (Tursiops spp.) are highly social marine mammals and the most common cetacean found in captivity. They communicate using frequency-modulated signature whistles, a whistle type that is individually distinctive and used as a contact call. We investigated the vocalisations of ten dolphins housed in three social groups at uShaka Sea World dolphinarium to determine how patterns in acoustic behaviour link to dolphinarium routines. The investigation focused on overnight behaviour, housing decisions, weekly patterns, and transitional periods between the presence and absence of husbandry staff. Recordings were made from 17h00 to 07h00 over 24 nights, spanning May to August 2018. Whistle (including signature whistle) presence and production rate decreased soon after husbandry staff left the facility, were low overnight, and increased upon staff arrival. Results indicated elevated arousal states particularly associated with the morning feeding regime. Housing in the pool configuration that allowed all social groups to observe staff activities was characterised by an increase in whistle presence and rates. Heightened arousal associated with staff presence was reflected in the structural characteristics of signature whistles, particularly maximum frequency, frequency range and number of whistle loops. We identified individual differences in both the production rate and the structural modification of signature whistles under different contexts. Overall, these results reveal a link between scheduled activity and associated behavioural responses, which can be used as a baseline for future welfare monitoring, where changes from normal behaviour may reflect shifts in welfare state.
Affiliation(s)
- Rachel Probert: Department of Agriculture, Engineering and Science, School of Life Sciences, University of KwaZulu-Natal, Durban, South Africa; Sea Search Research and Conservation NPC, Cape Town, South Africa
- Anna Bastian: Department of Agriculture, Engineering and Science, School of Life Sciences, University of KwaZulu-Natal, Durban, South Africa
- Simon H. Elwen: Sea Search Research and Conservation NPC, Cape Town, South Africa; Department of Botany and Zoology, Faculty of Science, Stellenbosch University, Stellenbosch, South Africa
- Bridget S. James: Sea Search Research and Conservation NPC, Cape Town, South Africa; Department of Botany and Zoology, Faculty of Science, Stellenbosch University, Stellenbosch, South Africa
- Tess Gridley: Sea Search Research and Conservation NPC, Cape Town, South Africa; Department of Botany and Zoology, Faculty of Science, Stellenbosch University, Stellenbosch, South Africa; Department of Statistical Sciences, Centre for Statistics in Ecology, Environment and Conservation, University of Cape Town, Cape Town, Western Cape, South Africa
32
Ćwiek A, Fuchs S, Draxler C, Asu EL, Dediu D, Hiovain K, Kawahara S, Koutalidis S, Krifka M, Lippus P, Lupyan G, Oh GE, Paul J, Petrone C, Ridouane R, Reiter S, Schümchen N, Szalontai Á, Ünal-Logacev Ö, Zeller J, Winter B, Perlman M. Novel vocalizations are understood across cultures. Sci Rep 2021; 11:10108. [PMID: 33980933 PMCID: PMC8115676 DOI: 10.1038/s41598-021-89445-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 04/27/2021] [Indexed: 11/21/2022] Open
Abstract
Linguistic communication requires speakers to mutually agree on the meanings of words, but how does such a system first get off the ground? One solution is to rely on iconic gestures: visual signs whose form directly resembles or otherwise cues their meaning without any previously established correspondence. However, it is debated whether vocalizations could have played a similar role. We report the first extensive cross-cultural study investigating whether people from diverse linguistic backgrounds can understand novel vocalizations for a range of meanings. In two comprehension experiments, we tested whether vocalizations produced by English speakers could be understood by listeners from 28 languages from 12 language families. Listeners from each language were more accurate than chance at guessing the intended referent of the vocalizations for each of the meanings tested. Our findings challenge the often-cited idea that vocalizations have limited potential for iconic representation, demonstrating that in the absence of words people can use vocalizations to communicate a variety of meanings.
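The "more accurate than chance" comparison in studies like this is often a simple binomial test of correct responses against the task's chance level; the sketch below uses SciPy with invented counts and an assumed chance level of 1/6, purely for illustration.
```python
from scipy.stats import binomtest

# Hypothetical counts: 72 correct guesses out of 180 trials,
# with a chance level of 1/6 (e.g., six response options)
result = binomtest(k=72, n=180, p=1 / 6, alternative="greater")
print(f"observed accuracy: {72 / 180:.2f}")
print(f"one-sided p-value vs. chance: {result.pvalue:.2g}")
```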
Affiliation(s)
- Aleksandra Ćwiek: Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117, Berlin, Germany; Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099, Berlin, Germany
- Susanne Fuchs: Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117, Berlin, Germany
- Christoph Draxler: Institute of Phonetics and Speech Processing, Ludwig Maximilian University, 80799, Munich, Germany
- Eva Liina Asu: Institute of Estonian and General Linguistics, University of Tartu, 50090, Tartu, Estonia
- Dan Dediu: Laboratoire Dynamique Du Langage UMR 5596, Université Lumière Lyon 2, 69363, Lyon, France
- Katri Hiovain: Department of Digital Humanities, University of Helsinki, 00014, Helsinki, Finland
- Shigeto Kawahara: The Institute of Cultural and Linguistic Studies, Keio University, Mita Minatoku, Tokyo, 108-8345, Japan
- Sofia Koutalidis: Faculty of Linguistics and Literary Studies, Bielefeld University, 33615, Bielefeld, Germany
- Manfred Krifka: Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117, Berlin, Germany; Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099, Berlin, Germany
- Pärtel Lippus: Institute of Estonian and General Linguistics, University of Tartu, 50090, Tartu, Estonia
- Gary Lupyan: Department of Psychology, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Grace E Oh: Department of English Language and Literature, Konkuk University, Seoul, 05029, South Korea
- Jing Paul: Asian Studies Program, Agnes Scott College, Decatur, GA, 30030, USA
- Caterina Petrone: Aix-Marseille Université, CNRS, Laboratoire Parole et Langage, UMR 7309, 13100, Aix-en-Provence, France
- Rachid Ridouane: Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS & Sorbonne Nouvelle, 75005, Paris, France
- Sabine Reiter: Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117, Berlin, Germany
- Nathalie Schümchen: Department of Language and Communication, University of Southern Denmark, 5230, Odense, Denmark
- Ádám Szalontai: Department of Phonetics, Hungarian Research Centre for Linguistics, Budapest, 1068, Hungary
- Özlem Ünal-Logacev: School of Health Sciences, Department of Speech and Language Therapy, Istanbul Medipol University, 34810, Istanbul, Turkey
- Jochen Zeller: School of Arts, Linguistics Discipline, University of KwaZulu-Natal, Durban, 4041, South Africa
- Bodo Winter: Department of English Language & Linguistics, University of Birmingham, Birmingham, B15 2TT, UK
- Marcus Perlman: Department of English Language & Linguistics, University of Birmingham, Birmingham, B15 2TT, UK
33
Bainbridge CM, Bertolo M, Youngers J, Atwood S, Yurdum L, Simson J, Lopez K, Xing F, Martin A, Mehr SA. Infants relax in response to unfamiliar foreign lullabies. Nat Hum Behav 2021; 5:256-264. [PMID: 33077883 PMCID: PMC8220405 DOI: 10.1038/s41562-020-00963-z] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 09/11/2020] [Indexed: 12/14/2022]
Abstract
Music is characterized by acoustic forms that are predictive of its behavioural functions. For example, adult listeners accurately identify unfamiliar lullabies as infant-directed on the basis of their musical features alone. This property could reflect a function of listeners' experiences, the basic design of the human mind, or both. Here, we show that US infants (N = 144) relax in response to eight unfamiliar foreign lullabies, relative to matched non-lullaby songs from other foreign societies, as indexed by heart rate, pupillometry and electrodermal activity. They do so consistently throughout the first year of life, suggesting that the response is not a function of their musical experiences, which are limited relative to those of adults. The infants' parents overwhelmingly chose lullabies as the songs that they would use to calm their fussy infant, despite their unfamiliarity. Together, these findings suggest that infants may be predisposed to respond to common features of lullabies found in different cultures.
Affiliation(s)
- Mila Bertolo: Department of Psychology, Harvard University, Cambridge, MA, USA
- Julie Youngers: Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- S Atwood: Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Psychology, University of Washington, Seattle, WA, USA
- Lidya Yurdum: Department of Psychology, Harvard University, Cambridge, MA, USA
- Jan Simson: Department of Psychology, Harvard University, Cambridge, MA, USA
- Kelsie Lopez: Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Feng Xing: Department of Psychology, Harvard University, Cambridge, MA, USA; Department of Education, Johns Hopkins University, Baltimore, MD, USA
- Alia Martin: School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Samuel A Mehr: Department of Psychology, Harvard University, Cambridge, MA, USA; School of Psychology, Victoria University of Wellington, Wellington, New Zealand; Data Science Initiative, Harvard University, Cambridge, MA, USA
34
Abstract
Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
35
Abstract
Researchers examining nonverbal communication of emotions are becoming increasingly interested in differentiations between different positive emotional states like interest, relief, and pride. But despite the importance of the voice in communicating emotion in general and positive emotion in particular, there is to date no systematic review of what characterizes vocal expressions of different positive emotions. Furthermore, integration and synthesis of current findings are lacking. In this review, we comprehensively review studies (N = 108) investigating acoustic features relating to specific positive emotions in speech prosody and nonverbal vocalizations. We find that happy voices are generally loud with considerable variability in loudness, have high and variable pitch, and are high in the first two formant frequencies. When specific positive emotions are directly compared with each other, pitch mean, loudness mean, and speech rate differ across positive emotions, with patterns mapping onto clusters of emotions, so-called emotion families. For instance, pitch is higher for epistemological emotions (amusement, interest, relief), moderate for savouring emotions (contentment and pleasure), and lower for a prosocial emotion (admiration). Some, but not all, of the differences in acoustic patterns also map on to differences in arousal levels. We end by pointing to limitations in extant work and making concrete proposals for future research on positive emotions in the voice.
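Several of the acoustic features discussed (pitch mean and variability, loudness mean and variability) can be extracted with standard tools; the sketch below uses librosa on hypothetical recordings and is only a coarse approximation of the feature definitions used across the reviewed studies.
```python
import numpy as np
import librosa

def acoustic_profile(path):
    """Coarse acoustic profile: mean/SD of f0 (Hz) and of RMS loudness (dB)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=600, sr=sr)
    rms_db = librosa.amplitude_to_db(librosa.feature.rms(y=y)[0], ref=1.0)
    return {
        "f0_mean": float(np.nanmean(f0)),
        "f0_sd": float(np.nanstd(f0)),
        "loudness_mean_db": float(np.mean(rms_db)),
        "loudness_sd_db": float(np.std(rms_db)),
    }

# Hypothetical recordings of two positive vocalisations
for path in ["amusement_01.wav", "relief_01.wav"]:
    print(path, acoustic_profile(path))
```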
36
Filippi P. Emotional Voice Intonation: A Communication Code at the Origins of Speech Processing and Word-Meaning Associations? JOURNAL OF NONVERBAL BEHAVIOR 2020. [DOI: 10.1007/s10919-020-00337-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
The aim of the present work is to investigate the facilitating effect of vocal emotional intonation on the evolution of the following processes involved in language: (a) identifying and producing phonemes, (b) processing compositional rules underlying vocal utterances, and (c) associating vocal utterances with meanings. To this end, firstly, I examine research on the presence of these abilities in animals, and the biologically ancient nature of emotional vocalizations. Secondly, I review research attesting to the facilitating effect of emotional voice intonation on these abilities in humans. Thirdly, building on these studies in animals and humans, and through taking an evolutionary perspective, I provide insights for future empirical work on the facilitating effect of emotional intonation on these three processes in animals and preverbal humans. In this work, I highlight the importance of a comparative approach to investigate language evolution empirically. This review supports Darwin’s hypothesis, according to which the ability to express emotions through voice modulation was a key step in the evolution of spoken language.
37
Abstract
Prior investigations have demonstrated that people tend to link pseudowords such as bouba to rounded shapes and kiki to spiky shapes, but the cognitive processes underlying this matching bias have remained controversial. Here, we present three experiments underscoring the fundamental role of emotional mediation in this sound–shape mapping. Using stimuli from key previous studies, we found that kiki-like pseudowords and spiky shapes, compared with bouba-like pseudowords and rounded shapes, consistently elicit higher levels of affective arousal, which we assessed through both subjective ratings (Experiment 1, N = 52) and acoustic models implemented on the basis of pseudoword material (Experiment 2, N = 70). Crucially, the mediating effect of arousal generalizes to novel pseudowords (Experiment 3, N = 64, which was preregistered). These findings highlight the role that human emotion may play in language development and evolution by grounding associations between abstract concepts (e.g., shapes) and linguistic signs (e.g., words) in the affective system.
Affiliation(s)
- Arash Aryani: Department of Education and Psychology, Freie Universität Berlin; Center for Cognitive Neuroscience Berlin, Freie Universität Berlin
- Morten H Christiansen: Department of Psychology, Cornell University; Interacting Minds Centre, Aarhus University; School of Communication and Culture, Aarhus University
38
Kamiloğlu RG, Slocombe KE, Haun DBM, Sauter DA. Human listeners' perception of behavioural context and core affect dimensions in chimpanzee vocalizations. Proc Biol Sci 2020; 287:20201148. [PMID: 32546102 PMCID: PMC7329049 DOI: 10.1098/rspb.2020.1148] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options, and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners were accurate at matching vocalizations of most contexts, in addition to inferring arousal and valence. Judgments were more accurate for negative than for positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relied on noisiness in making context judgements, and used pitch to infer core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
Affiliation(s)
- Roza G Kamiloğlu: Department of Psychology, University of Amsterdam, REC G, Nieuwe Achtergracht 129B, 1001 NK, Amsterdam, The Netherlands
- Daniel B M Haun: Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Disa A Sauter: Department of Psychology, University of Amsterdam, REC G, Nieuwe Achtergracht 129B, 1001 NK, Amsterdam, The Netherlands
39
Artificial sounds following biological rules: A novel approach for non-verbal communication in HRI. Sci Rep 2020; 10:7080. [PMID: 32341387 PMCID: PMC7184580 DOI: 10.1038/s41598-020-63504-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Accepted: 03/11/2020] [Indexed: 11/29/2022] Open
Abstract
Emotionally expressive non-verbal vocalizations can play a major role in human-robot interactions. Humans can assess the intensity and emotional valence of animal vocalizations based on simple acoustic features such as call length and fundamental frequency. These simple encoding rules are suggested to be general across terrestrial vertebrates. To test the degree of this generalizability, our aim was to synthesize a set of artificial sounds by systematically changing call length and fundamental frequency, and to examine how emotional valence and intensity are attributed to them by humans. Based on sine wave sounds, we generated sound samples in seven categories of increasing complexity by incorporating different characteristics of animal vocalizations. We used an online questionnaire to measure the perceived emotional valence and intensity of the sounds in a two-dimensional model of emotions. The results show that sounds with low fundamental frequency and shorter call lengths were considered to have a more positive valence, and that samples with high fundamental frequency were rated as more intense across all categories, regardless of sound complexity. We conclude that applying the basic rules of vocal emotion encoding can be a good starting point for the development of novel non-verbal vocalizations for artificial agents.
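A minimal sketch of this kind of parametric stimulus grid, sine-based sounds whose call length and fundamental frequency are varied systematically, might look as follows; the particular frequency and duration values, the onset/offset ramps and the file naming are illustrative assumptions rather than the materials used in the study.
```python
import numpy as np
import soundfile as sf

sr = 44100
fundamentals = [150, 300, 600, 1200]     # Hz, illustrative grid
call_lengths = [0.1, 0.3, 0.6, 1.0]      # s, illustrative grid

def make_tone(f0, dur, sr=sr, fade=0.01):
    """Sine tone of given fundamental and duration with short onset/offset ramps."""
    t = np.arange(int(sr * dur)) / sr
    y = np.sin(2 * np.pi * f0 * t)
    ramp = np.minimum(1.0, np.minimum(t, dur - t) / fade)   # linear fade in/out
    return 0.8 * y * ramp

for f0 in fundamentals:
    for dur in call_lengths:
        sf.write(f"tone_f{f0}Hz_{int(dur * 1000)}ms.wav", make_tone(f0, dur), sr)
```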
40
Abstract
To ensure that listeners pay attention and do not habituate, emotionally intense vocalizations may be under evolutionary pressure to exploit processing biases in the auditory system by maximising their bottom-up salience. This "salience code" hypothesis was tested using 128 human nonverbal vocalizations representing eight emotions: amusement, anger, disgust, effort, fear, pain, pleasure, and sadness. As expected, within each emotion category salience ratings derived from pairwise comparisons strongly correlated with perceived emotion intensity. For example, while laughs as a class were less salient than screams of fear, salience scores almost perfectly explained the perceived intensity of both amusement and fear considered separately. Validating self-rated salience evaluations, high- vs. low-salience sounds caused 25% more recall errors in a short-term memory task, whereas emotion intensity had no independent effect on recall errors. Furthermore, the acoustic characteristics of salient vocalizations were similar to those previously described for non-emotional sounds (greater duration and intensity, high pitch, bright timbre, rapid modulations, and variable spectral characteristics), confirming that vocalizations were not salient merely because of their emotional content. The acoustic code in nonverbal communication is thus aligned with sensory biases, offering a general explanation for some non-arbitrary properties of human and animal high-arousal vocalizations.
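One standard way of turning pairwise "which sound stands out more?" judgements into a per-stimulus salience score is a Bradley-Terry model fitted by iterative scaling; the sketch below runs on an invented win matrix and is not the scaling procedure reported in the study.
```python
import numpy as np

# wins[i, j] = number of trials in which stimulus i was judged more salient than j
rng = np.random.default_rng(1)
n = 6
wins = rng.integers(1, 10, size=(n, n))
np.fill_diagonal(wins, 0)

# Bradley-Terry strengths via the classic minorisation-maximisation updates
strength = np.ones(n)
for _ in range(200):
    total = wins + wins.T                                   # comparisons per pair
    denom = (total / (strength[:, None] + strength[None, :])).sum(axis=1)
    strength = wins.sum(axis=1) / denom
    strength /= strength.sum()                              # fix the overall scale

salience_score = np.log(strength)                           # log-strengths as scores
print(np.round(salience_score, 2))
```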
Affiliation(s)
- Andrey Anikin: Division of Cognitive Science, Lund University, Lund, Sweden
41
Hissing like a snake: bird hisses are similar to snake hisses and prompt similar anxiety behavior in a mammalian model. Behav Ecol Sociobiol 2019. [DOI: 10.1007/s00265-019-2778-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
42
Was That a Scream? Listener Agreement and Major Distinguishing Acoustic Features. JOURNAL OF NONVERBAL BEHAVIOR 2019. [DOI: 10.1007/s10919-019-00325-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
43
Massen JJ, Behrens F, Martin JS, Stocker M, Brosnan SF. A comparative approach to affect and cooperation. Neurosci Biobehav Rev 2019; 107:370-387. [DOI: 10.1016/j.neubiorev.2019.09.027] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Revised: 09/16/2019] [Accepted: 09/19/2019] [Indexed: 12/31/2022]
44
Hoemann K, Crittenden AN, Msafiri S, Liu Q, Li C, Roberson D, Ruark GA, Gendron M, Barrett LF. Context facilitates performance on a classic cross-cultural emotion perception task. Emotion 2019; 19:1292-1313. [PMID: 30475026 PMCID: PMC6535382 DOI: 10.1037/emo0000501] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
The majority of studies designed to assess cross-cultural emotion perception use a choice-from-array task in which participants are presented with brief emotion stories and asked to choose between target and foil cues. This task has been widely criticized, evoking a lively and prolonged debate about whether it inadvertently helps participants to perform better than they otherwise would, resulting in the appearance of universality. In 3 studies, we provide a strong test of the hypothesis that the classic choice-from-array task constitutes a potent source of context that shapes performance. Participants from a remote small-scale (the Hadza hunter-gatherers of Tanzania) and 2 urban industrialized (China and the United States) cultural samples selected target vocalizations that were contrived for 6 non-English, nonuniversal emotion categories at levels significantly above chance. In studies of anger, disgust, fear, happiness, sadness, and surprise, above chance performance is interpreted as evidence of universality. These studies support the hypothesis that choice-from-array tasks encourage evidence for cross-cultural emotion perception. We discuss these findings with reference to the history of cross-cultural emotion perception studies, and suggest several processes that may, together, give rise to the appearance of universal emotions. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Affiliation(s)
- Katie Hoemann: Northeastern University, Department of Psychology, 360 Huntington Ave, Boston, MA, USA 02115
- Alyssa N. Crittenden: University of Nevada, Las Vegas, Department of Anthropology, 4505 S. Maryland Pkwy, Las Vegas, NV, USA 89154
- Qiang Liu: Liaoning Normal University, Research Center of Brain and Cognitive Neuroscience, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, China 116021
- Chaojie Li: Liaoning Normal University, Research Center of Brain and Cognitive Neuroscience, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, China 116021
- Debi Roberson: University of Essex, Department of Psychology, Wivenhoe Park, Colchester, England, UK CO4 3SQ
- Gregory A. Ruark: U.S. Army Research Institute for the Behavioral and Social Sciences, Foundational Science Research Unit (FSRU), 6000 6 St (Bldg 1464/Mail Stop 5610), Fort Belvoir, VA, USA 22060-5610
- Maria Gendron: Northeastern University, Department of Psychology, 360 Huntington Ave, Boston, MA, USA 02115
- Lisa Feldman Barrett: Northeastern University, Department of Psychology, 360 Huntington Ave, Boston, MA, USA 02115; Massachusetts General Hospital/Department of Psychiatry and Martinos Center for Biomedical Imaging, 149 13 St, Charlestown, MA, USA 02129
45
Filippi P, Hoeschele M, Spierings M, Bowling DL. Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function. Ann N Y Acad Sci 2019; 1453:99-113. [DOI: 10.1111/nyas.14228] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Revised: 08/09/2019] [Accepted: 08/13/2019] [Indexed: 12/11/2022]
Affiliation(s)
- Piera Filippi: Laboratoire Parole et Langage, LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Institute of Language, Communication and the Brain, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Laboratoire de Psychologie Cognitive, LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Marisa Hoeschele: Acoustics Research Institute, Austrian Academy of Science, Vienna, Austria; Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Daniel L. Bowling: Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California
46
Sievers B, Lee C, Haslett W, Wheatley T. A multi-sensory code for emotional arousal. Proc Biol Sci 2019; 286:20190513. [PMID: 31288695 DOI: 10.1098/rspb.2019.0513] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
People express emotion using their voice, face and movement, as well as through abstract forms as in art, architecture and music. The structure of these expressions often seems intuitively linked to their meaning: romantic poetry is written in flowery curlicues, while the logos of death metal bands use spiky script. Here, we show that these associations are universally understood because they are signalled using a multi-sensory code for emotional arousal. Specifically, variation in the central tendency of the frequency spectrum of a stimulus (its spectral centroid) is used by signal senders to express emotional arousal, and by signal receivers to make emotional arousal judgements. We show that this code is used across sounds, shapes, speech and human body movements, providing a strong multi-sensory signal that can be used to efficiently estimate an agent's level of emotional arousal.
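The spectral centroid referred to here is simply the amplitude-weighted mean frequency of a signal's spectrum; a minimal NumPy version is sketched below, with synthetic tones standing in for the stimuli.
```python
import numpy as np

def spectral_centroid(y, sr):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

sr = 44100
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 220 * t)                                         # mellow tone
high = np.sin(2 * np.pi * 220 * t) + 0.8 * np.sin(2 * np.pi * 3000 * t)   # brighter tone

print("low-centroid signal:", round(spectral_centroid(low, sr)), "Hz")
print("high-centroid signal:", round(spectral_centroid(high, sr)), "Hz")
```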
Affiliation(s)
- Beau Sievers: Department of Psychology, Harvard University, Cambridge, MA 02138, USA
- Caitlyn Lee: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- William Haslett: Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Thalia Wheatley: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
47
Lattenkamp EZ, Shields SM, Schutte M, Richter J, Linnenschmidt M, Vernes SC, Wiegrebe L. The Vocal Repertoire of Pale Spear-Nosed Bats in a Social Roosting Context. Front Ecol Evol 2019. [DOI: 10.3389/fevo.2019.00116] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
48
Cowen AS, Laukka P, Elfenbein HA, Liu R, Keltner D. The primacy of categories in the recognition of 12 emotions in speech prosody across two cultures. Nat Hum Behav 2019; 3:369-382. [PMID: 30971794 PMCID: PMC6687085 DOI: 10.1038/s41562-019-0533-6] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Accepted: 01/15/2019] [Indexed: 12/30/2022]
Abstract
Central to emotion science is the degree to which categories, such as Awe, or broader affective features, such as Valence, underlie the recognition of emotional expression. To explore the processes by which people recognize emotion from prosody, US and Indian participants were asked to judge the emotion categories or affective features communicated by 2,519 speech samples produced by 100 actors from 5 cultures. With large-scale statistical inference methods, we find that prosody can communicate at least 12 distinct kinds of emotion that are preserved across the 2 cultures. Analyses of the semantic and acoustic structure of the recognition of emotions reveal that emotion categories drive the recognition of emotions more so than affective features, including Valence. In contrast to discrete emotion theories, however, emotion categories are bridged by gradients representing blends of emotions. Our findings, visualized within an interactive map, reveal a complex, high-dimensional space of emotional states recognized cross-culturally in speech prosody.
Affiliation(s)
- Alan S Cowen: Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Petri Laukka: Department of Psychology, Stockholm University, Stockholm, Sweden
- Runjing Liu: Department of Statistics, University of California, Berkeley, Berkeley, CA, USA
- Dacher Keltner: Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
49
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292 PMCID: PMC6468545 DOI: 10.3390/brainsci9030053] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 02/18/2019] [Accepted: 02/26/2019] [Indexed: 11/24/2022] Open
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) The relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect burst in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck: Musicology Research Group, KU Leuven-University of Leuven, 3000 Leuven, Belgium; IPEM-Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak: Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
50
Friel M, Kunc HP, Griffin K, Asher L, Collins LM. Positive and negative contexts predict duration of pig vocalisations. Sci Rep 2019; 9:2062. [PMID: 30765788 PMCID: PMC6375976 DOI: 10.1038/s41598-019-38514-w] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Accepted: 12/19/2018] [Indexed: 01/28/2023] Open
Abstract
Emotions are mental states occurring in response to external and internal stimuli and thus form an integral part of an animal's behaviour. Emotions can be mapped in two dimensions based on their arousal and valence. Whilst good indicators of arousal exist, clear indicators of emotional valence, particularly positive valence, are still rare. However, positively valenced emotions may play a crucial role in social interactions in many species, and thus an understanding of how emotional valence is expressed is needed. Vocalisations are a potential indicator of emotional valence as they can reflect the internal state of the caller. We experimentally manipulated valence, using positive and negative cognitive bias trials, to quantify changes in pig vocalisations. We found that grunts were shorter in positive trials than in negative trials. Interestingly, we did not find the differences in the other measured acoustic parameters between the positive and negative contexts that have been reported in previous studies. These differences in results suggest that acoustic parameters may differ in their sensitivity as indicators of emotional valence. However, it is important to understand how similar contexts are, in terms of their valence, to be able to fully understand how and when acoustic parameters reflect emotional states.
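Call duration of the kind compared here can be approximated from a recording by thresholding short-term energy; the sketch below uses librosa with hypothetical file names and an arbitrary threshold, and is not the measurement pipeline used in the study.
```python
import numpy as np
import librosa

def call_duration(path, threshold_db=-30.0, frame_length=1024, hop_length=256):
    """Rough call duration (s): time between first and last frame above an RMS threshold."""
    y, sr = librosa.load(path, sr=None, mono=True)
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)[0]
    rms_db = librosa.amplitude_to_db(rms, ref=np.max(rms))
    above = np.flatnonzero(rms_db > threshold_db)
    if above.size == 0:
        return 0.0
    return (above[-1] - above[0]) * hop_length / sr

# Hypothetical file lists of grunts recorded in positive and negative trials
for label, files in [("positive", ["grunt_pos_01.wav"]), ("negative", ["grunt_neg_01.wav"])]:
    durations = [call_duration(f) for f in files]
    print(label, "mean duration (s):", round(float(np.mean(durations)), 3))
```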
Affiliation(s)
- Mary Friel: School of Biological Sciences, Queen's University Belfast, Medical Biology Centre, Belfast, UK; Faculty of Biological Sciences, University of Leeds, Leeds, UK
- Hansjoerg P Kunc: School of Biological Sciences, Queen's University Belfast, Medical Biology Centre, Belfast, UK
- Kym Griffin: School of Animal Rural & Environmental Sciences, Nottingham Trent University, Nottingham, UK
- Lucy Asher: Centre for Behaviour and Evolution, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Lisa M Collins: Faculty of Biological Sciences, University of Leeds, Leeds, UK; School of Life Sciences, University of Lincoln, Lincoln, UK