1
Zapata-Cardona J, Ceballos MC, Rodríguez BDJ. Music and Emotions in Non-Human Animals from Biological and Comparative Perspectives. Animals (Basel) 2024;14:1491. [PMID: 38791707; PMCID: PMC11117248; DOI: 10.3390/ani14101491]
Abstract
The effects of sound stimulation as a sensorial environmental enrichment for captive animals have been studied. When appropriately implemented for farm animals, it can improve welfare, health, and productivity. Furthermore, there are indications that music can induce positive emotions in non-human animals, similar to humans. Emotion is a functional state of the organism involving both physiological processes, mediated by neuroendocrine regulation, and changes in behavior, affecting various aspects, including contextual perception and welfare. As there is very limited information on non-human animals, the objective of this review is to highlight what is known about these processes from human biological and comparative perspectives and stimulate future research on using music to improve animal welfare.
Affiliation(s)
- Juliana Zapata-Cardona
- Grupo de Investigación Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín 50010, Colombia
- Maria Camila Ceballos
- Faculty of Veterinary Medicine, University of Calgary, Clinical Skills Building, 11877-85th Street NW, Calgary, AB T3R 1J3, Canada
- Berardo de Jesús Rodríguez
- Grupo de Investigación Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín 50010, Colombia
2
Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024;38:277-295. [PMID: 37997898; PMCID: PMC11057848; DOI: 10.1080/02699931.2023.2285854]
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
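The two analyses named in this abstract are standard. For concreteness, here is a minimal Python sketch of both: a d' computation for the yes/no matching task and Wagner's (1993) unbiased hit rate for the forced-choice task. The confusion counts below are invented for illustration and are not the study's data.

```python
# Minimal sketch (not the authors' code) of the two analyses described.
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5) guards against proportions of 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)

def unbiased_hit_rates(confusion):
    """Wagner's unbiased hit rate for a contexts x responses count matrix:
    Hu[i] = A[i, i]**2 / (row_total[i] * column_total[i])."""
    confusion = np.asarray(confusion, dtype=float)
    diag = np.diag(confusion)
    return diag**2 / (confusion.sum(axis=1) * confusion.sum(axis=0))

# Hypothetical data: 3 contexts x 3 response options (counts of choices).
conf = [[40, 5, 5],
        [8, 30, 12],
        [6, 10, 34]]
print(d_prime(hits=40, misses=10, false_alarms=14, correct_rejections=86))
print(unbiased_hit_rates(conf))
```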
Affiliation(s)
- Roza G. Kamiloğlu
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
3
Signals and cues of social groups. Behav Brain Sci 2022;45:e100. [PMID: 35796370; DOI: 10.1017/s0140525x21001461]
Abstract
A crucial factor in how we perceive social groups involves the signals and cues emitted by them. Groups signal various properties of their constitution through coordinated behaviors across sensory modalities, influencing receivers' judgments of the group and subsequent interactions. We argue that group communication is a necessary component of a comprehensive computational theory of social groups.
4
Abstract
There is a lack of clarity on whether pigs can emotionally respond to musical stimulation and whether that response is related to music structure. Qualitative Behavioral Assessment (QBA) was used to evaluate the effects of 16 musical pieces, distinct in harmonic structure, on emotional responses in nursery pigs (n = 30) during four periods: "habituation", "treatments", "breaks" and "final". Data were evaluated using principal component analysis (PCA). Two principal components (PCs) were considered in the analysis: PC1, characterized as a positive emotions index, included the emotional responses content, playful, sociable, and happy, whereas PC2, characterized as a negative emotions index, included fearful, inquisitive, and uneasy with positive loadings, and relaxed and calm with negative loadings. Musical stimulation (treatment) increased (P < 0.01) both emotional indices compared to the other periods, and this response was influenced by the harmonic characteristics of the music. We concluded that pigs have a wide variety of emotional responses, with different affective states related to the music structure used, providing evidence of music's potential use as environmental enrichment for this species.
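As a concrete illustration of the analysis pipeline sketched in this abstract (PCA over QBA term scores to derive emotion indices), a minimal Python sketch follows. The QBA scores are random placeholders and the term list is only the subset quoted above, so the loadings will not reproduce the paper's PC1/PC2.

```python
# Minimal sketch (not the study's code): deriving emotion indices from
# QBA term scores via PCA. Scores are random placeholders on a 0-125 mm
# visual-analogue scale; the term list is the subset quoted above.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

terms = ["content", "playful", "sociable", "happy",
         "fearful", "inquisitive", "uneasy", "relaxed", "calm"]
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.uniform(0, 125, size=(30, len(terms))),
                      columns=terms)  # 30 pigs x QBA terms

pca = PCA(n_components=2)
indices = pca.fit_transform(StandardScaler().fit_transform(scores))
loadings = pd.DataFrame(pca.components_.T, index=terms, columns=["PC1", "PC2"])

print(loadings.round(2))     # sign/size of loadings: interpret PCs as
                             # positive- vs negative-emotion indices
print(indices[:3].round(2))  # per-animal scores on the two indices
```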
5
Abstract
We discuss approaches to the study of the evolution of music (sect. R1); challenges to each of the two theories of the origins of music presented in the companion target articles (sect. R2); future directions for testing them (sect. R3); and priorities for better understanding the nature of music (sect. R4).
Affiliation(s)
- Samuel A Mehr
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; https://projects.iq.harvard.edu/epl
- Data Science Initiative, Harvard University, Cambridge, MA 02138, USA
- School of Psychology, Victoria University of Wellington, Wellington 6012, New Zealand
- Max M Krasnow
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; https://projects.iq.harvard.edu/epl
- Gregory A Bryant
- Department of Communication, University of California Los Angeles, Los Angeles, CA 90095, USA; http://gabryant.bol.ucla.edu
- Center for Behavior, Evolution, and Culture, University of California Los Angeles, Los Angeles, CA 90095, USA
- Edward H Hagen
- Department of Anthropology, Washington State University, Vancouver, WA 98686, USA; https://anthro.vancouver.wsu.edu/people/hagen
6
Dell’Anna A, Leman M, Berti A. Musical Interaction Reveals Music as Embodied Language. Front Neurosci 2021;15:667838. [PMID: 34335155; PMCID: PMC8317642; DOI: 10.3389/fnins.2021.667838]
Abstract
Life and social sciences often focus on the social nature of music (and language alike). In biology, for example, the three main evolutionary hypotheses about music (i.e., sexual selection, parent-infant bond, and group cohesion) stress its intrinsically social character (Honing et al., 2015). Neurobiology thereby has investigated the neuronal and hormonal underpinnings of musicality for more than two decades (Chanda and Levitin, 2013; Salimpoor et al., 2015; Mehr et al., 2019). In line with these approaches, the present paper aims to suggest that the proper way to capture the social interactive nature of music (and, before it, musicality), is to conceive of it as an embodied language, rooted in culturally adapted brain structures (Clarke et al., 2015; D'Ausilio et al., 2015). This proposal heeds Ian Cross' call for an investigation of music as an "interactive communicative process" rather than "a manifestation of patterns in sound" (Cross, 2014), with an emphasis on its embodied and predictive (coding) aspects (Clark, 2016; Leman, 2016; Koelsch et al., 2019). In the present paper our goal is: (i) to propose a framework of music as embodied language based on a review of the major concepts that define joint musical action, with a particular emphasis on embodied music cognition and predictive processing, along with some relevant neural underpinnings; (ii) to summarize three experiments conducted in our laboratories (and recently published), which provide evidence for, and can be interpreted according to, the new conceptual framework. In doing so, we draw on both cognitive musicology and neuroscience to outline a comprehensive framework of musical interaction, exploring several aspects of making music in dyads, from a very basic proto-musical action, like tapping, to more sophisticated contexts, like playing a jazz standard and singing a hocket melody. Our framework combines embodied and predictive features, revolving around the concept of joint agency (Pacherie, 2012; Keller et al., 2016; Bolt and Loehr, 2017). If social interaction is the "default mode" by which human brains communicate with their environment (Hari et al., 2015), music and musicality conceived of as an embodied language may arguably provide a route toward its navigation.
Affiliation(s)
- Alessandro Dell’Anna
- Department of Art, Music, and Theatre Sciences, IPEM, Ghent University, Ghent, Belgium
- SAMBA Research Group, Department of Psychology, University of Turin, Turin, Italy
- Marc Leman
- Department of Art, Music, and Theatre Sciences, IPEM, Ghent University, Ghent, Belgium
- Annamaria Berti
- SAMBA Research Group, Department of Psychology, University of Turin, Turin, Italy
7
Bryant GA, Wang CS, Fusaroli R. Recognizing affiliation in colaughter and cospeech. R Soc Open Sci 2020;7:201092. [PMID: 33204467; PMCID: PMC7657881; DOI: 10.1098/rsos.201092]
Abstract
Theories of vocal signalling in humans typically only consider communication within the interactive group and ignore intergroup dynamics. Recent work has found that colaughter generated between pairs of people in conversation can afford accurate judgements of affiliation across widely disparate cultures, and the acoustic features that listeners use to make these judgements are linked to speaker arousal. But to what extent does colaughter inform third party listeners beyond other dynamic information between interlocutors such as overlapping talk? We presented listeners with short segments (1-3 s) of colaughter and simultaneous speech (i.e. cospeech) taken from natural conversations between established friends and newly acquainted strangers. Participants judged whether the pairs of interactants in the segments were friends or strangers. Colaughter afforded more accurate judgements of affiliation than did cospeech, despite cospeech segments being, on average, more than twice the duration of the colaughter segments. Sped-up versions of colaughter and cospeech (proxies of speaker arousal) did not improve accuracy for identifying either friends or strangers, but faster versions of both modes increased the likelihood of tokens being judged as being between friends. Overall, results are consistent with research showing that laughter is well suited to transmit rich information about social relationships to third-party overhearers - a signal that works between, and not just within, conversational groups.
Affiliation(s)
- Gregory A. Bryant
- Department of Communication, University of California, Los Angeles, CA, USA
- Center for Behavior, Evolution, and Culture, University of California, Los Angeles, CA, USA
- Christine S. Wang
- Department of Communication, University of California, Los Angeles, CA, USA
- Riccardo Fusaroli
- Department of Communication and Culture, Aarhus University, Denmark
- Interacting Minds Center, Aarhus University, Denmark
8
Hofstetter RW, Copp BE, Lukic I. Acoustic noise of refrigerators promote increased growth rate of the gray mold Botrytis cinerea. J Food Saf 2020. [DOI: 10.1111/jfs.12856]
Affiliation(s)
- Brennan E. Copp
- School of Forestry, Northern Arizona University, Flagstaff, Arizona, USA
- Division of Biological Sciences, University of Missouri, Columbia, Missouri, USA
- Ivan Lukic
- School of Forestry, Northern Arizona University, Flagstaff, Arizona, USA
9
Abstract
Music comprises a diverse category of cognitive phenomena that likely represent both the effects of psychological adaptations that are specific to music (e.g., rhythmic entrainment) and the effects of adaptations for non-musical functions (e.g., auditory scene analysis). How did music evolve? Here, we show that prevailing views on the evolution of music - that music is a byproduct of other evolved faculties, evolved for social bonding, or evolved to signal mate quality - are incomplete or wrong. We argue instead that music evolved as a credible signal in at least two contexts: coalitional interactions and infant care. Specifically, we propose that (1) the production and reception of coordinated, entrained rhythmic displays is a co-evolved system for credibly signaling coalition strength, size, and coordination ability; and (2) the production and reception of infant-directed song is a co-evolved system for credibly signaling parental attention to secondarily altricial infants. These proposals, supported by interdisciplinary evidence, suggest that basic features of music, such as melody and rhythm, result from adaptations in the proper domain of human music. The adaptations provide a foundation for the cultural evolution of music in its actual domain, yielding the diversity of musical forms and musical behaviors found worldwide.
Affiliation(s)
- Samuel A Mehr
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; https://projects.iq.harvard.edu/epl
- Data Science Initiative, Harvard University, Cambridge, MA 02138, USA
- School of Psychology, Victoria University of Wellington, Wellington 6012, New Zealand
- Max M Krasnow
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; https://projects.iq.harvard.edu/epl
- Gregory A Bryant
- Department of Communication, University of California Los Angeles, Los Angeles, CA 90095, USA; https://gabryant.bol.ucla.edu
- Center for Behavior, Evolution, and Culture, University of California Los Angeles, Los Angeles, CA 90095, USA
- Edward H Hagen
- Department of Anthropology, Washington State University, Vancouver, WA 98686, USA; https://anthro.vancouver.wsu.edu/people/hagen
10
Filippi P. Emotional Voice Intonation: A Communication Code at the Origins of Speech Processing and Word-Meaning Associations? J Nonverbal Behav 2020. [DOI: 10.1007/s10919-020-00337-z]
Abstract
The aim of the present work is to investigate the facilitating effect of vocal emotional intonation on the evolution of the following processes involved in language: (a) identifying and producing phonemes, (b) processing compositional rules underlying vocal utterances, and (c) associating vocal utterances with meanings. To this end, firstly, I examine research on the presence of these abilities in animals, and the biologically ancient nature of emotional vocalizations. Secondly, I review research attesting to the facilitating effect of emotional voice intonation on these abilities in humans. Thirdly, building on these studies in animals and humans, and through taking an evolutionary perspective, I provide insights for future empirical work on the facilitating effect of emotional intonation on these three processes in animals and preverbal humans. In this work, I highlight the importance of a comparative approach to investigate language evolution empirically. This review supports Darwin’s hypothesis, according to which the ability to express emotions through voice modulation was a key step in the evolution of spoken language.
11
Reybrouck M, Podlipniak P, Welch D. Music Listening as Coping Behavior: From Reactive Response to Sense-Making. Behav Sci (Basel) 2020;10:E119. [PMID: 32698450; PMCID: PMC7407588; DOI: 10.3390/bs10070119]
Abstract
Coping is a survival mechanism of living organisms. It is not merely reactive, but also involves making sense of the environment by rendering sensory information into percepts that have meaning in the context of an organism's cognitions. Music listening, on the other hand, is a complex task that embraces sensory, physiological, behavioral, and cognitive levels of processing. Being both a dispositional process that relies on our evolutionary toolkit for coping with the world and a more elaborated skill for sense-making, it goes beyond primitive action-reaction couplings by the introduction of higher-order intermediary variables between sensory input and effector reactions. Consideration of music-listening from the perspective of coping treats music as a sound environment and listening as a process that involves exploration of this environment as well as interactions with the sounds. Several issues are considered in this regard such as the conception of music as a possible stressor, the role of adaptive listening, the relation between coping and reward, the importance of self-regulation strategies in the selection of music, and the instrumental meaning of music in the sense that it can be used to modify the internal and external environment of the listener.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, Faculty of Arts, KU Leuven-University of Leuven, 3000 Leuven, Belgium
- IPEM, Department of Art History, Musicology and Theatre Studies, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, 61-712 Poznań, Poland
- David Welch
- Audiology Section, School of Population Health, University of Auckland, Auckland 2011, New Zealand
12
Trevor C, Arnal LH, Frühholz S. Terrifying film music mimics alarming acoustic feature of human screams. J Acoust Soc Am 2020;147:EL540. [PMID: 32611175; DOI: 10.1121/10.0001459]
Abstract
One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses for four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. Results support the hypothesis that a key feature of human screams (roughness) is imitated by scream-like music and could potentially signal danger through both music and the voice.
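The roughness feature referred to here is fast amplitude modulation of the waveform. Below is a minimal sketch of one common way to quantify it: relative power of the amplitude envelope's modulation spectrum in a ~30-150 Hz band. The band and recipe are assumptions drawn from related work on screams, not this paper's exact pipeline.

```python
# Sketch of a roughness index: relative power of the amplitude envelope's
# modulation spectrum in the ~30-150 Hz band (band choice assumed from
# related scream literature, not taken from this paper).
import numpy as np
from scipy.signal import hilbert

def roughness_index(x, sr, band=(30.0, 150.0)):
    envelope = np.abs(hilbert(x))          # amplitude envelope
    envelope = envelope - envelope.mean()  # drop DC before the FFT
    spectrum = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()  # relative band power

# Toy check: a 440 Hz tone amplitude-modulated at 70 Hz ("rough") scores
# higher than the same tone with slow (4 Hz) modulation.
sr = 16000
t = np.arange(sr) / sr
rough = (1 + 0.8 * np.sin(2 * np.pi * 70 * t)) * np.sin(2 * np.pi * 440 * t)
smooth = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
print(roughness_index(rough, sr), roughness_index(smooth, sr))
```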
Affiliation(s)
- Caitlyn Trevor
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland
- Luc H Arnal
- Department of Fundamental Neuroscience, University of Geneva, Biotech Campus, Geneva 7, CH-1202, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland
13
Huron D, Vuoskoski JK. On the Enjoyment of Sad Music: Pleasurable Compassion Theory and the Role of Trait Empathy. Front Psychol 2020;11:1060. [PMID: 32547455; PMCID: PMC7270397; DOI: 10.3389/fpsyg.2020.01060]
Abstract
Drawing on recent empirical studies on the enjoyment of nominally sad music, a general theory of the pleasure of tragic or sad portrayals is presented. Not all listeners enjoy sad music. Multiple studies indicate that those individuals who enjoy sad music exhibit a particular pattern of empathic traits. These individuals score high on empathic concern (compassion) and high on imaginative absorption (fantasy), with only nominal personal distress (commiseration). Empirical studies are reviewed implicating compassion as a positively valenced affect. Accordingly, individuals who most enjoy sad musical portrayals experience a pleasurable prosocial affect (compassion), amplified by empathetic engagement (fantasy), while experiencing only nominal levels of unpleasant emotional contagion (commiseration). It is suggested that this pattern of trait empathy may apply more broadly, accounting for many other situations where spectators experience pleasure when exposed to tragic representations or portrayals.
Affiliation(s)
- David Huron
- Center for Cognitive and Brain Sciences & School of Music, The Ohio State University, Columbus, OH, United States
- Jonna K. Vuoskoski
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
14
Liuni M, Ponsot E, Bryant GA, Aucouturier JJ. Sound context modulates perceived vocal emotion. Behav Processes 2020;172:104042. [PMID: 31926279; DOI: 10.1016/j.beproc.2020.104042]
Abstract
Many animal vocalizations contain nonlinear acoustic phenomena as a consequence of physiological arousal. In humans, nonlinear features are processed early in the auditory system, and are used to efficiently detect alarm calls and other urgent signals. Yet, high-level emotional and semantic contextual factors likely guide the perception and evaluation of roughness features in vocal sounds. Here we examined the relationship between perceived vocal arousal and auditory context. We presented listeners with nonverbal vocalizations (yells of a single vowel) at varying levels of portrayed vocal arousal, in two musical contexts (clean guitar, distorted guitar) and one non-musical context (modulated noise). As predicted, vocalizations with higher levels of portrayed vocal arousal were judged as more negative and more emotionally aroused than the same voices produced with low vocal arousal. Moreover, both the perceived valence and emotional arousal of vocalizations were significantly affected by both musical and non-musical contexts. These results show the importance of auditory context in judging emotional arousal and valence in voices and music, and suggest that nonlinear features in music are processed similarly to communicative vocal signals.
Affiliation(s)
- Marco Liuni
- STMS Lab (IRCAM/CNRS/Sorbonne Universités), France.
- Emmanuel Ponsot
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, Université PSL, CNRS, Paris, France; STMS Lab (IRCAM/CNRS/Sorbonne Universités), France
- Gregory A Bryant
- UCLA Department of Communication, United States; UCLA Center for Behavior, Evolution, and Culture, United States
15
Filippi P, Hoeschele M, Spierings M, Bowling DL. Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function. Ann N Y Acad Sci 2019;1453:99-113. [DOI: 10.1111/nyas.14228]
Affiliation(s)
- Piera Filippi
- Laboratoire Parole et Langage, LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France
- Institute of Language, Communication and the Brain, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France
- Laboratoire de Psychologie Cognitive, LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Marisa Hoeschele
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Daniel L. Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA
16
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019;9:E53. [PMID: 30832292; PMCID: PMC6468545; DOI: 10.3390/brainsci9030053]
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) The relationship between speech and music with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect burst in communicative sound comprehension; and (v) the acoustic features of affective sound with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, KU Leuven-University of Leuven, 3000 Leuven, Belgium, and IPEM, Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
17
Jégh-Czinege N, Faragó T, Pongrácz P. A bark of its own kind – the acoustics of ‘annoying’ dog barks suggests a specific attention-evoking effect for humans. Bioacoustics 2019. [DOI: 10.1080/09524622.2019.1576147]
Affiliation(s)
- Tamás Faragó
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Péter Pongrácz
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
18
Wood A, Niedenthal P. Developing a social functional account of laughter. Soc Personal Psychol Compass 2018. [DOI: 10.1111/spc3.12383]
19
Mehr SA, Singh M, York H, Glowacki L, Krasnow MM. Form and Function in Human Song. Curr Biol 2018;28:356-368.e5. [PMID: 29395919; DOI: 10.1016/j.cub.2017.12.042]
Abstract
Humans use music for a variety of social functions: we sing to accompany dance, to soothe babies, to heal illness, to communicate love, and so on. Across animal taxa, vocalization forms are shaped by their functions, including in humans. Here, we show that vocal music exhibits recurrent, distinct, and cross-culturally robust form-function relations that are detectable by listeners across the globe. In Experiment 1, internet users (n = 750) in 60 countries listened to brief excerpts of songs, rating each song's function on six dimensions (e.g., "used to soothe a baby"). Excerpts were drawn from a geographically stratified pseudorandom sample of dance songs, lullabies, healing songs, and love songs recorded in 86 mostly small-scale societies, including hunter-gatherers, pastoralists, and subsistence farmers. Experiment 1 and its analysis plan were pre-registered. Despite participants' unfamiliarity with the societies represented, the random sampling of each excerpt, their very short duration (14 s), and the enormous diversity of this music, the ratings demonstrated accurate and cross-culturally reliable inferences about song functions on the basis of song forms alone. In Experiment 2, internet users (n = 1,000) in the United States and India rated three contextual features (e.g., gender of singer) and seven musical features (e.g., melodic complexity) of each excerpt. The songs' contextual features were predictive of Experiment 1 function ratings, but musical features and the songs' actual functions explained unique variance in function ratings. These findings are consistent with the existence of universal links between form and function in vocal music.
Affiliation(s)
- Samuel A Mehr
- Department of Psychology, Harvard University, 33 Kirkland St., Cambridge, MA 02138, USA; Data Science Initiative, Harvard University, 1350 Massachusetts Ave., Cambridge, MA 02138, USA; School of Psychology, Victoria University of Wellington, Kelburn Parade, Wellington 6012, New Zealand.
- Manvir Singh
- Department of Human Evolutionary Biology, Harvard University, Peabody Museum, 11 Divinity Ave., Cambridge, MA 02138, USA
- Hunter York
- Department of Human Evolutionary Biology, Harvard University, Peabody Museum, 11 Divinity Ave., Cambridge, MA 02138, USA
- Luke Glowacki
- Institute for Advanced Study in Toulouse, 21 Allée de Brienne, 31015 Toulouse, France; Department of Anthropology, Pennsylvania State University, 410 Carpenter Building, University Park, PA 16802, USA
- Max M Krasnow
- Department of Psychology, Harvard University, 33 Kirkland St., Cambridge, MA 02138, USA
20
Filippi P, Gogoleva SS, Volodina EV, Volodin IA, de Boer B. Humans identify negative (but not positive) arousal in silver fox vocalizations: implications for the adaptive value of interspecific eavesdropping. Curr Zool 2017;63:445-456. [PMID: 29492004; PMCID: PMC5804197; DOI: 10.1093/cz/zox035]
Abstract
The ability to identify emotional arousal in heterospecific vocalizations may facilitate behaviors that increase survival opportunities. Crucially, this ability may orient inter-species interactions, particularly between humans and other species. Research shows that humans identify emotional arousal in vocalizations across multiple species, such as cats, dogs, and piglets. However, no previous study has addressed humans’ ability to identify emotional arousal in silver foxes. Here, we adopted low- and high-arousal calls emitted by three strains of silver fox—Tame, Aggressive, and Unselected—in response to human approach. Tame and Aggressive foxes are genetically selected for friendly and attacking behaviors toward humans, respectively. Unselected foxes show aggressive and fearful behaviors toward humans. These three strains show similar levels of emotional arousal, but different levels of emotional valence in relation to humans. This emotional information is reflected in the acoustic features of the calls. Our data suggest that humans can identify high-arousal calls of Aggressive and Unselected foxes, but not of Tame foxes. Further analyses revealed that, although within each strain different acoustic parameters affect human accuracy in identifying high-arousal calls, spectral center of gravity, harmonic-to-noise ratio, and F0 best predict humans’ ability to discriminate high-arousal calls across all strains. Furthermore, we identified in spectral center of gravity and F0 the best predictors for humans’ absolute ratings of arousal in each call. Implications for research on the adaptive value of inter-specific eavesdropping are discussed.
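As an illustration of the final analysis step described (testing how well the named acoustic features predict listeners' identification of high-arousal calls), here is a minimal logistic-regression sketch. All data are invented placeholders; only the three predictor names come from the abstract.

```python
# Sketch: which acoustic features predict correct identification of
# high-arousal calls. Data are invented; the predictors are the ones the
# abstract names (spectral centre of gravity, HNR, F0).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200  # hypothetical calls
X = pd.DataFrame({
    "spectral_cog_hz": rng.normal(1500, 400, n),
    "hnr_db": rng.normal(12, 4, n),
    "f0_hz": rng.normal(450, 120, n),
})
# Hypothetical outcome: 1 = listeners correctly flagged the call as
# high-arousal; generated with a known dependence on the features.
logit = 0.002 * X["spectral_cog_hz"] - 0.1 * X["hnr_db"] + 0.004 * X["f0_hz"] - 3
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

Xs = StandardScaler().fit_transform(X)  # standardize -> comparable coefficients
model = LogisticRegression().fit(Xs, y)
print(dict(zip(X.columns, model.coef_[0].round(3))))
print("CV accuracy:", cross_val_score(LogisticRegression(), Xs, y, cv=5).mean())
```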
Affiliation(s)
- Piera Filippi
- Artificial Intelligence Laboratory, Department of Computer Science, Faculty of Science, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium; Brain and Language Research Institute, Aix-Marseille University, Avenue Pasteur 5, 13604 Aix-en-Provence, France; Max Planck Institute for Psycholinguistics, Department of Language and Cognition, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Svetlana S Gogoleva
- Department of Vertebrate Zoology, Faculty of Biology, Lomonosov Moscow State University, Vorobievy Gory 1/12, 119991 Moscow, Russia
- Elena V Volodina
- Scientific Research Department, Moscow Zoo, B. Gruzinskaya 1, 123242 Moscow, Russia
- Ilya A Volodin
- Department of Vertebrate Zoology, Faculty of Biology, Lomonosov Moscow State University, Vorobievy Gory 1/12, 119991 Moscow, Russia; Scientific Research Department, Moscow Zoo, B. Gruzinskaya 1, 123242 Moscow, Russia
- Bart de Boer
- Artificial Intelligence Laboratory, Department of Computer Science, Faculty of Science, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
21
Reybrouck M, Eerola T. Music and Its Inductive Power: A Psychobiological and Evolutionary Approach to Musical Emotions. Front Psychol 2017;8:494. [PMID: 28421015; PMCID: PMC5378764; DOI: 10.3389/fpsyg.2017.00494]
Abstract
The aim of this contribution is to broaden the concept of musical meaning from an abstract and emotionally neutral cognitive representation to an emotion-integrating description that is related to the evolutionary approach to music. Starting from the dispositional machinery for dealing with music as a temporal and sounding phenomenon, musical emotions are considered as adaptive responses to be aroused in human beings as the product of neural structures that are specialized for their processing. A theoretical and empirical background is provided in order to bring together the findings of music and emotion studies and the evolutionary approach to musical meaning. The theoretical grounding elaborates on the transition from referential to affective semantics, the distinction between expression and induction of emotions, and the tension between discrete-digital and analog-continuous processing of the sounds. The empirical background provides evidence from several findings such as infant-directed speech, referential emotive vocalizations and separation calls in lower mammals, the distinction between the acoustic and vehicle mode of sound perception, and the bodily and physiological reactions to the sounds. It is argued, finally, that early affective processing reflects the way emotions make our bodies feel, which in turn reflects on the emotions expressed and decoded. As such there is a dynamic tension between nature and nurture, which is reflected in the nature-nurture-nature cycle of musical sense-making.
Affiliation(s)
- Mark Reybrouck
- Faculty of Arts, Musicology Research Group, KU Leuven - University of Leuven, Leuven, Belgium
22
Schirmer A, Adolphs R. Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence. Trends Cogn Sci 2017;21:216-228. [PMID: 28173998; DOI: 10.1016/j.tics.2017.01.001]
Abstract
Historically, research on emotion perception has focused on facial expressions, and findings from this modality have come to dominate our thinking about other modalities. Here we examine emotion perception through a wider lens by comparing facial with vocal and tactile processing. We review stimulus characteristics and ensuing behavioral and brain responses and show that audition and touch do not simply duplicate visual mechanisms. Each modality provides a distinct input channel and engages partly nonoverlapping neuroanatomical systems with different processing specializations (e.g., specific emotions versus affect). Moreover, processing of signals across the different modalities converges, first into multi- and later into amodal representations that enable holistic emotion judgments.
Affiliation(s)
- Annett Schirmer
- Chinese University of Hong Kong, Hong Kong; Max Planck Institute for Human Cognitive and Brain Sciences, Germany; National University of Singapore, Singapore.
- Ralph Adolphs
- California Institute of Technology, Pasadena, CA, USA
23
Roberts SGB, Roberts AI. Social Brain Hypothesis: Vocal and Gesture Networks of Wild Chimpanzees. Front Psychol 2016;7:1756. [PMID: 27933005; PMCID: PMC5121241; DOI: 10.3389/fpsyg.2016.01756]
Abstract
A key driver of brain evolution in primates and humans is the cognitive demands arising from managing social relationships. In primates, grooming plays a key role in maintaining these relationships, but the time that can be devoted to grooming is inherently limited. Communication may act as an additional, more time-efficient bonding mechanism to grooming, but how patterns of communication are related to patterns of sociality is still poorly understood. We used social network analysis to examine the associations between close proximity (duration of time spent within 10 m per hour spent in the same party), grooming, vocal communication, and gestural communication (duration of time and frequency of behavior per hour spent within 10 m) in wild chimpanzees. This study examined hypotheses formulated a priori and the results were not corrected for multiple testing. Chimpanzees had differentiated social relationships, with focal chimpanzees maintaining some level of proximity to almost all group members, but directing gestures at and grooming with a smaller number of preferred social partners. Pairs of chimpanzees that had high levels of close proximity had higher rates of grooming. Importantly, higher rates of gestural communication were also positively associated with levels of proximity, and specifically gestures associated with affiliation (greeting, gesture to mutually groom) were related to proximity. Synchronized low-intensity pant-hoots were also positively related to proximity in pairs of chimpanzees. Further, there were differences in the size of individual chimpanzees' proximity networks—the number of social relationships they maintained with others. Focal chimpanzees with larger proximity networks had a higher rate of both synchronized low-intensity pant-hoots and synchronized high-intensity pant-hoots. These results suggest that in addition to grooming, both gestures and synchronized vocalizations may play key roles in allowing chimpanzees to manage a large and differentiated set of social relationships. Gestures may be important in reducing the aggression arising from being in close proximity to others, allowing for proximity to be maintained for longer and facilitating grooming. Vocalizations may allow chimpanzees to communicate with a larger number of recipients than gestures and the synchronized nature of the pant-hoot calls may facilitate social bonding of more numerous social relationships. As group sizes increased through human evolution, both gestures and synchronized vocalizations may have played important roles in bonding social relationships in a more time-efficient manner than grooming.
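Dyadic network measures like these are non-independent, so associations between behavioural networks (e.g., proximity vs. grooming) are typically tested with node-permutation methods. The sketch below uses a Mantel-type permutation test on invented matrices; the specific test is an assumption for illustration, not necessarily the authors' procedure.

```python
# Sketch: dyad-level association between two behavioural networks,
# tested with a Mantel-style node-permutation test. Matrices are
# invented placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n = 12  # hypothetical chimpanzees
base = rng.random((n, n))
proximity = (base + base.T) / 2  # symmetric dyadic rates
grooming = 0.6 * proximity + 0.4 * rng.random((n, n))
grooming = (grooming + grooming.T) / 2
np.fill_diagonal(proximity, 0)
np.fill_diagonal(grooming, 0)

iu = np.triu_indices(n, k=1)  # unique dyads only

def mantel(a, b, n_perm=5000):
    """Correlate dyad values, then permute node identities of one matrix
    to get a null distribution that respects dyadic non-independence."""
    observed = np.corrcoef(a[iu], b[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if np.corrcoef(a[np.ix_(p, p)][iu], b[iu])[0, 1] >= observed:
            count += 1
    return observed, count / n_perm

r, pval = mantel(proximity, grooming)
print(f"dyadic r = {r:.2f}, permutation p = {pval:.4f}")
```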
Affiliation(s)
- Anna I Roberts
- Department of Psychology, University of Chester, Chester, UK
24
Richter J, Ostovar R. "It Don't Mean a Thing if It Ain't Got that Swing" - an Alternative Concept for Understanding the Evolution of Dance and Music in Human Beings. Front Hum Neurosci 2016;10:485. [PMID: 27774058; PMCID: PMC5054692; DOI: 10.3389/fnhum.2016.00485]
Abstract
The functions of dance and music in human evolution are a mystery. Current research on the evolution of music has mainly focused on its melodic attribute which would have evolved alongside (proto-)language. Instead, we propose an alternative conceptual framework which focuses on the co-evolution of rhythm and dance (R&D) as intertwined aspects of a multimodal phenomenon characterized by the unity of action and perception. Reviewing the current literature from this viewpoint we propose the hypothesis that R&D have co-evolved long before other musical attributes and (proto-)language. Our view is supported by increasing experimental evidence particularly in infants and children: beat is perceived and anticipated already by newborns and rhythm perception depends on body movement. Infants and toddlers spontaneously move to a rhythm irrespective of their cultural background. The impulse to dance may have been prepared by the susceptibility of infants to be soothed by rocking. Conceivable evolutionary functions of R&D include sexual attraction and transmission of mating signals. Social functions include bonding, synchronization of many individuals, appeasement of hostile individuals, and pre- and extra-verbal communication enabling embodied individual and collective memorizing. In many cultures R&D are used for entering trance, a base for shamanism and early religions. Individual benefits of R&D include improvement of body coordination, as well as painkilling, anti-depressive, and anti-boredom effects. Rhythm most likely paved the way for human speech as supported by studies confirming the overlaps between cognitive and neural resources recruited for language and rhythm. In addition, dance encompasses visual and gestural communication. In future studies attention should be paid to which attribute of music is focused on and that the close mutual relation between R&D is taken into account. The possible evolutionary functions of dance deserve more attention.
Affiliation(s)
- Joachim Richter
- Institute of Tropical Medicine and International Health, Charité Universitätsmedizin Berlin, Berlin, Germany
25
Filippi P. Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Front Psychol 2016;7:1393. [PMID: 27733835; PMCID: PMC5039945; DOI: 10.3389/fpsyg.2016.01393]
Abstract
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody - and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
Affiliation(s)
- Piera Filippi
- Department of Artificial Intelligence, Vrije Universiteit Brussel, Brussels, Belgium
26
Roberts AI, Roberts SGB. Gestural Communication and Mating Tactics in Wild Chimpanzees. PLoS One 2015;10:e0139683. [PMID: 26536467; PMCID: PMC4633128; DOI: 10.1371/journal.pone.0139683]
Abstract
The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller) chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees.
Affiliation(s)
- Anna Ilona Roberts
- Department of Psychology, University of Chester, Parkgate Road, Chester, United Kingdom
27
Abstract
Ackermann et al. briefly point out the potential significance of coordinated vocal behavior in the dual pathway model of acoustic communication. Rhythmically entrained and articulated pre-linguistic vocal activity in early hominins might have set the evolutionary stage for later refinements that manifest in modern humans as language-based conversational turn-taking, joint music-making, and other behaviors associated with prosociality.
28
Bhatara A, Laukka P, Levitin DJ. Expression of emotion in music and vocal communication: Introduction to the research topic. Front Psychol 2014;5:399. [PMID: 24829557; PMCID: PMC4017128; DOI: 10.3389/fpsyg.2014.00399]
Affiliation(s)
- Anjali Bhatara
- Sorbonne Paris Cité, Université Paris Descartes, Paris, France; Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Daniel J Levitin
- Department of Psychology, McGill University, Montreal, QC, Canada