1. Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024; 15:1350631. PMID: 38966733; PMCID: PMC11223574; DOI: 10.3389/fpsyg.2024.1350631.
Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, as well as culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.
Affiliation(s)
- Alan S. Cowen: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka: Advanced Telecommunications Research Institute, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani: Advanced Telecommunications Research Institute, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli: Google Research, Mountain View, CA, United States; Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou: Google Research, Mountain View, CA, United States
- Hartwig Adam: Google Research, Mountain View, CA, United States
- Disa Sauter: Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang: Zhejiang University, Zhejiang, China
- Kunalan Manokara: Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh: Hume AI, New York, NY, United States
- Dacher Keltner: Hume AI, New York, NY, United States; Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
2. Ferrari C, Arioli M, Atias D, Merabet LB, Cattaneo Z. Perception and discrimination of real-life emotional vocalizations in early blind individuals. Front Psychol 2024; 15:1386676. PMID: 38784630; PMCID: PMC11112099; DOI: 10.3389/fpsyg.2024.1386676.
Abstract
Introduction: The capacity to understand others' emotions and react accordingly is a key social ability. However, it may be compromised in the case of a profound sensory loss that limits the contribution of available contextual cues (e.g., facial expression, gestures, body posture) to interpreting emotions expressed by others. In this study, we specifically investigated whether early blindness affects the capacity to interpret emotional vocalizations, whose valence may be difficult to recognize without a meaningful context. Methods: We asked a group of early blind (N = 22) and sighted controls (N = 22) to evaluate the valence and the intensity of spontaneous fearful and joyful non-verbal vocalizations. Results: Our data showed that emotional vocalizations presented alone (i.e., with no contextual information) are similarly ambiguous for blind and sighted individuals but are perceived as more intense by the former, possibly reflecting their higher saliency when visual experience is unavailable. Discussion: Our study contributes to a better understanding of how sensory experience shapes emotion recognition.
Affiliation(s)
- Chiara Ferrari: Department of Humanities, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Maria Arioli: Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Doron Atias: Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Lotfi B. Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States
- Zaira Cattaneo: Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
3. Trevizan-Baú P, Stanić D, Furuya WI, Dhingra RR, Dutschmann M. Neuroanatomical frameworks for volitional control of breathing and orofacial behaviors. Respir Physiol Neurobiol 2024; 323:104227. PMID: 38295924; DOI: 10.1016/j.resp.2024.104227.
Abstract
Breathing is the only vital function that can be volitionally controlled. However, a detailed understanding of how volitional (cortical) motor commands can transform vital breathing activity into adaptive breathing patterns that accommodate orofacial behaviors such as swallowing, vocalization or sniffing remains to be developed. Recent neuroanatomical tract tracing studies have identified patterns and origins of descending forebrain projections that target brain nuclei involved in laryngeal adductor function, which is critically involved in orofacial behavior. These nuclei include the midbrain periaqueductal gray and nuclei of the respiratory rhythm and pattern generating network in the brainstem, specifically including the pontine Kölliker-Fuse nucleus and the pre-Bötzinger complex in the medulla oblongata. This review discusses the functional implications of the forebrain-brainstem anatomical connectivity that could underlie the volitional control and coordination of orofacial behaviors with breathing.
Affiliation(s)
- Pedro Trevizan-Baú: The Florey Institute, University of Melbourne, Victoria, Australia; Department of Physiological Sciences, University of Florida, Gainesville, FL, USA
- Davor Stanić: The Florey Institute, University of Melbourne, Victoria, Australia
- Werner I Furuya: The Florey Institute, University of Melbourne, Victoria, Australia
- Rishi R Dhingra: The Florey Institute, University of Melbourne, Victoria, Australia; Division of Pulmonary, Critical Care and Sleep Medicine, Case Western Reserve University, Cleveland, OH, USA
- Mathias Dutschmann: The Florey Institute, University of Melbourne, Victoria, Australia; Division of Pulmonary, Critical Care and Sleep Medicine, Case Western Reserve University, Cleveland, OH, USA
4. Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024; 38:277-295. PMID: 37997898; PMCID: PMC11057848; DOI: 10.1080/02699931.2023.2285854.
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates exceeding random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
Affiliation(s)
- Roza G. Kamiloğlu: Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Disa A. Sauter: Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
5. Lettieri G, Handjaras G, Cappello EM, Setti F, Bottari D, Bruno V, Diano M, Leo A, Tinti C, Garbarini F, Pietrini P, Ricciardi E, Cecchetti L. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. Sci Adv 2024; 10:eadk6840. PMID: 38457501; PMCID: PMC10923499; DOI: 10.1126/sciadv.adk6840.
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states where sensory inputs during development shape its functioning.
Affiliation(s)
- Giada Lettieri: Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium; Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giacomo Handjaras: Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Elisa M. Cappello: Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Francesca Setti: Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari: Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Matteo Diano: Department of Psychology, University of Turin, Turin, Italy
- Andrea Leo: Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Carla Tinti: Department of Psychology, University of Turin, Turin, Italy
- Pietro Pietrini: Forensic Neuroscience and Psychiatry Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Emiliano Ricciardi: Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy; Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Luca Cecchetti: Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
6. Mortillaro M, Schlegel K. Embracing the Emotion in Emotional Intelligence Measurement: Insights from Emotion Theory and Research. J Intell 2023; 11:210. PMID: 37998709; PMCID: PMC10672494; DOI: 10.3390/jintelligence11110210.
Abstract
Emotional intelligence (EI) has gained significant popularity as a scientific construct over the past three decades, yet its conceptualization and measurement still face limitations. Applied EI research often overlooks its components, treating it as a global characteristic, and there are few widely used performance-based tests for assessing ability EI. The present paper proposes avenues for advancing ability EI measurement by connecting the main EI components to models and theories from the emotion science literature and related fields. For emotion understanding and emotion recognition, we discuss the implications of basic emotion theory, dimensional models, and appraisal models of emotion for creating stimuli, scenarios, and response options. For the regulation and management of one's own and others' emotions, we discuss how the process model of emotion regulation and its extensions to interpersonal processes can inform the creation of situational judgment items. In addition, we emphasize the importance of incorporating context, cross-cultural variability, and attentional and motivational factors into future models and measures of ability EI. We hope this article will foster exchange among scholars in the fields of ability EI, basic emotion science, social cognition, and emotion regulation, leading to an enhanced understanding of the individual differences in successful emotional functioning and communication.
Affiliation(s)
- Marcello Mortillaro: Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
- Katja Schlegel: Institute of Psychology, University of Bern, 3012 Bern, Switzerland
7. Ziereis A, Schacht A. Motivated attention and task relevance in the processing of cross-modally associated faces: Behavioral and electrophysiological evidence. Cogn Affect Behav Neurosci 2023; 23:1244-1266. PMID: 37353712; PMCID: PMC10545602; DOI: 10.3758/s13415-023-01112-5.
Abstract
It has repeatedly been shown that visually presented stimuli can gain additional relevance by their association with affective stimuli. Studies have shown effects of associated affect in event-related potentials (ERP) like the early posterior negativity (EPN), late positive complex (LPC), and even earlier components such as the P1 or N170. However, findings are mixed as to the extent to which associated affect requires directed attention to the emotional quality of a stimulus and which ERP components are sensitive to task instructions during retrieval. In this preregistered study (https://osf.io/ts4pb), we tested cross-modal associations of vocal affect-bursts (positive, negative, neutral) to faces displaying neutral expressions in a flash-card-like learning task, in which participants studied face-voice pairs and learned to correctly assign them to each other. In the subsequent EEG test session, we applied both an implicit ("old-new") and explicit ("valence-classification") task to investigate whether the behavior at retrieval and neurophysiological activation of the affect-based associations were dependent on the type of motivated attention. We collected behavioral and neurophysiological data from 40 participants who reached the preregistered learning criterion. Results showed EPN effects of associated negative valence after learning, independent of the task. In contrast, modulations of later stages (LPC) by positive and negative associated valence were restricted to the explicit, i.e., valence-classification, task. These findings highlight the importance of the task at different processing stages and show that cross-modal affect can successfully be associated with faces.
Affiliation(s)
- Annika Ziereis: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
- Anne Schacht: Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
8. Paletz SBF, Golonka EM, Pandža NB, Stanton G, Ryan D, Adams N, Rytting CA, Murauskaite EE, Buntain C, Johns MA, Bradley P. Social media emotions annotation guide (SMEmo): Development and initial validity. Behav Res Methods 2023. PMID: 37697206; DOI: 10.3758/s13428-023-02195-1.
Abstract
The proper measurement of emotion is vital to understanding the relationship between emotional expression in social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts' content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country, in-language, on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than with English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 different emotions in Study 1 and 16 different emotions in Study 2. This research improves the measurement of emotions in social media to include more dimensions, multimedia, and context compared to prior schemes.
Affiliation(s)
- Susannah B F Paletz: College of Information Studies, University of Maryland, College Park, MD, USA
- Ewa M Golonka: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Nick B Pandža: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA; Program in Second Language Acquisition, University of Maryland, College Park, MD, USA
- Grace Stanton: Department of Criminology, University of Maryland, College Park, MD, USA
- David Ryan: Feminist, Gender, and Sexuality Studies, Stanford University, Stanford, CA, USA
- Nikki Adams: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- C Anton Rytting: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Cody Buntain: College of Information Studies, University of Maryland, College Park, MD, USA
- Michael A Johns: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Petra Bradley: Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
9. Mazza A, Ciorli T, Mirlisenna I, D'Onofrio I, Mantellino S, Zaccaria M, Pia L, Dal Monte O. Pain perception and physiological responses are modulated by active support from a romantic partner. Psychophysiology 2023; 60:e14299. PMID: 36961121; DOI: 10.1111/psyp.14299.
Abstract
As social animals, humans are strongly affected by social bonds and interpersonal interactions. Proximity and social support from significant others may buffer the negative outcomes of a painful experience. Several studies have investigated the role of romantic partners' support in pain modulation, mostly focusing on tactile support and showing its effectiveness in reducing pain perception. Nevertheless, no study so far has investigated the role of supportive speaking on pain modulation, nor has compared the effects of a tactile and vocal support within the same couples. The present study directly compared for the first time the efficacy of mere presence (Passive Support) and different forms of active (Touch, Voice, Touch + Voice) support from a romantic partner during a painful experience in a naturalistic setting. We assessed pain modulation in 37 romantic couples via both subjective (self-reported ratings) and physiological (skin conductance) measurements. We found that all three types of active support were equally more effective than passive support in reducing the painful experience at both subjective and physiological levels; interestingly, our results suggest that supportive speaking can reduce pain perception with respect to passive support to a similar extent as tactile support does. Overall, this study highlights the relevance of an active support in reducing pain perception, with active types of support being more effective than passive support, regardless of its specific modality.
Affiliation(s)
| | - Tommaso Ciorli
- Department of Psychology, University of Turin, Torino, Italy
| | | | | | | | | | - Lorenzo Pia
- Department of Psychology, University of Turin, Torino, Italy
| | - Olga Dal Monte
- Department of Psychology, University of Turin, Torino, Italy
- Department of Psychology, Yale University, New Haven, Connecticut, 06520, USA
| |
10.
Abstract
How do experiences in nature, in spiritual contemplation, in being moved by music, or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes that benefit well-being: shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
Affiliation(s)
- Maria Monroy: Department of Psychology, University of California, Berkeley
11. Barca L, Candidi M, Lancia GL, Maglianella V, Pezzulo G. Mapping the mental space of emotional concepts through kinematic measures of decision uncertainty. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210367. PMID: 36571117; PMCID: PMC9791479; DOI: 10.1098/rstb.2021.0367.
Abstract
Emotional concepts and their mental representations have been extensively studied. Yet, some ecologically relevant aspects, such as how they are processed in ambiguous contexts (e.g., in relation to other emotional stimuli that share similar characteristics), are incompletely known. We employed a similarity judgement of emotional concepts and manipulated the contextual congruency of the responses along the two main affective dimensions of hedonic valence and physiological activation, respectively. Behavioural and kinematic (mouse-tracking) measures were combined to gather a novel 'similarity index' between emotional concepts, to derive topographical maps of their mental representations. Self-report (interoceptive sensibility, positive-negative affectivity, depression) and physiological measures (heart rate variability, HRV) were collected to explore their possible association with emotional conceptual representation. Results indicate that emotional concepts typically associated with low arousal profit by contextual congruency, with faster responses and reduced uncertainty when contextual ambiguity decreases. The emotional maps recreate two almost orthogonal axes of valence and arousal, and the similarity measure captures the smooth boundaries between emotions. The emotional map of a subgroup of individuals with low positive affectivity reveals a narrower conceptual distribution, with variations in positive emotions and in individuals with reduced arousal (such as those with reduced HRV). Our work introduces a novel methodology to study emotional conceptual representations, bringing the behavioural dynamics of decision-making processes and choice uncertainty into the affective domain. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
Affiliation(s)
- Laura Barca: Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
- Matteo Candidi: Department of Psychology, University of Rome ‘La Sapienza’, 00185 Rome, Italy
- Gian Luca Lancia: Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
- Valerio Maglianella: Department of Psychology, University of Rome ‘La Sapienza’, 00185 Rome, Italy
- Giovanni Pezzulo: Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
12. Liu J, Huo Y, Wang J, Bai Y, Zhao M, Di M. Awe of nature and well-being: Roles of nature connectedness and powerlessness. Pers Individ Dif 2023. DOI: 10.1016/j.paid.2022.111946.
13. Brooks JA, Tzirakis P, Baird A, Kim L, Opara M, Fang X, Keltner D, Monroy M, Corona R, Metrick J, Cowen AS. Deep learning reveals what vocal bursts express in different cultures. Nat Hum Behav 2023; 7:240-250. PMID: 36577898; DOI: 10.1038/s41562-022-01489-2.
Abstract
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.
Affiliation(s)
- Jeffrey A Brooks: Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
- Alice Baird: Research Division, Hume AI, New York, NY, USA
- Lauren Kim: Research Division, Hume AI, New York, NY, USA
- Xia Fang: Zhejiang University, Hangzhou, China
- Dacher Keltner: Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
- Maria Monroy: University of California, Berkeley, Berkeley, CA, USA
- Alan S Cowen: Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
14. Emotional contagion in online groups as a function of valence and status. Comput Human Behav 2023. DOI: 10.1016/j.chb.2022.107543.
15. Barrett LF. Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. Am Psychol 2022; 77:894-920. PMID: 36409120; PMCID: PMC9683522; DOI: 10.1037/amp0001054.
Abstract
This article considers the status and study of "context" in psychological science through the lens of research on emotional expressions. The article begins by updating three well-trod methodological debates on the role of context in emotional expressions to reconsider several fundamental assumptions lurking within the field's dominant methodological tradition: namely, that certain expressive movements have biologically prepared, inherent emotional meanings that issue from singular, universal processes which are independent of but interact with contextual influences. The second part of this article considers the scientific opportunities that await if we set aside this traditional understanding of "context" as a moderator of signals with inherent psychological meaning and instead consider the possibility that psychological events emerge in ecosystems of signal ensembles, such that the psychological meaning of any individual signal is entirely relational. Such a fundamental shift has radical implications not only for the science of emotion but for psychological science more generally. It offers opportunities to improve the validity and trustworthiness of psychological science beyond what can be achieved with improvements to methodological rigor alone.
16
Grollero D, Petrolini V, Viola M, Morese R, Lettieri G, Cecchetti L. The structure underlying core affect and perceived affective qualities of human vocal bursts. Cogn Emot 2022; 37:1-17. [PMID: 36300588 DOI: 10.1080/02699931.2022.2139661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Vocal bursts are non-linguistic, affectively laden sounds with a crucial function in human communication, yet their affective structure is still debated. Studies have shown that ratings of valence and arousal follow a V-shaped relationship across several kinds of stimuli: high arousal ratings tend to co-occur with very negative or very positive valence. Across two studies, we asked participants to listen to 1,008 vocal bursts and judge both how they felt when listening to the sound (i.e. the core affect condition) and how the speaker felt when producing it (i.e. the perception of affective quality condition). We show that a V-shaped fit outperforms a linear model in explaining the valence-arousal relationship across conditions and studies, even after equating the number of exemplars across emotion categories. Moreover, although subjective experience can be significantly predicted from affective quality ratings, core affect scores are significantly lower in arousal, less extreme in valence, more variable between individuals, and less reproducible between studies. Nonetheless, the proportion of stimuli rated with opposite valence in the two conditions ranges from 11% (study 1) to 17% (study 2). Lastly, we demonstrate that ambiguity in valence (i.e. high between-participants variability) explains violations of the V-shape and relates to higher arousal.
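The linear-versus-V comparison at the heart of this abstract can be sketched with ordinary least squares: fit arousal once as a linear function of valence and once as a function of the absolute distance from a vertex. This is a minimal illustration on synthetic ratings; the 1-9 scale, slopes, noise level and vertex at the scale midpoint are assumptions, not the study's data or exact fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings on a 1-9 scale (illustrative, not the study's data):
# arousal rises toward both valence extremes, i.e. a V with its vertex at 5
valence = rng.uniform(1, 9, 300)
arousal = 2.0 + 0.8 * np.abs(valence - 5.0) + rng.normal(0, 0.5, 300)

def sse(X, y):
    # Residual sum of squares of an ordinary least-squares fit y ~ X @ beta
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

def sse_linear(v, a):
    # Straight line: arousal ~ b0 + b1 * valence
    return sse(np.column_stack([np.ones_like(v), v]), a)

def sse_vshape(v, a, vertex=5.0):
    # Piecewise-linear "V": arousal ~ b0 + b1 * |valence - vertex|
    return sse(np.column_stack([np.ones_like(v), np.abs(v - vertex)]), a)

print(sse_vshape(valence, arousal), sse_linear(valence, arousal))
```

With data generated from a V, the piecewise fit yields the smaller residual sum of squares; on real ratings the same comparison (with appropriate model-selection criteria) decides between the two structures.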
Affiliation(s)
- Demetrio Grollero
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Valentina Petrolini
- Lindy Lab - Language in Neurodiversity, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain
- Marco Viola
- Department of Philosophy and Education, University of Turin, Turin, Italy
- Rosalba Morese
- Faculty of Communication, Culture and Society, Università della Svizzera Italiana, Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Giada Lettieri
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Crossmodal Perception and Plasticity Laboratory, IPSY, University of Louvain, Louvain-la-Neuve, Belgium
- Luca Cecchetti
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy

17
Wood A, Sievert S, Martin J. Semantic Similarity of Social Functional Smiles and Laughter. JOURNAL OF NONVERBAL BEHAVIOR 2022. [DOI: 10.1007/s10919-022-00405-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
18
Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLoS One 2022; 17:e0261354. [PMID: 34995305 PMCID: PMC8740977 DOI: 10.1371/journal.pone.0261354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 11/29/2021] [Indexed: 11/19/2022] Open
Abstract
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which other specific emotions, and at what rates) during both amplified and non-amplified listening, and we will analyse all material acoustically and relate the acoustic content to performance. Together, these analyses will provide clues to the effects of amplification on the perception of different emotions.
For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
19
Bryant GA. Vocal communication across cultures: theoretical and methodological issues. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200387. [PMID: 34775828 PMCID: PMC8591381 DOI: 10.1098/rstb.2020.0387] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 08/03/2021] [Indexed: 11/12/2022] Open
Abstract
The study of human vocal communication has been conducted primarily in Western, educated, industrialized, rich, democratic (WEIRD) societies. Recently, cross-cultural investigations in several domains of voice research have been expanding into more diverse populations. Theoretically, it is important to understand how universals and cultural variations interact in vocal production and perception, but cross-cultural voice research presents many methodological challenges. Experimental methods typically used in WEIRD societies are often not possible to implement in many populations such as rural, small-scale societies. Moreover, theoretical and methodological issues are often unnecessarily intertwined. Here, I focus on three areas of cross-cultural voice modulation research: (i) vocal signalling of formidability and dominance, (ii) vocal emotions, and (iii) production and perception of infant-directed speech. Research in these specific areas illustrates challenges that apply more generally across the human behavioural sciences but also reveals promise as we develop our understanding of the evolution of human communication. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Affiliation(s)
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA 90095-1563, USA
20
Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody. JOURNAL OF NONVERBAL BEHAVIOR 2021; 45:419-454. [PMID: 34744232 PMCID: PMC8553689 DOI: 10.1007/s10919-021-00375-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/22/2021] [Indexed: 11/29/2022]
Abstract
The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody.
21
Neves L, Martins M, Correia AI, Castro SL, Lima CF. Associations between vocal emotion recognition and socio-emotional adjustment in children. ROYAL SOCIETY OPEN SCIENCE 2021; 8:211412. [PMID: 34804582 PMCID: PMC8595998 DOI: 10.1098/rsos.211412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/20/2021] [Indexed: 06/13/2023]
Abstract
The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Marta Martins
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
- César F. Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Institute of Cognitive Neuroscience, University College London, London, UK

22
Investigating individual differences in emotion recognition ability using the ERAM test. Acta Psychol (Amst) 2021; 220:103422. [PMID: 34592586 DOI: 10.1016/j.actpsy.2021.103422] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 09/22/2021] [Accepted: 09/23/2021] [Indexed: 12/14/2022] Open
Abstract
Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in lab) in a different sample (N = 106). Study 2 also showed that participants who had higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. Results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.
23
Do People Agree on How Positive Emotions Are Expressed? A Survey of Four Emotions and Five Modalities Across 11 Cultures. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00376-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
While much is known about how negative emotions are expressed in different modalities, our understanding of the nonverbal expressions of positive emotions remains limited. In the present research, we draw upon disparate lines of theoretical and empirical work on positive emotions, and systematically examine which channels are thought to be used for expressing four positive emotions: feeling moved, gratitude, interest, and triumph. Employing the intersubjective approach, an established method in cross-cultural psychology, we first explored how the four positive emotions were reported to be expressed in two North American community samples (Studies 1a and 1b: n = 1466). We next confirmed the cross-cultural generalizability of our findings by surveying respondents from ten countries that diverged on cultural values (Study 2: n = 1826). Feeling moved was thought to be signaled with facial expressions, gratitude with the use of words, interest with words, face and voice, and triumph with body posture, vocal cues, facial expressions, and words. These findings provide cross-culturally consistent findings of differential expressions across positive emotions. Notably, positive emotions were thought to be expressed via modalities that go beyond the face.
24
Farley SD. Introduction to the Special Issue on Emotional Expression Beyond the Face: On the Importance of Multiple Channels of Communication and Context. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00377-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
25
Charbonneau I, Guérette J, Cormier S, Blais C, Lalonde-Beaudoin G, Smith FW, Fiset D. The role of spatial frequencies for facial pain categorization. Sci Rep 2021; 11:14357. [PMID: 34257357 PMCID: PMC8277883 DOI: 10.1038/s41598-021-93776-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Accepted: 06/25/2021] [Indexed: 11/16/2022] Open
Abstract
Studies on low-level visual information underlying pain categorization have led to inconsistent findings. Some show an advantage for low spatial frequency information (SFs) and others a preponderance of mid SFs. This study aims to clarify this gap in knowledge since these results have different theoretical and practical implications, such as how far away an observer can be in order to categorize pain. This study addresses this question by using two complementary methods: a data-driven method without a priori expectations about the most useful SFs for pain recognition and a more ecological method that simulates the distance of stimuli presentation. We reveal a broad range of important SFs for pain recognition starting from low to relatively high SFs and showed that performance is optimal in a short to medium distance (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results that show an advantage of LSFs over HSFs when using arbitrary cutoffs, but above all reveal the prominent role of mid-SFs for pain recognition across two complementary experimental tasks.
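Band-limiting stimuli by spatial frequency, the operation underlying this kind of study, can be sketched as a radial mask in the 2-D Fourier domain. This is a minimal illustration; the grating stimulus and the cutoff bands are invented for the example, and the study's actual filtering pipeline may differ.

```python
import numpy as np

def sf_bandpass(img, lo, hi):
    """Keep only spatial frequencies (in cycles/image) within [lo, hi]."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h  # cycles per image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w  # cycles per image, horizontal
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    mask = (r >= lo) & (r <= hi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# A grating at 8 cycles/image survives a 4-16 cpi band but not a 20-60 cpi band
x = np.linspace(0, 2 * np.pi * 8, 128, endpoint=False)
grating = np.tile(np.sin(x), (128, 1))
kept = sf_bandpass(grating, 4, 16)
removed = sf_bandpass(grating, 20, 60)
print(np.abs(kept).max(), np.abs(removed).max())
```

Repeating this over a sweep of bands, and measuring categorization accuracy within each, is the logic behind mapping which SFs carry the diagnostic information.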
Affiliation(s)
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Joël Guérette
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Guillaume Lalonde-Beaudoin
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
- Fraser W Smith
- University of East Anglia School of Psychology, Norwich, NR4 7TJ, UK
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada

26
Guyer JJ, Briñol P, Vaughan-Johnston TI, Fabrigar LR, Moreno L, Petty RE. Paralinguistic Features Communicated through Voice can Affect Appraisals of Confidence and Evaluative Judgments. JOURNAL OF NONVERBAL BEHAVIOR 2021; 45:479-504. [PMID: 34744233 PMCID: PMC8553728 DOI: 10.1007/s10919-021-00374-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/25/2021] [Indexed: 11/07/2022]
Abstract
This article unpacks the basic mechanisms by which paralinguistic features communicated through the voice can affect evaluative judgments and persuasion. Special emphasis is placed on exploring the rapidly emerging literature on vocal features linked to appraisals of confidence (e.g., vocal pitch, intonation, speech rate, loudness, etc.), and their subsequent impact on information processing and meta-cognitive processes of attitude change. The main goal of this review is to advance understanding of the different psychological processes by which paralinguistic markers of confidence can affect attitude change, specifying the conditions under which they are more likely to operate. In sum, we highlight the importance of considering basic mechanisms of attitude change to predict when and why appraisals of paralinguistic markers of confidence can lead to more or less persuasion.
Affiliation(s)
- Joshua J. Guyer
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
- Pablo Briñol
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
- Lorena Moreno
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
- Richard E. Petty
- Department of Psychology, The Ohio State University, Columbus, USA

27
Jonauskaite D, Sutton A, Cristianini N, Mohr C. English colour terms carry gender and valence biases: A corpus study using word embeddings. PLoS One 2021; 16:e0251559. [PMID: 34061875 PMCID: PMC8168888 DOI: 10.1371/journal.pone.0251559] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 04/29/2021] [Indexed: 11/19/2022] Open
Abstract
In Western societies, the stereotype prevails that pink is for girls and blue is for boys. A third possible gendered colour is red. While liked by women, it represents power, stereotypically a masculine characteristic. Empirical studies confirmed such gendered connotations when testing colour-emotion associations or colour preferences in males and females. Furthermore, empirical studies demonstrated that pink is a positive colour, blue is mainly a positive colour, and red is both a positive and a negative colour. Here, we assessed whether the same valence and gender connotations appear in widely available written texts (Wikipedia and newswire articles). Using a word embedding method (GloVe), we extracted gender and valence biases for blue, pink, and red, as well as for the remaining basic colour terms, from a large English-language corpus containing six billion words. We found and confirmed that pink was biased towards femininity and positivity, and blue was biased towards positivity. We found no strong gender bias for blue, and no strong gender or valence biases for red. For the remaining colour terms, we only found that green, white, and brown were positively biased. Our finding on pink shows that writers of widely available English texts use this colour term to convey femininity. This gendered communication reinforces the notion that results from research studies find their analogue in real-world phenomena. Other findings were either consistent or inconsistent with results from research studies. We argue that widely available written texts carry biases of their own, because they have been filtered according to context, time, and what is appropriate to be reported.
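An embedding-based bias score of this kind can be sketched as a difference of cosine similarities to two anchor words. This toy illustration uses invented 4-dimensional vectors; the study used pretrained GloVe vectors from a six-billion-word corpus, and its exact bias formula may differ.

```python
import numpy as np

# Toy 4-d vectors standing in for pretrained GloVe embeddings
# (values are invented for illustration only)
emb = {
    "pink": np.array([0.9, 0.1, 0.7, 0.2]),
    "blue": np.array([0.2, 0.8, 0.6, 0.1]),
    "she":  np.array([1.0, 0.0, 0.3, 0.1]),
    "he":   np.array([0.0, 1.0, 0.3, 0.1]),
    "good": np.array([0.3, 0.3, 0.9, 0.0]),
    "bad":  np.array([0.3, 0.3, 0.0, 0.9]),
}

def cos(a, b):
    # Cosine similarity between two embedding vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def bias(word, pole_a, pole_b):
    # > 0: word sits closer to pole_a in the embedding space; < 0: closer to pole_b
    return cos(emb[word], emb[pole_a]) - cos(emb[word], emb[pole_b])

print(bias("pink", "she", "he"))    # gender axis
print(bias("pink", "good", "bad"))  # valence axis
```

With real GloVe vectors, the anchor sets are usually larger word lists (e.g. several feminine vs. masculine terms) averaged per pole, which stabilises the estimate.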
Affiliation(s)
- Adam Sutton
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Nello Cristianini
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Christine Mohr
- Institute of Psychology, University of Lausanne, Lausanne, Switzerland

28

29
Lima CF, Arriaga P, Anikin A, Pires AR, Frade S, Neves L, Scott SK. Authentic and posed emotional vocalizations trigger distinct facial responses. Cortex 2021; 141:280-292. [PMID: 34102411 DOI: 10.1016/j.cortex.2021.04.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/21/2021] [Accepted: 04/27/2021] [Indexed: 11/28/2022]
Abstract
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
Affiliation(s)
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France; Division of Cognitive Science, Lund University, Lund, Sweden
- Ana Rita Pires
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sofia Frade
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Leonor Neves
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK

30
Hall A, Kawai K, Graber K, Spencer G, Roussin C, Weinstock P, Volk MS. Acoustic analysis of surgeons’ voices to assess change in the stress response during surgical in situ simulation. BMJ SIMULATION & TECHNOLOGY ENHANCED LEARNING 2021; 7:471-477. [DOI: 10.1136/bmjstel-2020-000727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/23/2021] [Indexed: 11/04/2022]
Abstract
Introduction: Stress may serve as an adjunct (challenge) or hindrance (threat) to the learning process. Determining the effect of an individual's response to situational demands in either a real or simulated situation may enable optimisation of the learning environment. Studies of acoustic analysis suggest that the mean fundamental frequency and formant frequencies of the voice vary with an individual's response during stressful events. This hypothesis is examined within the otolaryngology (ORL) simulation environment to assess whether acoustic analysis could be used as a tool to determine participants' stress response and cognitive load in medical simulation. Such an assessment could lead to optimisation of the learning environment.
Methodology: ORL simulation scenarios were performed to teach the participants teamwork and refine clinical skills. Each was performed in an actual operating room (OR) environment (in situ) with a multidisciplinary team consisting of ORL surgeons, OR nurses and anaesthesiologists. Ten of the scenarios were led by an ORL attending and ten were led by an ORL fellow. The vocal communication of each of the 20 individual leaders was analysed using long-term pitch analysis in the PRAAT software (autocorrelation method) to obtain the mean fundamental frequency (F0) and the first four formant frequencies (F1, F2, F3 and F4). Within each scenario, the leader's voice was analysed during a non-stressful portion (the WHO sign-out procedure) and compared with their voice during a stressful portion (responding to deteriorating oxygen saturations in the manikin).
Results: The mean unstressed F0 was 161.4 Hz for the male voice and 217.9 Hz for the female voice. The mean fundamental frequency of speech in the ORL fellow (lead surgeon) group increased by 34.5 Hz between the scenario's baseline and stressful portions. This differed significantly from the mean change of −0.5 Hz noted in the attending group (p=0.01). No changes were seen in F1, F2, F3 or F4.
Conclusions: This study demonstrates a method for acoustic analysis of the voices of participants taking part in medical simulations. It suggests that acoustic analysis of participants may offer a simple, non-invasive, non-intrusive adjunct for evaluating and titrating the stress response during simulation.
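The autocorrelation approach behind pitch tracking of this kind can be sketched in a few lines: find the lag at which the signal best correlates with itself within a plausible pitch range. This is a simplified illustration on a pure tone; the search range defaults and the bare-bones peak picking are assumptions, and PRAAT's actual algorithm is considerably more refined (windowing, voicing decisions, octave-error handling).

```python
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    # Crude autocorrelation pitch estimate; fmin/fmax bound the lag search
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]  # lags 0..N-1
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(ac[lo:hi + 1]))  # best-matching period in samples
    return sr / lag

sr = 8000
t = np.arange(4000) / sr               # 0.5 s of audio
tone = np.sin(2 * np.pi * 200.0 * t)   # 200 Hz test tone
print(estimate_f0(tone, sr))
```

Applied frame by frame over an utterance, the per-frame estimates average into the long-term mean F0 compared between baseline and stressful portions.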
31
Hobaiter C. A Very Long Look Back at Language Development. MINNESOTA SYMPOSIA ON CHILD PSYCHOLOGY 2021. [DOI: 10.1002/9781119684527.ch1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
32
Marin Vargas A, Cominelli L, Dell’Orletta F, Scilingo EP. Verbal Communication in Robotics: A Study on Salient Terms, Research Fields and Trends in the Last Decades Based on a Computational Linguistic Analysis. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2020.591164] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Verbal communication is an expanding field in robotics, showing significant growth in both industry and research. The application of verbal communication in robotics aims to achieve natural, human-like interaction with robots. In this study, we investigated how salient terms related to verbal communication in robotics have evolved over the years, which topics recur in the related literature, and what their trends are. The study is based on a computational linguistic analysis conducted on a database of 7,435 scientific publications over the last two decades. This comprehensive dataset was extracted from the Scopus database using specific keywords. Our results show how relevant terms of verbal communication evolved, which are the main coherent topics, and how they have changed over the years. We highlight positive and negative trends for the most coherent topics, and the distribution over the years for the most significant ones. In particular, verbal communication proved highly relevant for social robotics. Potentially, achieving natural verbal communication with a robot could have a great impact on the scientific, societal, and economic role of robotics in the future.
33
Direito B, Ramos M, Pereira J, Sayal A, Sousa T, Castelo-Branco M. Directly Exploring the Neural Correlates of Feedback-Related Reward Saliency and Valence During Real-Time fMRI-Based Neurofeedback. Front Hum Neurosci 2021; 14:578119. [PMID: 33613202 PMCID: PMC7893090 DOI: 10.3389/fnhum.2020.578119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 12/28/2020] [Indexed: 01/04/2023] Open
Abstract
Introduction: The potential therapeutic efficacy of real-time fMRI neurofeedback has received increasing attention in a variety of psychological and neurological disorders and as a tool to probe cognition. Despite its growing popularity, the success rate varies significantly, and the underlying neural mechanisms are still a matter of debate. The question of whether an individually tailored framework positively influences neurofeedback success remains largely unexplored. Methods: To address this question, participants were trained to modulate the activity of a target brain region, the visual motion area hMT+/V5, based on the performance of three imagery tasks of increasing complexity: imagery of a static dot, and imagery of a moving dot with two and with four opposite directions. Participants received auditory feedback in the form of vocalizations with either negative, neutral, or positive valence. The modulation thresholds were defined for each participant according to the maximum BOLD signal change of their target region during the localizer run. Results: We found that 4 out of 10 participants were able to modulate brain activity in this region of interest during neurofeedback training. This rate of success (40%) is consistent with the neurofeedback literature. Whole-brain analysis revealed the recruitment of specific cortical regions involved in cognitive control, reward monitoring, and feedback processing during neurofeedback training. Individually tailored feedback thresholds did not correlate with the success level. We found region-dependent neuromodulation profiles associated with task complexity and feedback valence. Discussion: These findings support the strategic role of task complexity and feedback valence in modulating the network nodes involved in monitoring and feedback control, key variables in the optimization of neurofeedback frameworks.
Given the elaborate design, the small sample size tested here (N = 10) limits external validity in comparison to our previous studies. Future work will address this limitation. Ultimately, our results contribute to the discussion of individually tailored solutions and justify further investigation of volitional control over brain activity.
Affiliation(s)
- Bruno Direito
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Manuel Ramos
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- João Pereira
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Alexandre Sayal
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal; Siemens Healthineers, Lisbon, Portugal
- Teresa Sousa
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal; Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal; Faculty of Medicine, University of Coimbra, Coimbra, Portugal
34
Cortes DS, Tornberg C, Bänziger T, Elfenbein HA, Fischer H, Laukka P. Effects of aging on emotion recognition from dynamic multimodal expressions and vocalizations. Sci Rep 2021; 11:2647. [PMID: 33514829 PMCID: PMC7846600 DOI: 10.1038/s41598-021-82135-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 01/15/2021] [Indexed: 12/20/2022] Open
Abstract
Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions shows age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.
Affiliation(s)
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Tanja Bänziger
- Department of Psychology, Mid Sweden University, Östersund, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
35
Scherer KR. Comment: Advances in Studying the Vocal Expression of Emotion: Current Contributions and Further Options. Emot Rev 2021. [DOI: 10.1177/1754073920949671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
I consider the five contributions in this special section as evidence that the research area dealing with the vocal expression of emotion is advancing rapidly, both in the number of pertinent empirical studies and in the ever-increasing sophistication of its methodology. I provide some suggestions on promising areas for future interdisciplinary research, including work on emotion expression in singing and the potential of vocal symptoms of emotional disorder. As for the popular discussion of the respective roles of universality versus language/culture differences, I suggest moving on from exclusively studying the accuracy of recognition in judgment studies to a more differentiated approach that adds production aspects, taking into account the multiple vocal and acoustic features that interact to communicate emotion.
Affiliation(s)
- Klaus R. Scherer
- Department of Psychology, University of Geneva, Switzerland
- Department of Psychology, Ludwig-Maximilians-University of Munich, Germany
36
Cowen AS, Keltner D, Schroff F, Jou B, Adam H, Prasad G. Sixteen facial expressions occur in similar contexts worldwide. Nature 2021; 589:251-257. [PMID: 33328631 DOI: 10.1038/s41586-020-3037-7] [Citation(s) in RCA: 59] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Accepted: 10/30/2020] [Indexed: 01/29/2023]
Abstract
Understanding the degree to which human facial expressions co-vary with specific social contexts across cultures is central to the theory that emotions enable adaptive responses to important challenges and opportunities [1-6]. Concrete evidence linking social context to specific facial expressions is sparse and is largely based on survey-based approaches, which are often constrained by language and small sample sizes [7-13]. Here, by applying machine-learning methods to real-world, dynamic behaviour, we ascertain whether naturalistic social contexts (for example, weddings or sporting competitions) are associated with specific facial expressions [14] across different cultures. In two experiments using deep neural networks, we examined the extent to which 16 types of facial expression occurred systematically in thousands of contexts in 6 million videos from 144 countries. We found that each kind of facial expression had distinct associations with a set of contexts that were 70% preserved across 12 world regions. Consistent with these associations, regions varied in how frequently different facial expressions were produced as a function of which contexts were most salient. Our results reveal fine-grained patterns in human facial expressions that are preserved across the modern world.
Affiliation(s)
- Alan S Cowen
- Department of Psychology, University of California Berkeley, Berkeley, CA, USA; Google Research, Mountain View, CA, USA
- Dacher Keltner
- Department of Psychology, University of California Berkeley, Berkeley, CA, USA
37
Azari B, Westlin C, Satpute AB, Hutchinson JB, Kragel PA, Hoemann K, Khan Z, Wormwood JB, Quigley KS, Erdogmus D, Dy J, Brooks DH, Barrett LF. Comparing supervised and unsupervised approaches to emotion categorization in the human brain, body, and subjective experience. Sci Rep 2020; 10:20284. [PMID: 33219270 PMCID: PMC7679385 DOI: 10.1038/s41598-020-77117-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 09/16/2020] [Indexed: 12/05/2022] Open
Abstract
Machine learning methods provide powerful tools to map physical measurements to scientific categories. But are such methods suitable for discovering the ground truth about psychological categories? We use the science of emotion as a test case to explore this question. In studies of emotion, researchers use supervised classifiers, guided by emotion labels, to attempt to discover biomarkers in the brain or body for the corresponding emotion categories. This practice relies on the assumption that the labels refer to objective categories that can be discovered. Here, we critically examine this approach across three distinct datasets collected during emotional episodes—measuring the human brain, body, and subjective experience—and compare supervised classification solutions with those from unsupervised clustering in which no labels are assigned to the data. We conclude with a set of recommendations to guide researchers towards meaningful, data-driven discoveries in the science of emotion and beyond.
Affiliation(s)
- Bahar Azari
- Department of Electrical & Computer Engineering, College of Engineering, Northeastern University, Boston, MA, USA
- Christiana Westlin
- Department of Psychology, College of Science, Northeastern University, Boston, MA, USA
- Ajay B Satpute
- Department of Psychology, College of Science, Northeastern University, Boston, MA, USA
- Philip A Kragel
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA
- Katie Hoemann
- Department of Psychology, College of Science, Northeastern University, Boston, MA, USA
- Zulqarnain Khan
- Department of Electrical & Computer Engineering, College of Engineering, Northeastern University, Boston, MA, USA
- Jolie B Wormwood
- Department of Psychology, University of New Hampshire, Durham, NH, USA
- Karen S Quigley
- Department of Psychology, College of Science, Northeastern University, Boston, MA, USA; Edith Nourse Rogers Veterans Hospital, Bedford, MA, USA
- Deniz Erdogmus
- Department of Electrical & Computer Engineering, College of Engineering, Northeastern University, Boston, MA, USA
- Jennifer Dy
- Department of Electrical & Computer Engineering, College of Engineering, Northeastern University, Boston, MA, USA
- Dana H Brooks
- Department of Electrical & Computer Engineering, College of Engineering, Northeastern University, Boston, MA, USA
- Lisa Feldman Barrett
- Department of Psychology, College of Science, Northeastern University, Boston, MA, USA; Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
38
Smartphones and psychological well-being in China: Examining direct and indirect relationships through social support and relationship satisfaction. Telematics Inform 2020. [DOI: 10.1016/j.tele.2020.101469] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
39
Cowen AS, Keltner D. Universal facial expressions uncovered in art of the ancient Americas: A computational approach. Sci Adv 2020; 6:eabb1005. [PMID: 32875109 PMCID: PMC7438103 DOI: 10.1126/sciadv.abb1005] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Accepted: 07/08/2020] [Indexed: 05/08/2023]
Abstract
Central to the study of emotion is evidence concerning its universality, particularly the degree to which emotional expressions are similar across cultures. Here, we present an approach to studying the universality of emotional expression that rules out cultural contact and circumvents potential biases in survey-based methods: a computational analysis of apparent facial expressions portrayed in artwork created by members of cultures isolated from Western civilization. Using data-driven methods, we find that facial expressions depicted in 63 sculptures from the ancient Americas tend to accord with Western expectations for emotions that unfold in specific social contexts. Ancient American sculptures tend to portray at least five facial expressions in contexts predicted by Westerners, including "pain" in torture, "determination"/"strain" in heavy lifting, "anger" in combat, "elation" in social touch, and "sadness" in defeat, supporting the universality of these expressions.
Affiliation(s)
- Alan S. Cowen
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
40
Horikawa T, Cowen AS, Keltner D, Kamitani Y. The Neural Representation of Visually Evoked Emotion Is High-Dimensional, Categorical, and Distributed across Transmodal Brain Regions. iScience 2020; 23:101060. [PMID: 32353765 PMCID: PMC7191651 DOI: 10.1016/j.isci.2020.101060] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 03/11/2020] [Accepted: 04/09/2020] [Indexed: 12/12/2022] Open
Abstract
Central to our subjective lives is the experience of different emotions. Recent behavioral work mapping emotional responses to 2,185 videos found that people experience upward of 27 distinct emotions occupying a high-dimensional space, and that emotion categories, more so than affective dimensions (e.g., valence), organize self-reports of subjective experience. Here, we sought to identify the neural substrates of this high-dimensional space of emotional experience using fMRI responses to all 2,185 videos. Our analyses demonstrated that (1) dozens of video-evoked emotions were accurately predicted from fMRI patterns in multiple brain regions with different regional configurations for individual emotions; (2) emotion categories better predicted cortical and subcortical responses than affective dimensions, outperforming visual and semantic covariates in transmodal regions; and (3) emotion-related fMRI responses had a cluster-like organization efficiently characterized by distinct categories. These results support an emerging theory of the high-dimensional emotion space, illuminating its neural foundations distributed across transmodal regions.
Affiliation(s)
- Tomoyasu Horikawa
- Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Hikaridai, Seika, Soraku, Kyoto, 619-0288, Japan
- Alan S Cowen
- Department of Psychology, University of California, Berkeley, CA 94720-1500, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, CA 94720-1500, USA
- Yukiyasu Kamitani
- Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Hikaridai, Seika, Soraku, Kyoto, 619-0288, Japan; Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
41
Cheshin A. The Impact of Non-normative Displays of Emotion in the Workplace: How Inappropriateness Shapes the Interpersonal Outcomes of Emotional Displays. Front Psychol 2020; 11:6. [PMID: 32116884 PMCID: PMC7033655 DOI: 10.3389/fpsyg.2020.00006] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Accepted: 01/03/2020] [Indexed: 11/17/2022] Open
Abstract
When it comes to evaluating emotions as either “good” or “bad,” everyday beliefs regarding emotions rely mostly on their hedonic features—does the emotion feel good to the person experiencing the emotion? However, emotions are not only felt inwardly; they are also displayed outwardly, and others’ responses to an emotional display can produce asymmetric outcomes (i.e., even emotions that feel good to the displayer can lead to negative outcomes for the displayer and others). Focusing on organizational settings, this manuscript reviews the literature on the outcomes of emotional expressions and argues that the evidence points to perceived (in)appropriateness of emotional displays as key to their consequences: emotional displays that are deemed inappropriate generate disadvantageous outcomes for the displayer, and at times also the organization. Drawing on relevant theoretical models [Emotions as Social Information (EASI) theory, the Dual Threshold Model of Anger, and Asymmetrical Outcomes of Emotions], the paper highlights three broad and interrelated reasons why emotion displays could be deemed unfitting and inappropriate: (1) characteristics of the displayer (e.g., status, gender); (2) characteristics of the display (e.g., intensity, mode); and (3) characteristics of the context (e.g., national or organizational culture, topic of interaction). The review focuses on three different emotions—anger, sadness, and happiness—which differ in their valence based on how they feel to the displayer, but can yield different interpersonal outcomes. In conclusion, the paper argues that inappropriateness must be judged separately from whether an emotional display is civil (i.e., polite and courteous) or uncivil (i.e., rude, discourteous, and offensive). Testable propositions are presented, as well as suggested future research directions.
Affiliation(s)
- Arik Cheshin
- Department of Human Services, University of Haifa, Haifa, Israel
42
Cowen AS, Fang X, Sauter D, Keltner D. What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. Proc Natl Acad Sci U S A 2020; 117:1924-1934. [PMID: 31907316 PMCID: PMC6995018 DOI: 10.1073/pnas.1910704117] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
What is the nature of the feelings evoked by music? We investigated how people represent the subjective experiences associated with Western and Chinese music and the form in which these representational processes are preserved across different cultural groups. US (n = 1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the specific feelings (e.g., "angry," "dreamy") or broad affective features (e.g., valence, arousal) that the music made them feel. Using large-scale statistical tools, we uncovered 13 distinct types of subjective experience associated with music in both cultures. Specific feelings such as "triumphant" were better preserved across the 2 cultures than levels of valence and arousal, contrasting with theoretical claims that valence and arousal are building blocks of subjective experience. This held true even for music selected on the basis of its valence and arousal levels and for traditional Chinese music. Furthermore, the feelings associated with music were found to occupy continuous gradients, contradicting discrete emotion theories. Our findings, visualized within an interactive map (https://www.ocf.berkeley.edu/∼acowen/music.html), reveal a complex, high-dimensional space of subjective experience associated with music in multiple cultures. These findings can inform inquiries ranging from the etiology of affective disorders to the neurological basis of emotion.
Affiliation(s)
- Alan S Cowen
- Department of Psychology, University of California, Berkeley, CA 94720
- Xia Fang
- Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
- Department of Psychology, York University, Toronto, ON M3J 1P3, Canada
- Disa Sauter
- Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, CA 94720
43
Cowen A, Sauter D, Tracy JL, Keltner D. Mapping the Passions: Toward a High-Dimensional Taxonomy of Emotional Experience and Expression. Psychol Sci Public Interest 2019; 20:69-90. [PMID: 31313637 PMCID: PMC6675572 DOI: 10.1177/1529100619850176] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
What would a comprehensive atlas of human emotions include? For 50 years, scientists have sought to map emotion-related experience, expression, physiology, and recognition in terms of the "basic six": anger, disgust, fear, happiness, sadness, and surprise. Claims about the relationships between these six emotions and prototypical facial configurations have provided the basis for a long-standing debate over the diagnostic value of expression (for review and latest installment in this debate, see Barrett et al., p. 1). Building on recent empirical findings and methodologies, we offer an alternative conceptual and methodological approach that reveals a richer taxonomy of emotion. Dozens of distinct varieties of emotion are reliably distinguished by language, evoked in distinct circumstances, and perceived in distinct expressions of the face, body, and voice. Traditional models, both the basic six and the affective-circumplex model (valence and arousal), capture a fraction of the systematic variability in emotional response. In contrast, emotion-related responses (e.g., the smile of embarrassment, triumphant postures, sympathetic vocalizations, blends of distinct expressions) can be explained by richer models of emotion. Given these developments, we discuss why tests of a basic-six model of emotion are not tests of the diagnostic value of facial expression more generally. Determining the full extent of what facial expressions can tell us, marginally and in conjunction with other behavioral and contextual cues, will require mapping the high-dimensional, continuous space of facial, bodily, and vocal signals onto richly multifaceted experiences using large-scale statistical modeling and machine-learning methods.
Affiliation(s)
- Alan Cowen
- Department of Psychology, University of California, Berkeley
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam
- Dacher Keltner
- Department of Psychology, University of California, Berkeley
44
Cowen AS, Keltner D. What the face displays: Mapping 28 emotions conveyed by naturalistic expression. Am Psychol 2019; 75:349-364. [PMID: 31204816 DOI: 10.1037/amp0000488] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
What emotions do the face and body express? Guided by new conceptual and quantitative approaches (Cowen, Elfenbein, Laukka, & Keltner, 2018; Cowen & Keltner, 2017, 2018), we explore the taxonomy of emotion recognized in facial-bodily expression. Participants (N = 1,794; 940 female, ages 18-76 years) judged the emotions captured in 1,500 photographs of facial-bodily expression in terms of emotion categories, appraisals, free response, and ecological validity. We find that facial-bodily expressions can reliably signal at least 28 distinct categories of emotion that occur in everyday life. Emotion categories, more so than appraisals such as valence and arousal, organize emotion recognition. However, categories of emotion recognized in naturalistic facial and bodily behavior are not discrete but bridged by smooth gradients that correspond to continuous variations in meaning. Our results support a novel view that emotions occupy a high-dimensional space of categories bridged by smooth gradients of meaning. They offer an approximation of a taxonomy of facial-bodily expressions, visualized within an online interactive map.
45
Nordström H, Laukka P. The time course of emotion recognition in speech and music. J Acoust Soc Am 2019; 145:3058. [PMID: 31153307 DOI: 10.1121/1.5108601] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2018] [Accepted: 04/25/2019] [Indexed: 06/09/2023]
Abstract
The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role for emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
Affiliation(s)
- Henrik Nordström
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden