1. Chen Y, Wang T, Ding H. Effect of Age and Gender on Categorical Perception of Vocal Emotion Under Tonal Language Background. J Speech Lang Hear Res 2024; 67:4567-4583. [PMID: 39418571 DOI: 10.1044/2024_jslhr-23-00716]
Abstract
Purpose: Categorical perception (CP) manifests in various aspects of human cognition. While there is mounting evidence for CP in facial emotions, CP in vocal emotions remains understudied. The current study attempted to test whether individuals with a tonal language background perceive vocal emotions categorically and to examine how factors such as gender and age influence the plasticity of these perceptual categories.
Method: This study examined the identification and discrimination performance of 24 Mandarin-speaking children (14 boys and 10 girls) and 32 adults (16 males and 16 females) when they were presented with three vocal emotion continua. Speech stimuli in each continuum consisted of 11 resynthesized Mandarin disyllabic words.
Results: CP phenomena were detected when Mandarin participants perceived vocal emotions. We further found the modulating effect of age and gender in vocal emotion categorization.
Conclusions: Our results demonstrate for the first time that a categorical strategy is used by Mandarin speakers when perceiving vocal emotions. Furthermore, our findings reveal that the categorization ability of vocal emotions follows a prolonged course of development and the maturation patterns differ across genders. This study opens a promising line of research for investigating how sensory features are mapped to higher order perception and provides implications for our understanding of clinical populations characterized by altered emotional processing.
Supplemental material: https://doi.org/10.23641/asha.27204057
Affiliation(s)
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Ting Wang
- School of Foreign Languages, Tongji University, Shanghai, China
- Center for Speech and Language Processing, Tongji University, Shanghai, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
2. Neuenswander KL, Goodale BM, Bryant GA, Johnson KL. Sex ratios in vocal ensembles affect perceptions of threat and belonging. Sci Rep 2024; 14:14575. [PMID: 38914752 PMCID: PMC11196271 DOI: 10.1038/s41598-024-65535-x]
Abstract
People often interact with groups (i.e., ensembles) during social interactions. Given that group-level information is important in navigating social environments, we expect perceptual sensitivity to aspects of groups that are relevant for personal threat as well as social belonging. Most ensemble perception research has focused on visual ensembles, with little research looking at auditory or vocal ensembles. Across four studies, we present evidence that (i) perceivers accurately extract the sex composition of a group from voices alone, (ii) judgments of threat increase concomitantly with the number of men, and (iii) listeners' sense of belonging depends on the number of same-sex others in the group. This work advances our understanding of social cognition, interpersonal communication, and ensemble coding to include auditory information, and reveals people's ability to extract relevant social information from brief exposures to vocalizing groups.
Affiliation(s)
- Kelsey L Neuenswander
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA
- Gregory A Bryant
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA
- Kerri L Johnson
- Department of Communication, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA, 90095, USA
- Department of Psychology, University of California, Los Angeles, USA
3. Goel S, Jara-Ettinger J, Ong DC, Gendron M. Face and context integration in emotion inference is limited and variable across categories and individuals. Nat Commun 2024; 15:2443. [PMID: 38499519 PMCID: PMC10948792 DOI: 10.1038/s41467-024-46670-5]
Abstract
The ability to make nuanced inferences about other people's emotional states is central to social functioning. While emotion inferences can be sensitive to both facial movements and the situational context that they occur in, relatively little is understood about when these two sources of information are integrated across emotion categories and individuals. In a series of studies, we use one archival and five empirical datasets to demonstrate that people could be integrating, but that emotion inferences are just as well (and sometimes better) captured by knowledge of the situation alone, while isolated facial cues are insufficient. Further, people integrate facial cues more for categories for which they most frequently encounter facial expressions in everyday life (e.g., happiness). People are also moderately stable over time in their reliance on situational cues and integration of cues and those who reliably utilize situation cues more also have better situated emotion knowledge. These findings underscore the importance of studying variability in reliance on and integration of cues.
Affiliation(s)
- Srishti Goel
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Julian Jara-Ettinger
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
- Wu Tsai Institute, Yale University, 100 College St, New Haven, CT, USA
- Desmond C Ong
- Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton St, Austin, TX, USA
- Maria Gendron
- Department of Psychology, Yale University, 100 College St, New Haven, CT, USA
4. Li Y, Wang J, Liang J, Zhu C, Zhang Z, Luo W. The impact of degraded vision on emotional perception of audiovisual stimuli: An event-related potential study. Neuropsychologia 2024; 194:108785. [PMID: 38159799 DOI: 10.1016/j.neuropsychologia.2023.108785]
Abstract
Emotion recognition becomes challenging for individuals when visual signals are degraded in real-life scenarios. Recently, researchers have conducted many studies on the distinct neural activity elicited by clear versus degraded audiovisual stimuli. These findings addressed the "how" question, but the precise stage at which the distinct activity occurs remains unknown. Therefore, it is crucial to use event-related potentials (ERPs) to explore the "when" question, that is, the time course of the neural activity elicited by degraded audiovisual stimuli. In the present research, we established two multisensory conditions: clear auditory + degraded visual (AcVd) and clear auditory + clear visual (AcVc). We enlisted 31 participants to evaluate the emotional valence of audiovisual stimuli. The resulting data were analyzed using ERPs in the time domain and microstate analysis. Current results suggest that degraded vision impairs the early-stage processing of audiovisual stimuli, with the superior parietal lobule (SPL) regulating audiovisual processing in a top-down fashion. Additionally, our findings indicate that negative and positive stimuli elicit a greater EPN than neutral stimuli, pointing towards subjective, motivation-related attentional regulation. To sum up, in the early stage of emotional audiovisual processing, the degraded visual signal affected the perception of the physical attributes of audiovisual stimuli and further influenced emotion extraction processing, leading to different regulation of top-down attention resources in the later stage.
Affiliation(s)
- Yuchen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Institute of Psychology, Shandong Second Medical University, Weifang, 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Jing Wang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Junyu Liang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; School of Psychology, South China Normal University, Guangzhou, 510631, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Chuanlin Zhu
- School of Educational Science, Yangzhou University, Yangzhou, 225002, China
- Zhao Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Institute of Psychology, Shandong Second Medical University, Weifang, 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
5. Tompkinson J, Mileva M, Watt D, Mike Burton A. Perception of threat and intent to harm from vocal and facial cues. Q J Exp Psychol (Hove) 2024; 77:326-342. [PMID: 37020335 PMCID: PMC10798027 DOI: 10.1177/17470218231169952]
Abstract
What constitutes a "threatening tone of voice"? There is currently little research exploring how listeners infer threat, or the intention to cause harm, from speakers' voices. Here, we investigated the influence of key linguistic variables on these evaluations (Study 1). Results showed a trend for voices perceived to be lower in pitch, particularly those of male speakers, to be evaluated as sounding more threatening and conveying greater intent to harm. We next investigated the evaluation of multimodal stimuli comprising voices and faces varying in perceived dominance (Study 2). Visual information about the speaker's face had a significant effect on threat and intent ratings. In both experiments, we observed a relatively low level of agreement among individual listeners' evaluations, emphasising idiosyncrasy in the ways in which threat and intent-to-harm are perceived. This research provides a basis for the perceptual experience of a "threatening tone of voice," along with an exploration of vocal and facial cue integration in social evaluation.
Affiliation(s)
- James Tompkinson
- Aston Institute for Forensic Linguistics, College of Business and Social Sciences, Aston University, Birmingham, UK
- Mila Mileva
- School of Psychology, University of Plymouth, Plymouth, UK
- Dominic Watt
- Department of Language and Linguistic Science, University of York, York, UK
- A Mike Burton
- Department of Psychology, University of York, York, UK
6. Ohshima S, Koeda M, Kawai W, Saito H, Niioka K, Okuno K, Naganawa S, Hama T, Kyutoku Y, Dan I. Cerebral response to emotional working memory based on vocal cues: an fNIRS study. Front Hum Neurosci 2023; 17:1160392. [PMID: 38222093 PMCID: PMC10785654 DOI: 10.3389/fnhum.2023.1160392]
Abstract
Introduction: Humans mainly utilize visual and auditory information as cues to infer others' emotions. Previous neuroimaging studies have shown the neural basis of memory processing based on facial expression, but few studies have examined it based on vocal cues. Thus, we aimed to investigate brain regions associated with emotional judgment based on vocal cues using an N-back task paradigm.
Methods: Thirty participants performed N-back tasks requiring them to judge emotion or gender from voices that contained both emotion and gender information. During these tasks, cerebral hemodynamic response was measured using functional near-infrared spectroscopy (fNIRS).
Results: During the Emotion 2-back task there was significant activation in the frontal area, including the right precentral and inferior frontal gyri, possibly reflecting the function of an attentional network with auditory top-down processing. In addition, there was significant activation in the ventrolateral prefrontal cortex, which is known to be a major part of the working memory center.
Discussion: These results suggest that, compared to judging the gender of voice stimuli, judging emotional information directs attention more deeply and places greater demands on higher-order cognition, including working memory. We have revealed for the first time the specific neural basis for emotional judgments based on vocal cues compared to that for gender judgments based on vocal cues.
Affiliation(s)
- Saori Ohshima
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Michihiko Koeda
- Department of Neuropsychiatry, Graduate School of Medicine, Nippon Medical School, Bunkyo, Japan
- Department of Mental Health, Nippon Medical School Tama Nagayama Hospital, Tama, Japan
- Wakana Kawai
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Hikaru Saito
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Kiyomitsu Niioka
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Koki Okuno
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Sho Naganawa
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Tomoko Hama
- Department of Medical Technology, Ehime Prefectural University of Health Sciences, Iyo-gun, Japan
- Department of Clinical Laboratory Medicine, Faculty of Health Science Technology, Bunkyo Gakuin University, Tokyo, Japan
- Yasushi Kyutoku
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
- Ippeita Dan
- Applied Cognitive Neuroscience Laboratory, Faculty of Science and Engineering, Chuo University, Bunkyo, Japan
7. Ziereis A, Schacht A. Gender congruence and emotion effects in cross-modal associative learning: Insights from ERPs and pupillary responses. Psychophysiology 2023; 60:e14380. [PMID: 37387451 DOI: 10.1111/psyp.14380]
Abstract
Social and emotional cues from faces and voices are highly relevant and have been reliably demonstrated to attract attention involuntarily. However, there are mixed findings as to what degree associating emotional valence with faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with either positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without explicit emotion judgments of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized regarding gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was automatically processed. However, ERPs time-locked to the conditioned faces were mainly modulated by the task-relevant information, that is, the gender congruence of the face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after removing the auditory stimuli. These findings indicate successful associative learning in our paradigm, but it did not extend to the task-irrelevant dimension of emotional relevance. Therefore, cross-modal associations of emotional relevance may not be completely automatic, even though the emotion was processed in the voice.
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
- Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
8. Cortês AB, Duarte JV, Castelo-Branco M. Hysteresis reveals a happiness bias effect in dynamic emotion recognition from ambiguous biological motion. J Vis 2023; 23:5. [PMID: 37962533 PMCID: PMC10653266 DOI: 10.1167/jov.23.13.5]
Abstract
Considering the nonlinear dynamic nature of emotion recognition, it is believed to be strongly dependent on temporal context. This can be investigated by resorting to the phenomenon of hysteresis, which features a form of serial dependence entailed by continuous temporal stimulus trajectories. Under positive hysteresis, the percept remains stable in visual memory (persistence), while under negative hysteresis it shifts earlier (adaptation) to the opposite interpretation. Here, we asked whether positive or negative hysteresis occurs in emotion recognition of inherently ambiguous biological motion, while also addressing the debated question of a negative versus positive emotional bias. Participants (n = 22) performed a psychophysical experiment in which they were asked to judge stimulus transitions between two emotions, happiness and sadness, from an actor database, and to report the perceived emotion across time, from one emotion to the opposite, as physical cues continuously changed. Our results reveal perceptual hysteresis in ambiguous emotion recognition, with positive hysteresis (visual persistence) predominating. However, negative hysteresis (adaptation/fatigue) was also observed, in particular in the direction from sadness to happiness. This demonstrates a positive (happiness) bias in emotion recognition from ambiguous biological motion. Finally, the interplay between positive and negative hysteresis suggests an underlying competition between visual persistence and adaptation mechanisms during ambiguous emotion recognition.
Affiliation(s)
- Ana Borges Cortês
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- João Valente Duarte
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
9. Ziereis A, Schacht A. Motivated attention and task relevance in the processing of cross-modally associated faces: Behavioral and electrophysiological evidence. Cogn Affect Behav Neurosci 2023; 23:1244-1266. [PMID: 37353712 PMCID: PMC10545602 DOI: 10.3758/s13415-023-01112-5]
Abstract
It has repeatedly been shown that visually presented stimuli can gain additional relevance by their association with affective stimuli. Studies have shown effects of associated affect in event-related potentials (ERPs) like the early posterior negativity (EPN), late positive complex (LPC), and even earlier components such as the P1 or N170. However, findings are mixed as to the extent to which associated affect requires directed attention to the emotional quality of a stimulus and which ERP components are sensitive to task instructions during retrieval. In this preregistered study ( https://osf.io/ts4pb ), we tested cross-modal associations of vocal affect-bursts (positive, negative, neutral) to faces displaying neutral expressions in a flash-card-like learning task, in which participants studied face-voice pairs and learned to correctly assign them to each other. In the subsequent EEG test session, we applied both an implicit ("old-new") and explicit ("valence-classification") task to investigate whether the behavior at retrieval and neurophysiological activation of the affect-based associations were dependent on the type of motivated attention. We collected behavioral and neurophysiological data from 40 participants who reached the preregistered learning criterion. Results showed EPN effects of associated negative valence after learning and independent of the task. In contrast, modulations of later stages (LPC) by positive and negative associated valence were restricted to the explicit, i.e., valence-classification, task. These findings highlight the importance of the task at different processing stages and show that cross-modal affect can successfully be associated with faces.
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
- Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
10. Guerrini S, Hunter EM, Papagno C, MacPherson SE. Cognitive reserve and emotion recognition in the context of normal aging. Aging, Neuropsychology, and Cognition 2023; 30:759-777. [PMID: 35634692 DOI: 10.1080/13825585.2022.2079603]
Abstract
The Cognitive Reserve (CR) hypothesis accounts for individual differences in vulnerability to age- or pathology-related brain changes. It suggests lifetime influences (e.g., education) increase the effectiveness of cognitive processing in later life. While evidence suggests CR proxies predict cognitive performance in older age, it is less clear whether CR proxies attenuate age-related decline on social cognitive tasks. This study investigated the effect of CR proxies on unimodal and cross-modal emotion identification. Sixty-six older adults aged 60-78 years were assessed on CR proxies (Cognitive Reserve Index Questionnaire, NART), unimodal (faces only, voices only) and cross-modal (faces and voices combined) emotion recognition, and executive function (Stroop Test). No CR proxy predicted performance on emotion recognition. However, NART IQ predicted performance on the Stroop test; higher NART IQ was associated with better performance. The current study suggests CR proxies do not predict performance on social cognition tests but do predict performance on cognitive tasks.
Affiliation(s)
- Sofia Guerrini
- Dipartimento di Psicologia, Università degli studi di Milano-Bicocca, Milano, Italy
- Costanza Papagno
- CeRiN, Centro di Riabilitazione Neurocognitiva, CIMeC, Università di Trento, Rovereto, Italy
- Sarah E MacPherson
- Human Cognitive Neuroscience, Department of Psychology, University of Edinburgh, Edinburgh, UK
11. Heffer N, Dennie E, Ashwin C, Petrini K, Karl A. Multisensory processing of emotional cues predicts intrusive memories after virtual reality trauma. Virtual Reality 2023; 27:2043-2057. [PMID: 37614716 PMCID: PMC10442266 DOI: 10.1007/s10055-023-00784-1]
Abstract
Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information The online version contains supplementary material available at 10.1007/s10055-023-00784-1.
Affiliation(s)
- Naomi Heffer
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- School of Sciences, Bath Spa University, Bath, UK
- Emma Dennie
- Mood Disorders Centre, University of Exeter, Exeter, UK
- Chris Ashwin
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- Centre for Applied Autism Research (CAAR), Bath, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, UK
- Anke Karl
- Mood Disorders Centre, University of Exeter, Exeter, UK
12. Arias Sarah P, Hall L, Saitovitch A, Aucouturier JJ, Zilbovicius M, Johansson P. Pupil dilation reflects the dynamic integration of audiovisual emotional speech. Sci Rep 2023; 13:5507. [PMID: 37016041 PMCID: PMC10073148 DOI: 10.1038/s41598-023-32133-2]
Abstract
Emotional speech perception is a multisensory process. When speaking with an individual, we concurrently integrate the information from their voice and face to decode, for example, their feelings, moods, and emotions. However, the physiological reactions associated with these processes, such as the reflexive dilation of the pupil, remain mostly unknown. The aim of the current article is to investigate whether pupillary reactions can index the processes underlying the audiovisual integration of emotional signals. To address this question, we used an algorithm able to increase or decrease the smiles seen in a person's face or heard in their voice, while preserving the temporal synchrony between the visual and auditory channels. Using this algorithm, we created congruent and incongruent audiovisual smiles and investigated participants' gaze and pupillary reactions to the manipulated stimuli. We found that pupil reactions can reflect emotional information mismatch in audiovisual speech. In our data, when participants were explicitly asked to extract emotional information from the stimuli, the first fixation within emotionally mismatching areas (i.e., the mouth) triggered pupil dilation. These results reveal that pupil dilation can reflect the dynamic integration of audiovisual emotional speech and provide insight into how these reactions are triggered during stimulus perception.
Affiliation(s)
- Pablo Arias Sarah
- Lund University Cognitive Science, Lund University, Lund, Sweden
- STMS Lab, UMR 9912 (IRCAM/CNRS/SU), Paris, France
- School of Neuroscience and Psychology, Glasgow University, Glasgow, UK
- Lars Hall
- STMS Lab, UMR 9912 (IRCAM/CNRS/SU), Paris, France
- Ana Saitovitch
- U1000 Brain Imaging in Psychiatry, INSERM-CEA, Pediatric Radiology Service, Necker Enfants Malades Hospital, Paris V René Descartes University, Paris, France
- Jean-Julien Aucouturier
- Department of Robotics and Automation, FEMTO-ST Institute (CNRS/Université de Bourgogne Franche Comté), Besançon, France
- Monica Zilbovicius
- U1000 Brain Imaging in Psychiatry, INSERM-CEA, Pediatric Radiology Service, Necker Enfants Malades Hospital, Paris V René Descartes University, Paris, France
13. Simonetti S, Davis C, Kim J. Older adults' emotion recognition: No auditory-visual benefit for less clear expressions. PLoS One 2022; 17:e0279822. [PMID: 36584136 PMCID: PMC9803091 DOI: 10.1371/journal.pone.0279822]
Abstract
The ability to recognise emotion from faces or voices appears to decline with advancing age. However, some studies have shown that emotion recognition of auditory-visual (AV) expressions is largely unaffected by age, i.e., older adults get a larger benefit from AV presentation than younger adults resulting in similar AV recognition levels. An issue with these studies is that they used well-recognised emotional expressions that are unlikely to generalise to real-life settings. To examine if an AV emotion recognition benefit generalizes across well and less well recognised stimuli, we conducted an emotion recognition study using expressions that had clear or unclear emotion information for both modalities, or clear visual, but unclear auditory information. Older (n = 30) and younger (n = 30) participants were tested on stimuli of anger, happiness, sadness, surprise, and disgust (expressed in spoken sentences) in auditory-only (AO), visual-only (VO), or AV format. Participants were required to respond by choosing one of 5 emotion options. Younger adults were more accurate in recognising emotions than older adults except for clear VO expressions. Younger adults showed an AV benefit even when unimodal recognition was poor. No such AV benefit was found for older adults; indeed, AV was worse than VO recognition when AO recognition was poor. Analyses of confusion responses indicated that older adults generated more confusion responses that were common between AO and VO conditions, than younger adults. We propose that older adults' poorer AV performance may be due to a combination of weak auditory emotion recognition and response uncertainty that resulted in a higher cognitive load.
Affiliation(s)
- Simone Simonetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Brain and Mind Centre, School of Psychology, University of Sydney, Sydney, Australia
- Chris Davis
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Jeesun Kim
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
14. von Eiff CI, Frühholz S, Korth D, Guntinas-Lichius O, Schweinberger SR. Crossmodal benefits to vocal emotion perception in cochlear implant users. iScience 2022; 25:105711. [PMID: 36578321 PMCID: PMC9791346 DOI: 10.1016/j.isci.2022.105711]
Abstract
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and VER was correlated with quality of life. Importantly, CI users showed larger benefits to VER with congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate the use of AV stimuli during CI rehabilitation and suggest perspectives for caricaturing in both perceptual training and sound processor technology.
Affiliation(s)
- Celina Isabelle von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Sascha Frühholz
- Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland; Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Daniela Korth
- Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
- Stefan Robert Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany; DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
15. Albohn DN, Brandenburg JC, Kveraga K, Adams RB. The shared signal hypothesis: Facial and bodily expressions of emotion mutually inform one another. Atten Percept Psychophys 2022; 84:2271-2280. [PMID: 36045309 PMCID: PMC9509690 DOI: 10.3758/s13414-022-02548-6]
Abstract
Decades of research show that contextual information from the body, visual scene, and voices can facilitate judgments of facial expressions of emotion. To date, most research suggests that bodily expressions of emotion offer context for interpreting facial expressions, but not vice versa. The present research aimed to investigate the conditions under which mutual processing of facial and bodily displays of emotion facilitate and/or interfere with emotion recognition. In the current two studies, we examined whether body and face emotion recognition are enhanced through integration of shared emotion cues, and/or hindered through mixed signals (i.e., interference). We tested whether faces and bodies facilitate or interfere with emotion processing by pairing briefly presented (33 ms), backward-masked presentations of faces with supraliminally presented bodies (Experiment 1) and vice versa (Experiment 2). Both studies revealed strong support for integration effects, but not interference. Integration effects are most pronounced for low-emotional clarity facial and bodily expressions, suggesting that when more information is needed in one channel, the other channel is recruited to disentangle any ambiguity. That this occurs for briefly presented, backward-masked presentations reveals low-level visual integration of shared emotional signal value.
Affiliation(s)
- Daniel N Albohn
- Booth School of Business, The University of Chicago, Chicago, IL, USA
- Joseph C Brandenburg
- Department of School Psychology, The Pennsylvania State University, University Park, PA, USA
- Kestutis Kveraga
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Reginald B Adams
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA
16. Chung-Fat-Yim A, Chen P, Chan AHD, Marian V. Audio-Visual Interactions During Emotion Processing in Bicultural Bilinguals. Motiv Emot 2022; 46:719-734. [PMID: 36299445 PMCID: PMC9590621 DOI: 10.1007/s11031-022-09953-2]
Abstract
Despite the growing number of bicultural bilinguals in the world, the way in which multisensory emotions are evaluated by bilinguals who identify with two or more cultures remains unknown. In the present study, Chinese-English bicultural bilinguals from Singapore viewed Asian or Caucasian faces and heard Mandarin or English speech, and evaluated the emotion from one of the two simultaneously-presented modalities. Reliance on the visual modality was greater when bicultural bilinguals processed Western audio-visual emotion information. Although no differences between modalities emerged when processing East-Asian audio-visual emotion information, correlations revealed that bicultural bilinguals increased their reliance on the auditory modality with more daily exposure to East-Asian cultures. Greater interference from the irrelevant modality was observed for Asian faces paired with English speech than for Caucasian faces paired with Mandarin speech. We conclude that processing of emotion in bicultural bilinguals is guided by culture-specific norms, and that familiarity influences how the emotions of those who speak a foreign language are perceived and evaluated.
Affiliation(s)
- Peiyao Chen
- Swarthmore College, Swarthmore, Pennsylvania
- Alice H. D. Chan
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
17. Thomas L, von Castell C, Hecht H. How facial masks alter the interaction of gaze direction, head orientation, and emotion recognition. Front Neurosci 2022; 16:937939. [PMID: 36213742 PMCID: PMC9533556 DOI: 10.3389/fnins.2022.937939]
Abstract
The COVID-19 pandemic has altered the way we interact with each other: mandatory mask-wearing obscures facial information that is crucial for emotion recognition. Whereas the influence of wearing a mask on emotion recognition has been repeatedly investigated, little is known about the impact on interaction effects among emotional signals and other social signals. Therefore, the current study sought to explore how gaze direction, head orientation, and emotional expression interact with respect to emotion perception, and how these interactions are altered by wearing a face mask. In two online experiments, we presented face stimuli from the Radboud Faces Database displaying different facial expressions (anger, fear, happiness, neutral, and sadness), gaze directions (−13°, 0°, and 13°), and head orientations (−45°, 0°, and 45°) – either without (Experiment 1) or with mask (Experiment 2). Participants categorized the displayed emotional expressions. Not surprisingly, masks impaired emotion recognition. Surprisingly, without the mask, emotion recognition was unaffected by averted head orientations and only slightly affected by gaze direction. The mask strongly interfered with this ability. The mask increased the influence of head orientation and gaze direction, in particular for the emotions that were poorly recognized with mask. The results suggest that in case of uncertainty due to ambiguity or absence of signals, we seem to unconsciously factor in extraneous information.
18. Cui X, Jiang X, Ding H. Affective prosody guides facial emotion processing. Curr Psychol 2022. [DOI: 10.1007/s12144-022-03528-7]
19. Zuberer A, Schwarz L, Kreifelts B, Wildgruber D, Erb M, Fallgatter A, Scheffler K, Ethofer T. Neural Basis of Impaired Emotion Recognition in Adult Attention-Deficit/Hyperactivity Disorder. Biol Psychiatry Cogn Neurosci Neuroimaging 2022; 7:680-687. [DOI: 10.1016/j.bpsc.2020.11.013]
Abstract
Background: Deficits in emotion recognition have been repeatedly documented in patients diagnosed with attention-deficit/hyperactivity disorder (ADHD), but their neural basis is unknown so far.
Methods: In the current study, adult patients with ADHD (n = 44) and healthy control subjects (n = 43) underwent functional magnetic resonance imaging during explicit emotion recognition of stimuli expressing affective information in face, voice, or face-voice combinations. The employed experimental paradigm allowed us to delineate areas for processing audiovisual information based on their functional activation profile, including the bilateral posterior superior temporal gyrus/middle temporal gyrus, amygdala, medial prefrontal cortex, and precuneus, as well as the right posterior thalamus.
Results: As expected, unbiased hit rates for correct classification of the expressed emotions were lower in patients with ADHD than in healthy control subjects irrespective of the presented sensory modality. This deficit at a behavioral level was accompanied by lower activation in patients with ADHD versus healthy control subjects in the cortex adjacent to the right superior temporal gyrus/middle temporal gyrus and the right posterior thalamus, which represent key areas for processing socially relevant signals and their integration across modalities. A cortical region adjacent to the right posterior superior temporal gyrus was the only brain region that showed a significant correlation between brain activation and emotion identification performance.
Conclusions: Altogether, these results provide the first evidence for a potential neural substrate of the observed impairments in emotion recognition in adults with ADHD.
Affiliation(s)
- Agnieszka Zuberer
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Psychiatry and Psychotherapy, Jena University Hospital, Jena, Germany
- Lena Schwarz
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Benjamin Kreifelts
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Dirk Wildgruber
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Michael Erb
- Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
- Andreas Fallgatter
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Klaus Scheffler
- Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Thomas Ethofer
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
20. Subliminal audio-visual temporal congruency in music videos enhances perceptual pleasure. Neurosci Lett 2022; 779:136623. [DOI: 10.1016/j.neulet.2022.136623]
21. Liu J, Avery RJ, Kim JJ, Niederdeppe J. Maintaining a Fair Balance? Narrative and Non-Narrative Strategies in Televised Direct-to-Consumer Advertisements for Prescription Drugs Aired in the United States, 2003-2016. J Health Commun 2022; 27:183-191. [PMID: 35593131 DOI: 10.1080/10810730.2022.2077863]
Abstract
Televised direct-to-consumer advertising for prescription drugs (hereafter DTCA) are among the most widespread forms of health communication encountered by American adults. DTCA shape public understanding of health problems and support the commercial interests of pharmaceutical companies by offering prescription drugs as a treatment option. The U.S. Food and Drug Administration (FDA) requires DTCA to present fair and balanced information regarding drug benefits versus risks. While narrative persuasion theory suggests that narratives can enhance persuasion by facilitating message processing and reducing counter-arguing, prior assessments of the balance between drug benefits versus risk information in DTCA have largely overlooked whether the ads employ narratives and/or other evidentiary strategies that may confer a persuasive advantage. This study content analyzed narrativity in DTCA aired on television between 2003 and 2016 for four different health conditions (heart disease, diabetes, depression, and osteoarthritis). Results showed that while televised DTCA spent more time discussing drug risks than drug benefits, both narratives and factual evidence were more frequently used to communicate drug benefits than drug risks. These findings raise concerns that narratives are strategically used by DTCA to highlight drug benefits rather than drug risks, which could lead to inaccurate perceptions of drug risks among viewers.
Affiliation(s)
- Jiawei Liu
- Department of Communication, Cornell University, Ithaca, NY, USA
- Rosemary J Avery
- Department of Policy Analysis and Management, Cornell University, Ithaca, NY, USA
- Jungyon Janice Kim
- Department of Policy Analysis and Management, Cornell University, Ithaca, NY, USA
- Jeff Niederdeppe
- Department of Communication, Cornell University, Ithaca, NY, USA
- Jeb E. Brooks School of Public Policy, Cornell University, Ithaca, NY, USA
22. Heffer N, Gradidge M, Karl A, Ashwin C, Petrini K. High trait anxiety enhances optimal integration of auditory and visual threat cues. J Behav Ther Exp Psychiatry 2022; 74:101693. [PMID: 34563795 DOI: 10.1016/j.jbtep.2021.101693]
Abstract
Background: Emotion perception is essential to human interaction and relies on effective integration of emotional cues across sensory modalities. Despite initial evidence for anxiety-related biases in multisensory processing of emotional information, there is no research to date that directly addresses whether the mechanism of multisensory integration is altered by anxiety. Here, we compared audiovisual integration of emotional cues between individuals with low vs. high trait anxiety.
Methods: Participants were 62 young adults who were assessed on their ability to quickly and accurately identify happy, angry and sad emotions from dynamic visual-only, audio-only and audiovisual face and voice displays.
Results: The results revealed that individuals in the high anxiety group were more likely to integrate angry faces and voices in a statistically optimal fashion, as predicted by the Maximum Likelihood Estimation model, compared to low anxiety individuals. This means that high anxiety individuals achieved higher precision in correctly recognising anger from angry audiovisual stimuli compared to angry face or voice-only stimuli, and compared to low anxiety individuals.
Limitations: We tested a higher proportion of females, and although this does reflect the higher prevalence of clinical anxiety among females in the general population, potential sex differences in multisensory mechanisms due to anxiety should be examined in future studies.
Conclusions: Individuals with high trait anxiety have multisensory mechanisms that are especially fine-tuned for processing threat-related emotions. This bias may exhaust capacity for processing of other emotional stimuli and lead to overly negative evaluations of social interactions.
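For readers unfamiliar with the Maximum Likelihood Estimation (MLE) model referenced in the results above, the short Python sketch below illustrates its core prediction: each cue is weighted by its reliability, and the variance of the optimally integrated audiovisual estimate falls below that of either unimodal estimate. The function name and the example variances are illustrative assumptions for this listing, not values or code from the study.

# Minimal sketch of the MLE cue-integration prediction (illustrative only).
# The unimodal variances below are hypothetical, not data from the study.

def mle_prediction(var_audio: float, var_visual: float) -> dict:
    """Predict cue weights and the variance of an optimally integrated
    audiovisual estimate from two unimodal variances."""
    var_av = (var_audio * var_visual) / (var_audio + var_visual)
    w_visual = var_audio / (var_audio + var_visual)  # more reliable cue gets more weight
    w_audio = 1.0 - w_visual
    return {"var_av": var_av, "w_audio": w_audio, "w_visual": w_visual}

if __name__ == "__main__":
    # Hypothetical variances, e.g. derived from unimodal psychometric-function slopes.
    print(mle_prediction(var_audio=4.0, var_visual=2.0))
    # The predicted audiovisual variance (about 1.33) is below both unimodal variances;
    # observed precision at or near this prediction is what the abstract calls
    # statistically optimal integration.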
Affiliation(s)
- Naomi Heffer
- University of Bath, Department of Psychology, UK
- Anke Karl
- University of Exeter, Mood Disorders Centre, UK
- Chris Ashwin
- University of Bath, Department of Psychology, UK
23. Kim G, Seong SH, Hong SS, Choi E. Impact of face masks and sunglasses on emotion recognition in South Koreans. PLoS One 2022; 17:e0263466. [PMID: 35113970 PMCID: PMC8812856 DOI: 10.1371/journal.pone.0263466]
Abstract
Due to the prolonged COVID-19 pandemic, wearing masks has become essential for social interaction, disturbing emotion recognition in daily life. In the present study, a total of 39 Korean participants (female = 20, mean age = 24.2 years) inferred seven emotions (happiness, surprise, fear, sadness, disgust, anger, and neutral) from uncovered, mask-covered, and sunglasses-covered faces. The recognition rates were the lowest under mask conditions, followed by the sunglasses and uncovered conditions. In identifying emotions, different emotion types were associated with different areas of the face. Specifically, the mouth was the most critical area for happiness, surprise, sadness, disgust, and anger recognition, but fear was most recognized from the eyes. By simultaneously comparing faces with different parts covered, we were able to more accurately examine the impact of different facial areas on emotion recognition. We discuss the potential cultural differences and the ways in which individuals can cope with communication in which facial expressions are paramount.
Affiliation(s)
- Garam Kim
- School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
- So Hyun Seong
- School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
- Seok-Sung Hong
- Department of IT Psychology, Ajou University, Yeongtong-gu, Suwon, South Korea
- Eunsoo Choi
- School of Psychology, Korea University, Sungbuk-gu, Seoul, South Korea
24. Pell MD, Sethi S, Rigoulot S, Rothermich K, Liu P, Jiang X. Emotional voices modulate perception and predictions about an upcoming face. Cortex 2022; 149:148-164. [DOI: 10.1016/j.cortex.2021.12.017]
25. Lange EB, Fünderich J, Grimm H. Multisensory integration of musical emotion perception in singing. Psychol Res 2022; 86:2099-2114. [PMID: 35001181 PMCID: PMC9470688 DOI: 10.1007/s00426-021-01637-9]
Abstract
We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control. The uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on audio information instead. Studies such as ours are important to understand multisensory integration in applied settings.
Collapse
Affiliation(s)
- Elke B Lange
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany.
| | - Jens Fünderich
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany.,University of Erfurt, Erfurt, Germany
| | - Hartmut Grimm
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany
| |
Collapse
|
26
|
Andermann M, Izurieta Hidalgo NA, Rupp A, Schmahl C, Herpertz SC, Bertsch K. Behavioral and neurophysiological correlates of emotional face processing in borderline personality disorder: are there differences between men and women? Eur Arch Psychiatry Clin Neurosci 2022; 272:1583-1594. [PMID: 35661904 PMCID: PMC9653371 DOI: 10.1007/s00406-022-01434-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 05/15/2022] [Indexed: 11/28/2022]
Abstract
Emotional dysregulation is a core feature of borderline personality disorder (BPD); it is, for example, known to influence one's ability to read other people's facial expressions. We investigated behavioral and neurophysiological foundations of emotional face processing in individuals with BPD and in healthy controls, taking participants' sex into account. A total of 62 individuals with BPD (25 men, 37 women) and 49 healthy controls (20 men, 29 women) completed an emotion classification task with faces depicting blends of angry and happy expressions while the electroencephalogram was recorded. The cortical activity (late positive potential, P3/LPP) was evaluated using source modeling. Compared to healthy controls, individuals with BPD responded more slowly to happy but not to angry faces; further, they gave more anger ratings to happy but not to angry faces, especially those with high ambiguity. Men gave lower anger ratings than women and responded more slowly to angry but not to happy faces. The P3/LPP was larger in healthy controls than in individuals with BPD, and larger in women than in men; moreover, women but not men produced enlarged P3/LPP responses to angry vs. happy faces. Sex did not interact with behavioral or P3/LPP-related differences between healthy controls and individuals with BPD. Together, BPD-related alterations in behavioral and P3/LPP correlates of emotional face processing exist in both men and women, presumably without sex-related interactions. Results point to a general 'negativity bias' in women. Source modeling is well suited to investigating the effects of participant and stimulus characteristics on P3/LPP generators.
Collapse
Affiliation(s)
- Martin Andermann
- Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
| | - Natalie A. Izurieta Hidalgo
- Department for General Psychiatry, Center of Psychosocial Medicine, Heidelberg University Hospital, Heidelberg, Germany ,School of Medicine, Universidad San Francisco de Quito, Quito, Pichincha Ecuador
| | - André Rupp
- Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
| | - Christian Schmahl
- Department of Psychosomatic Medicine and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Sabine C. Herpertz
- Department for General Psychiatry, Center of Psychosocial Medicine, Heidelberg University Hospital, Heidelberg, Germany
| | - Katja Bertsch
- Department for General Psychiatry, Center of Psychosocial Medicine, Heidelberg University Hospital, Heidelberg, Germany. .,Department of Psychology, Ludwig-Maximilians-University Munich, Leopoldstr. 13, 80802, Munich, Germany. .,NeuroImaging Core Unit Munich (NICUM), University Hospital LMU, Munich, Germany.
| |
Collapse
|
27
|
Effects of integration of facial expression and emotional voice on inhibition of return. ACTA PSYCHOLOGICA SINICA 2022. [DOI: 10.3724/sp.j.1041.2022.00331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
28
|
Barabanschikov V, Suvorova E. Part-Whole Perception of Audiovideoimages of Multimodal Emotional States of a Person. EXPERIMENTAL PSYCHOLOGY (RUSSIA) 2022. [DOI: 10.17759/exppsy.2022150401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
The patterns of perception of a part and a whole of multimodal emotional dynamic states of people unfamiliar to observers are studied. Audio-video clips of fourteen key emotional states expressed by specially trained actors were randomly presented to two groups of observers. In one group (N = 96, mean age = 34 years, SD = 9.4), each audio-video clip was shown in full; in the other (N = 78, mean age = 25 years, SD = 9.6), it was divided into two parts of equal duration, from the beginning to the conditional middle (a short phonetic pause) and from the middle to the end of the exposure. The stimulus material contained facial expressions, gestures, head and eye movements, and changes in the body position of the sitters, who voiced pseudolinguistic statements accompanied by affective intonations. The accuracy of identification and the structure of categorical fields were evaluated depending on the modality and form (whole/part) of the exposure of affective states. After the exposure of each audio-video clip, observers were required to choose, from the presented list of emotions, the one that best corresponded to what they saw. According to the data obtained, the accuracy of identifying the emotions of the initial and final fragments of the audio-video clips practically coincides, but is significantly lower than with full exposure. Functional differences in the perception of fragmented audio-video clips of the same emotional states are revealed. The modes of transition from the initial stage to the final one and the conditions affecting the relative speed of the perceptual process are shown. The uneven formation of the information basis of multimodal expressions and the heterochronous perceptogenesis of the actors' emotional states are demonstrated.
Collapse
|
29
|
Nussbaum C, von Eiff CI, Skuk VG, Schweinberger SR. Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency. Cognition 2021; 219:104967. [PMID: 34875400 DOI: 10.1016/j.cognition.2021.104967] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 10/22/2021] [Accepted: 11/23/2021] [Indexed: 12/12/2022]
Abstract
While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were either of the same (Experiment 1) or the opposite speaker gender (Experiment 2) as the subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than those following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects were massively and proportionally reduced, with differences between morph types no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.
Collapse
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany.
| | - Celina I von Eiff
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
| | - Verena G Skuk
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
| | - Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany.
| |
Collapse
|
30
|
Reschke PJ, Walle EA. The Unique and Interactive Effects of Faces, Postures, and Scenes on Emotion Categorization. AFFECTIVE SCIENCE 2021; 2:468-483. [PMID: 36046211 PMCID: PMC9382938 DOI: 10.1007/s42761-021-00061-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Accepted: 07/21/2021] [Indexed: 06/13/2023]
Abstract
UNLABELLED There is ongoing debate as to whether emotion perception is determined by facial expressions or context (i.e., non-facial cues). The present investigation examined the independent and interactive effects of six emotions (anger, disgust, fear, joy, sadness, neutral) conveyed by combinations of facial expressions, bodily postures, and background scenes in a fully crossed design. Participants viewed each face-posture-scene (FPS) combination for 5 s and were then asked to categorize the emotion depicted in the image. Four key findings emerged from the analyses: (1) For fully incongruent FPS combinations, participants categorized images using the face in 61% of instances and the posture and scene in 18% and 11% of instances, respectively; (2) postures (with neutral scenes) and scenes (with neutral postures) exerted differential influences on emotion categorizations when combined with incongruent facial expressions; (3) contextual asymmetries were observed for some incongruent face-posture pairings and their inverse (e.g., anger-fear vs. fear-anger), but not for face-scene pairings; (4) finally, scenes exhibited a boosting effect of posture when combined with a congruent posture and attenuated the effect of posture when combined with a congruent face. Overall, these findings highlight independent and interactional roles of posture and scene in emotion face perception. Theoretical implications for the study of emotions in context are discussed. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s42761-021-00061-x.
Collapse
Affiliation(s)
- Peter J. Reschke
- School of Family Life, Brigham Young University, Provo, UT 84602 USA
| | | |
Collapse
|
31
|
Human face and gaze perception is highly context specific and involves bottom-up and top-down neural processing. Neurosci Biobehav Rev 2021; 132:304-323. [PMID: 34861296 DOI: 10.1016/j.neubiorev.2021.11.042] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 11/24/2021] [Accepted: 11/24/2021] [Indexed: 11/21/2022]
Abstract
This review summarizes human perception and processing of face and gaze signals. Face and gaze signals are important means of non-verbal social communication. The review highlights that: (1) some evidence suggests that the perception and processing of facial information start in the prenatal period; (2) the perception and processing of face identity, expression and gaze direction are highly context specific, the effects of race and culture being a case in point; culture affects, by means of experiential shaping and social categorization, the way in which information on face and gaze is collected and perceived; (3) face and gaze processing occurs in the so-called 'social brain'. Accumulating evidence suggests that the processing of facial identity, facial emotional expression and gaze involves two parallel and interacting pathways: a fast and crude subcortical route and a slower cortical pathway. The flow of information is bi-directional and includes bottom-up and top-down processing. The cortical networks particularly include the fusiform gyrus, superior temporal sulcus (STS), intraparietal sulcus, temporoparietal junction and medial prefrontal cortex.
Collapse
|
32
|
Chen T, Sun Y, Feng C, Feng W. In Identifying the Source of the Incongruent Effect. J PSYCHOPHYSIOL 2021. [DOI: 10.1027/0269-8803/a000290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Emotional signals from the face and body are normally perceived as an integrated whole in everyday life. Previous studies have revealed an incongruent effect, which refers to distinctive behavioral and neural responses to emotionally congruent versus incongruent face-body compounds. However, it remains unknown which kind of face-body compound causes the incongruent effect. In the present study, we added neutral face and neutral body stimuli to form new face-body compounds. Forty subjects with normal or corrected-to-normal vision participated in the experiment. By comparing face-body compounds containing an emotional conflict with face-body compounds containing a neutral stimulus, we could investigate the source of the incongruent effect. For both behavioral and event-related potential (ERP) data, a 2 (bodily expression: happiness, fear) × 2 (congruence: congruent, incongruent) repeated-measures analysis of variance (ANOVA) was performed to re-examine the incongruent effect, and a 3 (facial expression: fearful, happy, neutral) × 3 (bodily expression: fearful, happy, neutral) repeated-measures ANOVA was performed to clarify its source. As expected, both the behavioral and the ERP results replicated the incongruent effect. Specifically, the behavioral data showed that emotionally congruent face-body compounds were recognized more accurately than incongruent ones (p < .05). The N2 component was modulated by the emotional congruency between the facial and bodily expressions, with emotionally incongruent compounds eliciting greater N2 amplitudes than emotionally congruent compounds (p < .05). No incongruent effect was found for the P1 or P3 components (p = .079 and p = .99, respectively). Furthermore, by comparing the emotionally incongruent pairs with the neutral baseline, the present study suggests that the source of the incongruent effect might be the happy face-fearful body compounds. We speculate that the emotion expressed by the fearful body was much more intense than the emotion expressed by the happy body and thus caused stronger interference in judging the facial expressions.
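As an illustrative aside (not part of the cited study's materials), a 2 × 2 repeated-measures ANOVA of the kind described above can be sketched in Python with statsmodels; the simulated data, column names, and effect sizes below are hypothetical.

```python
# Minimal sketch of a 2 (bodily expression) x 2 (congruence) repeated-measures
# ANOVA of the kind described above; the data and column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects = np.repeat(np.arange(1, 41), 4)                  # 40 subjects x 4 cells
body = np.tile(["happy", "fear"], 80)                      # within-subject factor 1
congruence = np.tile(np.repeat(["congruent", "incongruent"], 2), 40)  # factor 2
# Hypothetical N2 amplitudes: slightly more negative for incongruent compounds
n2 = rng.normal(-2.0, 1.0, 160) - 0.5 * (congruence == "incongruent")

df = pd.DataFrame({"subject": subjects, "body": body,
                   "congruence": congruence, "n2": n2})
print(AnovaRM(df, depvar="n2", subject="subject",
              within=["body", "congruence"]).fit())
```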
Collapse
Affiliation(s)
- Tingji Chen
- Department of Psychology, School of Education, SooChow University, Suzhou, China
| | - Yanting Sun
- Department of Psychology, School of Education, SooChow University, Suzhou, China
| | - Chengzhi Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, China
| | - Wenfeng Feng
- Department of Psychology, School of Education, SooChow University, Suzhou, China
| |
Collapse
|
33
|
Barrick EM, Thornton MA, Tamir DI. Mask exposure during COVID-19 changes emotional face processing. PLoS One 2021; 16:e0258470. [PMID: 34637454 PMCID: PMC8509869 DOI: 10.1371/journal.pone.0258470] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 09/28/2021] [Indexed: 11/19/2022] Open
Abstract
Faces are one of the key ways that we obtain social information about others. They allow people to identify individuals, understand conversational cues, and make judgements about others' mental states. When the COVID-19 pandemic hit the United States, widespread mask-wearing practices were implemented, causing a shift in the way Americans typically interact. This introduction of masks into social exchanges posed a potential challenge: how would people make these important inferences about others when a large source of information was no longer available? We conducted two studies that investigated the impact of mask exposure on emotion perception. In particular, we measured how participants used facial landmarks (visual cues) and the expressed valence and arousal (affective cues) to make similarity judgements about pairs of emotion faces. Study 1 found that in August 2020, participants with higher levels of mask exposure used cues from the eyes to a greater extent when judging emotion similarity than participants with less mask exposure. Study 2 measured participants' emotion perception in both April and September 2020, before and after widespread mask adoption, in the same group of participants to examine changes in the use of facial cues over time. Results revealed an overall increase in the use of visual cues from April to September. Further, as mask exposure increased, people with the most social interaction showed the largest increase in the use of visual facial cues. These results provide evidence that a shift has occurred in how people process faces: the more people interact with others who are wearing masks, the more they have learned to focus on visual cues from the eye area of the face.
Collapse
Affiliation(s)
- Elyssa M. Barrick
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
| | - Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States of America
| | - Diana I. Tamir
- Department of Psychology, Princeton University, Princeton, New Jersey, United States of America
| |
Collapse
|
34
|
Paraverbal Expression of Verbal Irony: Vocal Cues Matter and Facial Cues Even More. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00385-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
35
|
Gao C, Wedell DH, Shinkareva SV. Evaluating non-affective cross-modal congruence effects on emotion perception. Cogn Emot 2021; 35:1634-1651. [PMID: 34486494 DOI: 10.1080/02699931.2021.1973966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Although numerous studies have shown that people are more likely to integrate consistent visual and auditory signals, the role of non-affective congruence in emotion perception is unclear. This registered report examined the influence of non-affective cross-modal congruence on emotion perception. In Experiment 1, non-affective congruence was manipulated by matching or mismatching gender between visual and auditory modalities. Participants were instructed to attend to emotion information from only one modality while ignoring the other modality. Experiment 2 tested the inverse effectiveness rule by including both noise and noiseless conditions. Across two experiments, we found the effects of task-irrelevant emotional signals from one modality on emotional perception in the other modality, reflected in affective congruence, facilitation, and affective incongruence effects. The effects were stronger for the attend-auditory compared to the attend-visual condition, supporting a visual dominance effect. The effects were stronger for the noise compared to the noiseless condition, consistent with the inverse effectiveness rule. We did not find evidence for the effects of non-affective congruence on audiovisual integration of emotion across two experiments, suggesting that audiovisual integration of emotion may not require automatic integration of non-affective congruence information.
Collapse
Affiliation(s)
- Chuanji Gao
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
| | - Douglas H Wedell
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
| | - Svetlana V Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
| |
Collapse
|
36
|
Sumioka H, Yamato N, Shiomi M, Ishiguro H. A Minimal Design of a Human Infant Presence: A Case Study Toward Interactive Doll Therapy for Older Adults With Dementia. Front Robot AI 2021; 8:633378. [PMID: 34222346 PMCID: PMC8247474 DOI: 10.3389/frobt.2021.633378] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 05/21/2021] [Indexed: 12/01/2022] Open
Abstract
We introduce a minimal design approach to manufacturing an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imagination and facilitates positive engagement with the robot by expressing only the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by the robot enhances its human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach to elderly care in a post-COVID-19 world.
Collapse
Affiliation(s)
- Hidenobu Sumioka
- Advanced Telecommunications Research Institute International, Kyoto, Japan
| | - Nobuo Yamato
- Japan Advanced Institute of Science and Technology, Ishikawa, Japan
| | - Masahiro Shiomi
- Advanced Telecommunications Research Institute International, Kyoto, Japan
| | - Hiroshi Ishiguro
- Advanced Telecommunications Research Institute International, Kyoto, Japan.,Graduate School of Engineering Science, Osaka University, Osaka, Japan
| |
Collapse
|
37
|
Brambilla M, Masi M, Mattavelli S, Biella M. Faces and Sounds Becoming One: Cross-Modal Integration of Facial and Auditory Cues in Judging Trustworthiness. SOCIAL COGNITION 2021. [DOI: 10.1521/soco.2021.39.3.315] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Face processing has mainly been investigated by presenting facial expressions without any contextual information. However, in everyday interactions with others, the sight of a face is often accompanied by contextual cues that are processed either visually or under different sensory modalities. Here, we tested whether the perceived trustworthiness of a face is influenced by the auditory context in which that face is embedded. In Experiment 1, participants evaluated trustworthiness from faces that were surrounded by either threatening or non-threatening auditory contexts. Results showed that faces were judged more untrustworthy when accompanied by threatening auditory information. Experiment 2 replicated the effect in a design that disentangled the effects of threatening contexts from negative contexts in general. Thus, perceiving facial trustworthiness involves a cross-modal integration of the face and the level of threat posed by the surrounding context.
Collapse
|
38
|
Kawahara M, Sauter DA, Tanaka A. Culture shapes emotion perception from faces and voices: changes over development. Cogn Emot 2021; 35:1175-1186. [PMID: 34000966 DOI: 10.1080/02699931.2021.1922361] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The perception of multisensory emotion cues is affected by culture. For example, East Asians rely more on vocal, as compared to facial, affective cues than Westerners do. However, it is unknown whether these cultural differences exist in childhood and, if not, which processing style is exhibited by children. The present study tested East Asian and Western children, as well as adults from both cultural backgrounds, to probe cross-cultural similarities and differences at different ages, and to establish the weighting of each modality at each age. Participants were simultaneously shown a face and a voice expressing either congruent or incongruent emotions, and were asked to judge whether the person was happy or angry. Replicating previous research, East Asian adults relied more on vocal cues than did Western adults. Young children from both cultural groups, however, behaved like Western adults, relying primarily on visual information. The proportion of responses based on vocal cues increased with age in East Asian, but not Western, participants. These results suggest that culture is an important factor in developmental changes in the perception of facial and vocal affective information.
Collapse
Affiliation(s)
- Misako Kawahara
- Department of Psychology, Tokyo Woman's Christian University, Tokyo, Japan.,Kojimachi Business Center Building, Japan Society for the Promotion of Science, Tokyo, Japan
| | - Disa A Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
| | - Akihiro Tanaka
- Department of Psychology, Tokyo Woman's Christian University, Tokyo, Japan
| |
Collapse
|
39
|
Heffer N, Karl A, Jicol C, Ashwin C, Petrini K. Anxiety biases audiovisual processing of social signals. Behav Brain Res 2021; 410:113346. [PMID: 33964354 DOI: 10.1016/j.bbr.2021.113346] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 04/20/2021] [Accepted: 05/03/2021] [Indexed: 02/08/2023]
Abstract
In everyday life, information from multiple senses is integrated for a holistic understanding of emotion. Despite evidence of atypical multisensory perception in populations with socio-emotional difficulties (e.g., autistic individuals), little research to date has examined how anxiety impacts on multisensory emotion perception. Here we examined whether the level of trait anxiety in a sample of 56 healthy adults affected audiovisual processing of emotion for three types of stimuli: dynamic faces and voices, body motion and dialogues of two interacting agents, and circles and tones. Participants judged emotion from four types of displays - audio-only, visual-only, audiovisual congruent (e.g., angry face and angry voice) and audiovisual incongruent (e.g., angry face and happy voice) - as happy or angry, as quickly as possible. In one task, participants based their emotional judgements on information in one modality while ignoring information in the other, and in a second task they based their judgements on their overall impressions of the stimuli. The results showed that the higher trait anxiety group prioritized the processing of angry cues when combining faces and voices that portrayed conflicting emotions. Individuals in this group were also more likely to benefit from combining congruent face and voice cues when recognizing anger. The multisensory effects of anxiety were found to be independent of the effects of autistic traits. The observed effects of trait anxiety on multisensory processing of emotion may serve to maintain anxiety by increasing sensitivity to social-threat and thus contributing to interpersonal difficulties.
Collapse
Affiliation(s)
- Naomi Heffer
- University of Bath, Department of Psychology, United Kingdom.
| | - Anke Karl
- University of Exeter, Mood Disorders Centre, United Kingdom
| | - Crescent Jicol
- University of Bath, Department of Psychology, United Kingdom
| | - Chris Ashwin
- University of Bath, Department of Psychology, United Kingdom
| | - Karin Petrini
- University of Bath, Department of Psychology, United Kingdom
| |
Collapse
|
40
|
Li Y, Li Z, Deng A, Zheng H, Chen J, Ren Y, Yang W. The Modulation of Exogenous Attention on Emotional Audiovisual Integration. Iperception 2021; 12:20416695211018714. [PMID: 34104384 PMCID: PMC8167015 DOI: 10.1177/20416695211018714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 04/29/2021] [Indexed: 11/15/2022] Open
Abstract
Although emotional audiovisual integration has been investigated previously, whether it is affected by the spatial allocation of visual attention is currently unknown. To examine this question, a variant of the exogenous spatial cueing paradigm was adopted, in which stimuli varying in facial expression and nonverbal affective prosody were used to express six basic emotions (happiness, anger, disgust, sadness, fear, surprise) via a visual, an auditory, or an audiovisual modality. The emotional stimuli were preceded by a non-predictive cue that was used to attract participants' visual attention. The results showed significantly higher accuracy and quicker response times to bimodal audiovisual stimuli than to unimodal visual or auditory stimuli for emotional perception under both valid and invalid cue conditions. The auditory facilitation effect was stronger than the visual facilitation effect under exogenous attention for the six emotions tested. Larger auditory enhancement was induced when the target was presented at the expected location than at the unexpected location. For emotional perception, happiness showed the largest auditory enhancement among all six emotions. However, the influence of the exogenous cueing effect on emotional perception appeared to be absent.
Collapse
Affiliation(s)
- Yueying Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Graduate School of Humanities, Kobe University, Japan
| | | | | | | | - Jianxin Chen
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Yanna Ren
- Department of Psychology, Medical Humanities College, Guiyang College of Traditional Chinese Medicine, Guiyang, China
| | - Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
| |
Collapse
|
41
|
Liang P, Jiang J, Chen J, Wei L. Affective Face Processing Modified by Different Tastes. Front Psychol 2021; 12:644704. [PMID: 33790842 PMCID: PMC8006344 DOI: 10.3389/fpsyg.2021.644704] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Accepted: 02/15/2021] [Indexed: 11/13/2022] Open
Abstract
Facial emotion recognition is something we do often in daily life. How does the brain process the search for emotional faces, and can taste modify this process? This study employed two tastes (sweet and acidic) to investigate the cross-modal interaction between taste and emotional face recognition. Behavioral responses (reaction times and correct response ratios) and event-related potentials (ERPs) were used to analyze the interaction between taste and face processing. The behavioral data showed that, when detecting a negative target face with a positive face as a distractor, participants performed the task faster with an acidic taste than with a sweet one. No interaction effect was observed in the correct response ratios. In the ERP results, the early (P1, N170) and mid-stage [early posterior negativity (EPN)] components showed that the sweet and acidic tastes modulated the affective face search process, whereas no interaction effect was observed for the late-stage (LPP) component. Our data extend the understanding of the cross-modal mechanism and provide electrophysiological evidence that affective face processing can be influenced by sweet and acidic tastes.
Collapse
Affiliation(s)
- Pei Liang
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China.,Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Hubei, China
| | - Jiayu Jiang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Liaoning, China.,School of Fundamental Sciences, China Medical University, Shenyang, China
| | - Jie Chen
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China
| | - Liuqing Wei
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China.,Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Hubei, China
| |
Collapse
|
42
|
Usler ER, Weber C. Emotion processing in children who do and do not stutter: An ERP study of electrocortical reactivity and regulation to peer facial expressions. JOURNAL OF FLUENCY DISORDERS 2021; 67:105802. [PMID: 33227619 DOI: 10.1016/j.jfludis.2020.105802] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Revised: 10/29/2020] [Accepted: 10/30/2020] [Indexed: 06/11/2023]
Abstract
PURPOSE Event-related brain potentials (ERPs) were used to investigate the neural correlates of emotion processing in 5- to 8-year-old children who do and do not stutter. METHODS Participants were presented with an audio contextual cue followed by images of threatening (angry/fearful) and neutral facial expressions from similarly aged peers. Three conditions differed in audio-image pairing: neutral context-neutral expression (neutral condition), negative context-threatening expression (threat condition), and reappraisal context-threatening expression (reappraisal condition). These conditions reflected social stimuli that are ecologically valid for the everyday life of children. RESULTS P100, N170, and late positive potential (LPP) ERP components were elicited over parietal and occipital electrodes. The threat condition elicited an increased LPP mean amplitude compared to the neutral condition across participants, suggesting increased emotional reactivity to threatening facial expressions. In addition, LPP amplitude decreased during the reappraisal condition, evidence of emotion regulation. No group differences were observed in the mean amplitudes of the ERP components between children who do and do not stutter. Furthermore, dimensions of childhood temperament and stuttering severity were not strongly correlated with LPP elicitation. CONCLUSION These findings suggest that, at this young age, children who stutter exhibit typical brain activation underlying emotional reactivity and regulation in response to social threat from peer facial expressions.
Collapse
Affiliation(s)
- Evan R Usler
- Department of Communication Sciences and Disorders, College of Health Sciences, University of Delaware, 100 Discovery Blvd., Newark, DE, 19713, United States.
| | - Christine Weber
- Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN, 47907, United States
| |
Collapse
|
43
|
Liu P, Rigoulot S, Jiang X, Zhang S, Pell MD. Unattended Emotional Prosody Affects Visual Processing of Facial Expressions in Mandarin-Speaking Chinese: A Comparison With English-Speaking Canadians. JOURNAL OF CROSS-CULTURAL PSYCHOLOGY 2021; 52:275-294. [PMID: 33958813 PMCID: PMC8053741 DOI: 10.1177/0022022121990897] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual's cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced participants' gazes at emotional faces in a modified visual search task. We also conducted a cross-cultural comparison between the data of this study and that of our previous work in English-speaking Canadians using analogous methodology. In both studies, eye movements were recorded as participants scanned an array of four faces portraying fearful, angry, happy, and neutral expressions, while passively listening to a pseudo-utterance expressing one of the four emotions (a Mandarin utterance in this study; an English utterance in our previous study). The frequency and duration of fixations to each face were analyzed during the 5 seconds after the onset of the faces, both while the speech was present (early time window) and after the utterance ended (late time window). During the late window, Chinese participants looked more frequently and longer at faces conveying emotions congruent with the speech, consistent with findings from English-speaking Canadians. The cross-cultural comparison further showed that Chinese, but not Canadian, participants looked more frequently and longer at angry faces, which may signal potential conflicts and social threats. We hypothesize that the socio-cultural norms related to harmony maintenance in Eastern cultures promoted Chinese participants' heightened sensitivity to, and deeper processing of, angry cues, highlighting culture-specific patterns in how individuals scan their social environment during emotion processing.
Collapse
Affiliation(s)
- Pan Liu
- McGill University, Montréal, QC, Canada.,Western University, London, ON, Canada
| | - Simon Rigoulot
- McGill University, Montréal, QC, Canada.,Université du Québec à Trois-Rivières, QC, Canada
| | - Xiaoming Jiang
- McGill University, Montréal, QC, Canada.,Tongji University, Shanghai, China
| | | | | |
Collapse
|
44
|
Abstract
The present study examined the relationship between multisensory integration and the temporal binding window (TBW) for multisensory processing in adults with Autism spectrum disorder (ASD). The ASD group was less likely than the typically developing group to perceive an illusory flash induced by multisensory integration during a sound-induced flash illusion (SIFI) task. Although both groups showed comparable TBWs during the multisensory temporal order judgment task, correlation analyses and Bayes factors provided moderate evidence that the reduced SIFI susceptibility was associated with the narrow TBW in the ASD group. These results suggest that the individuals with ASD exhibited atypical multisensory integration and that individual differences in the efficacy of this process might be affected by the temporal processing of multisensory information.
Collapse
|
45
|
The relationship between vocal affect recognition and psychosocial functioning for people with moderate to severe traumatic brain injury: a systematic review. BRAIN IMPAIR 2021. [DOI: 10.1017/brimp.2020.24] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The purpose of this review was to explore how vocal affect recognition deficits impact the psychosocial functioning of people with moderate to severe traumatic brain injury (TBI). A systematic review following the Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) guidelines was conducted, whereby six databases were searched, with additional hand searching of key journals also completed. The search identified 1847 records after duplicates were removed, and 1749 were excluded through title and abstract screening. After full text screening of 65 peer-reviewed articles published between January 1999 and August 2019, only five met inclusion criteria. The methodological quality of selected studies was assessed using the Mixed Methods Appraisal Tool (MMAT) Version 2018, with a fair level of agreement reached. A narrative synthesis of the results was completed, exploring vocal affect recognition and psychosocial functioning of people with moderate to severe TBI, including aspects of social cognition (i.e., empathy; Theory of Mind) and social behaviour. Results of the review were limited by a paucity of research in this area, a lack of high-level evidence, and wide variation in the outcome measures used. More rigorous study designs are required to establish more conclusive evidence regarding the degree and direction of the association between vocal affect recognition and aspects of psychosocial functioning. This review is registered with PROSPERO.
Collapse
|
46
|
Lu T, Yang J, Zhang X, Guo Z, Li S, Yang W, Chen Y, Wu N. Crossmodal Audiovisual Emotional Integration in Depression: An Event-Related Potential Study. Front Psychiatry 2021; 12:694665. [PMID: 34354614 PMCID: PMC8329241 DOI: 10.3389/fpsyt.2021.694665] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 06/21/2021] [Indexed: 11/16/2022] Open
Abstract
Depression is related to deficits in emotion processing, and emotional processing is inherently crossmodal. This article investigates whether audiovisual emotional integration differs between a depression group and a normal group using a high-resolution event-related potential (ERP) technique. We designed a visual and/or auditory detection task. The behavioral results showed that responses to bimodal audiovisual stimuli were faster than those to unimodal auditory or visual stimuli, indicating that crossmodal integration of emotional information occurred in both the depression and normal groups. The ERP results showed that the N2 amplitude induced by sadness was significantly higher than that induced by happiness. Participants in the depression group showed larger N1 and P2 amplitudes, and the average amplitude of the LPP evoked over the frontocentral region in the depression group was significantly lower than that in the normal group. The results indicate that audiovisual emotional processing mechanisms differ between depressed and non-depressed college students.
Collapse
Affiliation(s)
- Ting Lu
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Jingjing Yang
- School of Artificial Intelligence, Changchun University of Science and Technology, Changchun, China
| | - Xinyu Zhang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Zihan Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Ying Chen
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| | - Nannan Wu
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
| |
Collapse
|
47
|
Jeong JW, Kim HT, Lee SH, Lee H. Effects of an Audiovisual Emotion Perception Training for Schizophrenia: A Preliminary Study. Front Psychiatry 2021; 12:522094. [PMID: 34025462 PMCID: PMC8131526 DOI: 10.3389/fpsyt.2021.522094] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 03/18/2021] [Indexed: 11/13/2022] Open
Abstract
Individuals with schizophrenia show a reduced ability to integrate facial and vocal information in emotion perception. Although emotion perception has been a target for treatment, no study has yet examined the effect of multimodal training on emotion perception in schizophrenia. In the present study, we developed an audiovisual emotion perception training and test in which a voice and a face were simultaneously presented, and subjects were asked to judge whether the emotions of the voice and the face matched. The voices were either angry or happy, and the faces were morphed on a continuum ranging from angry to happy. Sixteen patients with schizophrenia participated in six training sessions and three test sessions (i.e., pre-training, post-training, and generalization). Eighteen healthy controls participated only in the pre-training test session. Prior to training, the patients with schizophrenia performed significantly worse than the controls in the recognition of anger; however, following the training, the patients showed a significant improvement in recognizing anger, which was maintained and generalized to a new set of stimuli. The patients also improved in the recognition of happiness following the training, but this effect was not maintained or generalized. These results provide preliminary evidence that multimodal, audiovisual training may yield improvements in anger perception for patients with schizophrenia.
Collapse
Affiliation(s)
- Ji Woon Jeong
- Department of Psychology, Korea University, Seoul, South Korea
| | - Hyun Taek Kim
- Department of Psychology, Korea University, Seoul, South Korea
| | - Seung-Hwan Lee
- Department of Psychiatry, Ilsan-Paik Hospital, Inje University, Goyang, South Korea
| | - Hyejeen Lee
- Department of Psychology, Chonnam National University, Gwangju, South Korea
| |
Collapse
|
48
|
Spinelli M, Aureli T, Coppola G, Ponzetti S, Lionetti F, Scialpi V, Fasolo M. Verbal - prosodic association when narrating early caregiving experiences during the adult attachment interview: differences between secure and dismissing individuals. Attach Hum Dev 2020; 24:93-114. [PMID: 33346702 DOI: 10.1080/14616734.2020.1860348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Previous studies have reported an inconsistency between verbal extracts and emotional physiological activation in dismissing individuals when narrating their early caregiving experiences in the Adult Attachment Interview (AAI). This study aimed to explore this discrepancy by analyzing the degree of concordance between verbal content and prosodic characteristics, an index of physiological activation, when dismissing and secure individuals discussed negative childhood memories during the AAI. Results showed that secure participants presented high coherence between verbal content and emotional activation as expressed by prosody, revealing a reprocessing of negative experiences that is the core feature of the development of secure working models. In contrast, dismissing participants' prosodic characteristics were discrepant with their verbal content: these individuals downplayed the nature and impact of negative experiences and emotions, but used prosody that revealed high emotional arousal. The difference between the two groups was more evident for participants who had experienced more rejecting parents.
Collapse
Affiliation(s)
- Maria Spinelli
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| | - Tiziana Aureli
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| | - Gabrielle Coppola
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
| | - Silvia Ponzetti
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| | - Francesca Lionetti
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| | - Valentina Scialpi
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| | - Mirco Fasolo
- Department of Neurosciences, Imaging and Clinical Sciences, University G. D'Annunzio Chieti-Pescara, Chieti, Italy
| |
Collapse
|
49
|
Poncet F, Leleu A, Rekow D, Damon F, Durand K, Schaal B, Baudouin JY. Odor-evoked hedonic contexts influence the discrimination of facial expressions in the human brain. Biol Psychol 2020; 158:108005. [PMID: 33290848 DOI: 10.1016/j.biopsycho.2020.108005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 11/30/2020] [Accepted: 12/01/2020] [Indexed: 10/22/2022]
Abstract
The influence of odor valence on expressive-face perception remains unclear. Here, three "valenced" odor contexts (pleasant, unpleasant, control) were diffused while the scalp electroencephalogram (EEG) was recorded in 18 participants presented with expressive faces alternating at a 6-Hz rate. One facial expression (happiness, disgust or neutrality) repeatedly appeared every sixth face picture to isolate its discrimination from the other expressions at 1 Hz and its harmonics in the EEG spectrum. The amplitude of the brain response to neutrality was larger in the pleasant than in the control odor context, and fewer electrodes responded in the unpleasant odor context. The number of responding electrodes was reduced for disgust in both odor contexts. The response to happiness was unchanged across odor conditions. Overall, these observations suggest that valenced odors influence the neural discrimination of facial expressions depending on both face and odor hedonic valence, especially for the emotionally ambiguous neutral expression.
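As a hedged illustration of the frequency-tagging logic described above (a minimal sketch under assumed recording parameters, not the authors' analysis pipeline), the oddball response can be quantified by computing an EEG amplitude spectrum and summing the amplitude at 1 Hz and its harmonics while skipping the 6-Hz base rate and its multiples:

```python
# Minimal sketch: quantify a 1-Hz frequency-tagged oddball response while
# excluding the 6-Hz base stimulation rate. The sampling rate, epoch length,
# and simulated signal are assumptions, not parameters from the cited study.
import numpy as np

fs = 512                      # assumed sampling rate (Hz)
duration = 20                 # assumed epoch length (s), giving 0.05-Hz resolution
t = np.arange(0, duration, 1 / fs)

# Hypothetical single-channel EEG: 6-Hz base response + 1-Hz oddball response + noise
rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * 6 * t)
       + 0.3 * np.sin(2 * np.pi * 1 * t)
       + rng.normal(0, 1, t.size))

spectrum = 2 * np.abs(np.fft.rfft(eeg)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Sum oddball harmonics (1, 2, ..., 12 Hz), skipping multiples of the 6-Hz base rate
harmonics = [h for h in range(1, 13) if h % 6 != 0]
idx = [int(np.argmin(np.abs(freqs - h))) for h in harmonics]
oddball_amplitude = spectrum[idx].sum()
print(f"Summed oddball amplitude: {oddball_amplitude:.3f} (a.u.)")
```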
Collapse
Affiliation(s)
- Fanny Poncet
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France.
| | - Arnaud Leleu
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France.
| | - Diane Rekow
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France
| | - Fabrice Damon
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France
| | - Karine Durand
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France
| | - Benoist Schaal
- Developmental Ethology and Cognitive Psychology Group, Centre des Sciences du Goût et de l'Alimentation, UMR 6265 CNRS-Université Bourgogne Franche-Comté-Inrae-AgroSup, Dijon, France
| | - Jean-Yves Baudouin
- Laboratoire "Développement, Individu, Processus, Handicap, Éducation" (DIPHE), Department Psychologie du Développement, de l'Éducation et des Vulnérabilités (PsyDÉV), Institut de psychologie, Université de Lyon (Lumière Lyon 2), Bron, France
| |
Collapse
|
50
|
Turner JR, Stanley JT. "We" Before "Me": Differences in Usage of Collectivistic and Individualistic Language Influence Judgments of Electability and Performance. J Gerontol B Psychol Sci Soc Sci 2020; 75:e242-e248. [PMID: 30852612 DOI: 10.1093/geronb/gbz030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Indexed: 11/14/2022] Open
Abstract
OBJECTIVES Older adults are often judged to be warm, but not competent, which contradicts their representation in positions of authority. This study sought to extend evidence of age differences in individualistic (e.g., "I") and collectivistic (e.g., "we") language and to explore their impact on judgments of performance and electability. METHOD Speeches from young and older adults who campaigned for a fictitious position were analyzed using the Linguistic Inquiry and Word Count software. Words fitting specified categories (e.g., pronouns, affect) were compared to outcome judgments obtained from trained coders on the dimensions of performance and electability. RESULTS Older adults used significantly more "we"-language. Young adults used more "I"-language, and more positive affect, achievement, and power language. Language choices and coder judgments were associated such that the more "I"-language was used during a speech, the less electable the candidate was judged to be. This relationship was not found for "we"-language. DISCUSSION This study found no evidence that collectivistic language enhances ratings of electability or performance; however, an age-invariant negative relationship was obtained between increased individualistic language and reduced coder judgments of electability. This suggests that speakers should minimize "I"-statements to promote electability, a characteristic reflected more in older adults' speeches than in young adults'.
Collapse
|