1. Liu M, Sommer W, Yue S, Li W. Dominance of face over voice in human attractiveness judgments: ERP evidence. Psychophysiology 2023;60:e14358. PMID: 37271749. DOI: 10.1111/psyp.14358.
Abstract
The attractiveness of a person, a complex and socially relevant type of information, is transmitted in many ways, not least through face and voice. However, it is unclear how the stimulus domains carrying attractiveness information interact. The present study explored the audiovisual perception of attractiveness in a Stroop-like paradigm using event-related potentials (ERPs). Participants were presented with face-voice pairs carrying congruent or incongruent attractiveness information and, in turn, judged the attractiveness level of each domain while ignoring the other. Voice attractiveness judgments were influenced by unattended face attractiveness at both early perceptual encoding (N170, P200) and later evaluative stages (N400, LPC). In contrast, effects of unattended voice attractiveness on face attractiveness judgments were confined to early perceptual encoding (N170). These results demonstrate not only the interaction of multiple domains in human attractiveness perception at different processing stages but also a relative dominance of face over voice attractiveness.
Affiliation(s)
- Meng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Werner Sommer
- Institut für Psychologie, Humboldt-Universität zu Berlin, Berlin, Germany
- Department of Psychology, Zhejiang Normal University, Jinhua, China
- Institute for Creativity, Hong Kong Baptist University, Hong Kong, China
- Siqi Yue
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
2. Zhu S, Yang J, Li H, Yuan J. Shared surname enhances our preference to famous people: multimodal EEG evidence. Cogn Neurodyn 2022;16:1351-1359. PMID: 36408066. PMCID: PMC9666624. DOI: 10.1007/s11571-022-09784-4.
Abstract
Multimodal electroencephalography (EEG) techniques were used to determine whether the names of famous people undergo self-relevant processing when they share a surname with the participant. During a three-stimulus oddball task, brain activity was recorded while participants unexpectedly saw their own name (self-name [SN]), a famous name with the same surname (FNS), or a famous name with a different surname (FND). While familiarity ratings were kept similar across the three kinds of names, behavioral analysis showed a higher self-relevance rating for SN than for FNS, which in turn received a higher rating than FND. P2 amplitudes were similarly enhanced in response to SN and FNS compared to FND, while P3 amplitudes and theta band (3.5-6 Hz) power were more pronounced in response to SN than to FNS, which in turn elicited larger P3 and theta activity than FND. These findings, which exclude the influence of familiarity, reveal that famous people sharing our surname can elicit a reliable self-relevance effect despite the lack of a real social connection. This self-relevant processing may be indexed by the P3 amplitude and theta band neural oscillations in the EEG.
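For readers unfamiliar with the oscillatory measure, the sketch below shows one common way to estimate theta band (3.5-6 Hz) power from single-trial EEG epochs using Welch's method. Only the band edges come from the abstract; the sampling rate, trial counts, windowing choices, and synthetic data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import welch

def bandpower(epochs, sfreq, fmin=3.5, fmax=6.0):
    """Mean spectral power in a frequency band.

    epochs: array of shape (n_trials, n_samples), single-channel EEG
    sfreq:  sampling rate in Hz
    """
    freqs, psd = welch(epochs, fs=sfreq, nperseg=min(256, epochs.shape[-1]), axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    # Average the PSD over the band, then over trials
    return psd[:, band].mean(axis=1).mean()

# Hypothetical usage: compare theta power across the three name conditions
rng = np.random.default_rng(0)
sn, fns, fnd = (rng.standard_normal((40, 500)) for _ in range(3))  # 40 trials, 1 s at 500 Hz
for label, data in [("SN", sn), ("FNS", fns), ("FND", fnd)]:
    print(label, bandpower(data, sfreq=500))
```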
Affiliation(s)
- Siyu Zhu
- The Affect Cognition and Regulation Laboratory (ACRLab), Institute of Brain and Psychological Science, Sichuan Normal University, Chengdu, China
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
- Jiemin Yang
- The Affect Cognition and Regulation Laboratory (ACRLab), Institute of Brain and Psychological Science, Sichuan Normal University, Chengdu, China
- Hong Li
- Present address: The Affect Cognition and Regulation Laboratory (ACRLab), Institute of Brain and Psychological Science, Sichuan Normal University, Chengdu, China
- School of Psychology, South China Normal University, Guangzhou, China
- Jiajin Yuan
- The Affect Cognition and Regulation Laboratory (ACRLab), Institute of Brain and Psychological Science, Sichuan Normal University, Chengdu, China
3. Eördegh G, Tót K, Kiss Á, Kéri S, Braunitzer G, Nagy A. Multisensory stimuli enhance the effectiveness of equivalence learning in healthy children and adolescents. PLoS One 2022;17:e0271513. PMID: 35905111. PMCID: PMC9337650. DOI: 10.1371/journal.pone.0271513.
Abstract
It has previously been demonstrated in healthy adult volunteers that visually guided and multisensory (audiovisual) guided equivalence learning are similarly effective; these processes therefore seem to be independent of stimulus modality. The question arises as to whether this also holds in healthy children and adolescents. To assess this, equivalence learning was tested in 157 healthy participants younger than 18 years of age, in both a visual and an audiovisual paradigm consisting of acquisition, retrieval, and generalization phases. Performance during the acquisition phase (building of associations) was significantly better in the multisensory paradigm, but there was no difference in reaction times (RTs). Performance during the retrieval phase (where the previously learned associations are tested) was also significantly better in the multisensory paradigm, and RTs were significantly shorter. In contrast, transfer (generalization) performance (where hitherto unlearned but predictable associations are tested) was not significantly enhanced in the multisensory paradigm, although RTs were somewhat shorter. Linear regression analysis revealed that all studied psychophysical parameters in both paradigms correlated significantly with the age of the participants. Audiovisual stimulation enhanced acquisition and retrieval compared to visual-only stimulation, regardless of whether the subjects were above or below 12 years of age. Our results demonstrate that multisensory stimuli significantly enhance association learning and retrieval in sensory-guided equivalence learning in healthy children and adolescents. However, the audiovisual gain was significantly higher in the cohort below 12 years of age, which suggests that audiovisually guided equivalence learning is still developing in childhood.
Affiliation(s)
- Gabriella Eördegh
- Faculty of Health Sciences and Social Studies, University of Szeged, Szeged, Hungary
- Kálmán Tót
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Ádám Kiss
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Szabolcs Kéri
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Gábor Braunitzer
- Nyírő Gyula Hospital, Laboratory for Perception & Cognition and Clinical Neuroscience, Budapest, Hungary
- Attila Nagy
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
4. Wang Z, Chen M, Goerlich KS, Aleman A, Xu P, Luo Y. Deficient auditory emotion processing but intact emotional multisensory integration in alexithymia. Psychophysiology 2021;58:e13806. PMID: 33742708. PMCID: PMC9285530. DOI: 10.1111/psyp.13806.
Abstract
Alexithymia has been associated with emotion recognition deficits in both the auditory and visual domains. Although emotions are inherently multimodal in daily life, little is known about abnormalities of emotional multisensory integration (eMSI) in relation to alexithymia. Here, we employed an emotional Stroop-like audiovisual task while recording event-related potentials (ERPs) in individuals with high alexithymia levels (HA) and low alexithymia levels (LA). During the task, participants had to indicate whether a voice was spoken in a sad or angry prosody while ignoring the simultaneously presented static face, which could be either emotionally congruent or incongruent with the voice. We found that HA performed worse and showed higher P2 amplitudes than LA, independent of emotion congruency. Furthermore, difficulties in identifying and describing feelings were positively correlated with the P2 component, and P2 correlated negatively with behavioral performance. Bayesian statistics showed no group differences in eMSI or in the classical integration-related ERP components (N1 and N2). Thus, although individuals with alexithymia showed deficits in auditory emotion recognition, as indexed by decreased performance and higher P2 amplitudes, the behavioral and electrophysiological data provide substantial evidence for an intact capacity to integrate emotional information from multiple channels. With high ecological validity, these findings are of particular importance given that humans are constantly exposed to competing, complex audiovisual emotional information in social interaction contexts, and they have implications for the psychophysiology of alexithymia and emotional processing.
Affiliation(s)
- Zhihao Wang
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Mai Chen
- School of Psychology, Shenzhen University, Shenzhen, China
- Katharina S Goerlich
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- André Aleman
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China
- Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Pengfei Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China
- Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
- Guangdong-Hong Kong-Macao Greater Bay Area Research Institute for Neuroscience and Neurotechnologies, Kwun Tong, Hong Kong, China
- Yuejia Luo
- Shenzhen Key Laboratory of Affective and Social Neuroscience, Magnetic Resonance Imaging, Center for Brain Disorders and Cognitive Sciences, Shenzhen University, Shenzhen, China
- State Key Laboratory of Cognitive Neuroscience and Learning, Faculty of Psychology, Beijing Normal University, Beijing, China
- Department of Psychology, Southern Medical University, Guangzhou, China
- The Research Center of Brain Science and Visual Cognition, Medical School, Kunming University of Science and Technology, Kunming, China
- Center for Neuroimaging, Shenzhen Institute of Neuroscience, Shenzhen, China
5. Measuring Farm Animal Emotions-Sensor-Based Approaches. Sensors (Basel) 2021;21:553. PMID: 33466737. PMCID: PMC7830443. DOI: 10.3390/s21020553.
Abstract
Understanding animal emotions is key to unlocking methods for improving animal welfare. Currently, there are no benchmarks or scientific assessments available for measuring and quantifying the emotional responses of farm animals. Using sensors to collect biometric data as a means of measuring animal emotions is a topic of growing interest in agricultural technology. Here we review several aspects of the use of sensor-based approaches to monitoring animal emotions, beginning with an introduction to animal emotions. We then review some of the available technological systems for analyzing animal emotions. These systems include a variety of sensors, the algorithms used to process the biometric data taken from these sensors, facial expression analysis, and sound analysis. We conclude that a single emotional expression measurement, based on either facial features or physiological functions, cannot accurately capture a farm animal's emotional changes, and that compound expression recognition is therefore required. We propose some novel ways to combine sensor technologies, through sensor fusion, into efficient systems for monitoring and measuring animals' compound expressions of emotion. Finally, we explore future perspectives in the field, including challenges and opportunities.
6. A crowd of emotional voices influences the perception of emotional faces: Using adaptation, stimulus salience, and attention to probe audio-visual interactions for emotional stimuli. Atten Percept Psychophys 2020;82:3973-3992. PMID: 32935292. DOI: 10.3758/s13414-020-02104-0.
Abstract
Correctly assessing the emotional state of others is a crucial part of social interaction. While facial expressions provide much information, faces are often not viewed in isolation but occur with concurrent sounds, usually voices, which also provide information about the emotion being portrayed. Many studies have examined the crossmodal processing of faces and sounds, but results have been mixed, with different paradigms yielding different results. Using a psychophysical adaptation paradigm, we carried out a series of four experiments to determine whether there is a perceptual advantage when faces and voices match in emotion (congruent) versus when they do not match (incongruent). We presented a single face with a crowd of voices, a crowd of faces with a crowd of voices, and a single face of reduced salience with a crowd of voices, testing this last condition both with and without attention directed to the emotion in the face. While we observed aftereffects in the hypothesized direction (adaptation to faces conveying positive emotion yielded negative, contrastive perceptual aftereffects), we found a congruency advantage (stronger adaptation effects) only when faces were attended and of reduced salience, in line with the principle of inverse effectiveness.
7. Raheel A, Majid M, Alnowami M, Anwar SM. Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia. Sensors (Basel) 2020;20:4037. PMID: 32708056. PMCID: PMC7411620. DOI: 10.3390/s20144037.
Abstract
Emotion recognition has increased the potential of affective computing by providing instant feedback from users and thereby a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content that engages one (audition) or two (audition and vision) human senses. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three human senses (touch, vision, and audition). The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded with commercially available sensors while participants experienced these tactile enhanced videos. The acquired physiological signals were pre-processed with a Savitzky-Golay smoothing filter. Frequency domain features (rational asymmetry, differential asymmetry, and correlation) were extracted from EEG; time domain features (variance, entropy, kurtosis, and skewness) from GSR; and heart rate and heart rate variability from PPG. A k-nearest neighbor classifier was applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that, among the individual modalities, PPG-based features give the highest accuracy, 78.57%, compared to EEG- and GSR-based features. Fusing the EEG, GSR, and PPG features further improved the classification accuracy to 79.76% for the four emotions.
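As a concrete illustration of this kind of pipeline, the sketch below smooths a raw signal with a Savitzky-Golay filter, extracts the four GSR time-domain features named in the abstract, and classifies trials with a k-nearest neighbor model. It is a minimal reconstruction under stated assumptions: the window length, histogram bin count, k, train/test split, and synthetic data are all illustrative choices, not the authors' actual code.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.stats import entropy, kurtosis, skew
from sklearn.neighbors import KNeighborsClassifier

def preprocess(signal):
    # Savitzky-Golay smoothing, as in the paper; window/order are assumed values
    return savgol_filter(signal, window_length=11, polyorder=3)

def gsr_features(gsr):
    # Time-domain features named in the abstract: variance, entropy, kurtosis, skewness
    hist, _ = np.histogram(gsr, bins=32, density=True)
    return np.array([np.var(gsr), entropy(hist + 1e-12), kurtosis(gsr), skew(gsr)])

# Synthetic stand-in for per-trial GSR recordings and emotion labels
rng = np.random.default_rng(42)
trials = rng.standard_normal((80, 1000))          # 80 trials, 1000 samples each
labels = rng.choice(["happy", "relaxed", "angry", "sad"], size=80)

X = np.vstack([gsr_features(preprocess(t)) for t in trials])
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:60], labels[:60])
print("held-out accuracy:", clf.score(X[60:], labels[60:]))
```

In the study itself, the EEG and PPG features would be computed analogously and concatenated with the GSR features before classification (feature-level fusion).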
Affiliation(s)
- Aasim Raheel
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Muhammad Majid
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Majdi Alnowami
- Department of Nuclear Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Syed Muhammad Anwar
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
8. Design and Characterization of an EEG-Hat for Reliable EEG Measurements. Micromachines (Basel) 2020;11:635. PMID: 32605330. PMCID: PMC7407528. DOI: 10.3390/mi11070635.
Abstract
In this study, a new hat-type electroencephalogram (EEG) device with candle-like microneedle electrodes (CMEs), called an EEG-Hat, was designed and fabricated. CMEs are dry EEG electrodes that can measure high-quality EEG signals without skin treatment or conductive gels. One of the challenges in the measurement of high-quality EEG signals is the fixation of electrodes to the skin, i.e., the design of a good EEG headset. The CMEs were able to achieve good contact with the scalp for heads of different sizes and shapes, and the EEG-Hat has a shutter mechanism to separate the hair and ensure good contact between the CMEs and the scalp. Simultaneous measurement of EEG signals from five measurement points on the scalp was successfully conducted after a simple and brief setup process. The EEG-Hat is expected to contribute to the advancement of EEG research.
9. Lu Z, Li Q, Gao N, Yang J, Bai O. Happy emotion cognition of bimodal audiovisual stimuli optimizes the performance of the P300 speller. Brain Behav 2019;9:e01479. PMID: 31729840. PMCID: PMC6908870. DOI: 10.1002/brb3.1479.
Abstract
OBJECTIVE: Prior studies of emotional cognition have found that emotion-based bimodal face and voice stimuli can elicit larger event-related potential (ERP) amplitudes and enhance neural responses compared with visual-only emotional face stimuli. Recent studies on brain-computer interfaces have shown that emotional face stimuli significantly improve the performance of the traditional P300 speller system, but its performance needs further improvement for practical applications. We therefore propose a novel audiovisual P300 speller based on bimodal emotional cognition.
METHODS: The proposed audiovisual P300 speller is based on happy emotion, with visual and auditory stimuli consisting of several pairs of smiling faces and audible chuckles (E-AV spelling paradigm) of different ages and sexes. The control paradigm was the visual-only emotional face P300 speller (E-V spelling paradigm).
RESULTS: We compared ERP amplitudes, accuracy, and raw bit rate between the E-AV and E-V spelling paradigms. Target stimuli elicited significantly larger P300 (p < .05) and P600 (p < .05) amplitudes in the E-AV paradigm than in the E-V paradigm. The E-AV paradigm also significantly improved spelling accuracy and raw bit rate compared to the E-V paradigm at one superposition (p < .05) and at two superpositions (p < .05).
SIGNIFICANCE: The proposed emotion-based audiovisual spelling paradigm not only significantly improves the performance of the P300 speller but also provides a basis for developing various bimodal P300 speller systems, a step forward in the clinical application of brain-computer interfaces.
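"Raw bit rate" in the P300 speller literature is commonly computed as the Wolpaw information transfer rate, without error-correction overhead; the sketch below implements that standard formula. The 6x6 speller matrix and the selection speed in the usage line are assumptions for illustration, not values from this paper.

```python
import math

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    # Wolpaw information per selection:
    # B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P) / (N - 1))
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0          # at or below chance, no information transferred
    if p == 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

def raw_bit_rate(n_classes: int, accuracy: float, selections_per_minute: float) -> float:
    # Raw bit rate in bits/min: information per selection times selection speed
    return wolpaw_bits_per_selection(n_classes, accuracy) * selections_per_minute

# Example: a hypothetical 6x6 matrix (36 symbols) at 85% accuracy, 4 selections/min
print(raw_bit_rate(36, 0.85, 4))  # ~15.2 bits/min
```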
Affiliation(s)
- Zhaohua Lu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Ning Gao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Jingjing Yang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Ou Bai
- Department of Electrical and Computer Engineering, Florida International University, Miami, FL, USA
10. Ding X, Liu J, Kang T, Wang R, Kret ME. Automatic Change Detection of Emotional and Neutral Body Expressions: Evidence From Visual Mismatch Negativity. Front Psychol 2019;10:1909. PMID: 31507485. PMCID: PMC6716465. DOI: 10.3389/fpsyg.2019.01909.
Abstract
Rapidly and effectively detecting emotions in others is an important social skill. Since emotions expressed by the face are relatively easy to fake or hide, we often use body language to gauge the genuine emotional state of others. Recent studies suggest that expression-related visual mismatch negativity (vMMN) reflects the automatic processing of emotional changes in facial expression; however, the automatic processing of changes in body expression has not yet been studied systematically. The current study uses an oddball paradigm, in which neutral body actions served as standard stimuli while fearful body expressions and other neutral body actions served as two different deviants, to define a body-related vMMN and to compare the processing of emotional changes with that of neutral postural changes. The results show a more negative vMMN amplitude for fear deviants 210-260 ms after stimulus onset, which corresponds with the negativity bias obtained on the N190 component. In earlier time windows, the vMMN amplitudes following the two types of deviant stimuli are identical. We therefore propose a two-stage model in which changes in body posture as such are processed at 170-210 ms and emotional changes at 210-260 ms.
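vMMN is conventionally quantified as a deviant-minus-standard difference wave, measured in a post-stimulus window such as the 210-260 ms window reported here; the sketch below illustrates that computation on epoched data. The sampling rate, baseline length, array shapes, and synthetic epochs are assumptions, not details from the study.

```python
import numpy as np

SFREQ = 500          # assumed sampling rate (Hz)
BASELINE = 0.2       # assumed 200 ms pre-stimulus baseline in each epoch

def difference_wave(deviant_epochs, standard_epochs):
    # vMMN: average deviant ERP minus average standard ERP
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mean_amplitude(erp, tmin, tmax):
    # Mean amplitude in a post-stimulus window (seconds), e.g. 0.210-0.260 s
    i0 = int((BASELINE + tmin) * SFREQ)
    i1 = int((BASELINE + tmax) * SFREQ)
    return erp[i0:i1].mean()

# Synthetic single-channel epochs: (n_trials, n_samples)
rng = np.random.default_rng(1)
standards = rng.standard_normal((200, 500))
fear_deviants = rng.standard_normal((50, 500)) - 0.3   # toy built-in negativity
vmmn = difference_wave(fear_deviants, standards)
print("fear vMMN, 210-260 ms:", mean_amplitude(vmmn, 0.210, 0.260))
```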
Affiliation(s)
- Xiaobin Ding
- Psychology Department, Northwest Normal University, Lanzhou, China
- Key Laboratory of Behavioral and Mental Health of Gansu Province, Lanzhou, China
- Jianyi Liu
- Psychology Department, Northwest Normal University, Lanzhou, China
- Key Laboratory of Behavioral and Mental Health of Gansu Province, Lanzhou, China
- Tiejun Kang
- Psychology Department, Northwest Normal University, Lanzhou, China
- Key Laboratory of Behavioral and Mental Health of Gansu Province, Lanzhou, China
- Rui Wang
- Psychology Department, Northwest Normal University, Lanzhou, China
- Key Laboratory of Behavioral and Mental Health of Gansu Province, Lanzhou, China
- Mariska E Kret
- Cognitive Psychology Department, Leiden University, Leiden, Netherlands
- Leiden Institute for Brain and Cognition (LIBC), Leiden, Netherlands
11. Izen SC, Lapp HE, Harris DA, Hunter RG, Ciaramitaro VM. Seeing a Face in a Crowd of Emotional Voices: Changes in Perception and Cortisol in Response to Emotional Information across the Senses. Brain Sci 2019;9:176. PMID: 31349644. PMCID: PMC6721384. DOI: 10.3390/brainsci9080176.
Abstract
One source of information we glean from everyday experience, and which guides social interaction, is the emotional state of others. Emotional state can be expressed through several modalities: body posture or movements, body odor, touch, facial expression, or the intonation of a voice. Much research has examined emotional processing within one sensory modality or the transfer of emotional processing from one modality to another. Yet less is known about interactions across modalities when perceiving emotions, despite our common experience of seeing emotion in a face while hearing the corresponding emotion in a voice. Our study examined whether visual and auditory emotions of matched valence (congruent) conferred stronger perceptual and physiological effects than visual and auditory emotions of unmatched valence (incongruent). We quantified how exposure to emotional faces and/or voices altered perception, using psychophysics, and how it altered a physiological proxy for stress or arousal, using salivary cortisol. While we found no significant advantage of congruent over incongruent emotions, we found that changes in cortisol were associated with perceptual changes. Following exposure to negative emotional content, larger decreases in cortisol, indicative of less stress, correlated with more positive perceptual aftereffects, indicative of stronger biases to see neutral faces as happier.
Affiliation(s)
- Sarah C Izen
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Hannah E Lapp
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Daniel A Harris
- Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 3M7, Canada
- Richard G Hunter
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
12. Eördegh G, Őze A, Bodosi B, Puszta A, Pertich Á, Rosu A, Godó G, Nagy A. Multisensory guided associative learning in healthy humans. PLoS One 2019;14:e0213094. PMID: 30861023. PMCID: PMC6413907. DOI: 10.1371/journal.pone.0213094.
Abstract
Associative learning is a basic cognitive function by which discrete, and often different, percepts are linked together. The Rutgers Acquired Equivalence Test investigates a specific kind of associative learning, visually guided equivalence learning. The test consists of an acquisition (pair learning) and a test (rule transfer) phase, which are associated primarily with the function of the basal ganglia and the hippocampi, respectively. Earlier studies have shown that both brain structures fundamentally involved in visual associative learning, the basal ganglia and the hippocampi, receive not only visual but also multisensory information. However, no study had investigated whether multisensory guided equivalence learning holds any advantage over unimodal learning, so no data were available on the modality dependence or independence of equivalence learning. In the present study, we therefore introduced auditory- and multisensory (audiovisual)-guided equivalence learning paradigms and investigated the performance of 151 healthy volunteers in the visual, auditory, and multisensory paradigms. Our results indicate that visually, auditorily, and multisensorially guided associative learning are similarly effective in healthy humans, suggesting that the acquisition phase is fairly independent of stimulus modality. On the other hand, in the test phase, where participants were presented with previously learned associations as well as associations that had not been seen or heard before but were predictable, multisensory stimuli elicited the best performance. The test phase, especially its generalization part, appears to be the harder cognitive task, and multisensory information processing could improve participants' performance there.
Affiliation(s)
- Gabriella Eördegh
- Department of Operative and Esthetic Dentistry, Faculty of Dentistry, University of Szeged, Szeged, Hungary
- Attila Őze
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Balázs Bodosi
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- András Puszta
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Ákos Pertich
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
- Anett Rosu
- Department of Psychiatry, Faculty of Medicine, University of Szeged, Szeged, Hungary
- György Godó
- Csongrád County Health Care Center, Psychiatric Outpatient Care, Hódmezővásárhely, Hungary
- Attila Nagy
- Department of Physiology, Faculty of Medicine, University of Szeged, Szeged, Hungary
13. Role of the human mirror system in automatic processing of musical emotion: Evidence from EEG. Acta Psychologica Sinica 2019. DOI: 10.3724/sp.j.1041.2019.00795.
14. Zhang H, Chen X, Chen S, Li Y, Chen C, Long Q, Yuan J. Facial Expression Enhances Emotion Perception Compared to Vocal Prosody: Behavioral and fMRI Studies. Neurosci Bull 2018;34:801-815. PMID: 29740753. DOI: 10.1007/s12264-018-0231-9.
Abstract
Facial and vocal expressions are essential modalities mediating the perception of emotion and social communication. Nonetheless, little is currently known about how emotion perception and its neural substrates differ between facial expression and vocal prosody. To clarify this issue, functional MRI scans were acquired in Study 1, in which participants were asked to discriminate the valence of emotional expressions (angry, happy, or neutral) from facial, vocal, or bimodal stimuli. In Study 2, we used an affective priming task (unimodal materials as primes and bimodal materials as targets), and participants were asked to rate the intensity, valence, and arousal of the targets. Study 1 showed higher accuracy and shorter response latencies in the facial than in the vocal modality for happy expressions. Whole-brain analysis showed enhanced activation during facial compared to vocal emotions in inferior temporal-occipital regions. Region of interest analysis showed a higher percentage signal change for facial than for vocal anger in the superior temporal sulcus. Study 2 showed that facial relative to vocal priming of anger had a greater influence on perceived emotion for bimodal targets, irrespective of target valence. These findings suggest that facial expression is associated with enhanced emotion perception compared to equivalent vocal prosodies.
Affiliation(s)
- Heming Zhang
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Key Laboratory of Modern Teaching Technology of the Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Shengdong Chen
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
- Yansong Li
- Department of Psychology, School of Social and Behavioral Sciences, Nanjing University, Nanjing 210023, China
- Changming Chen
- School of Educational Sciences, Xinyang Normal University, Xinyang 464000, China
- Quanshan Long
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
- Jiajin Yuan
- Key Laboratory of Cognition and Personality of the Ministry of Education, School of Psychology, Southwest University, Chongqing 400715, China
15. Meconi F, Doro M, Schiano Lomoriello A, Mastrella G, Sessa P. Neural measures of the role of affective prosody in empathy for pain. Sci Rep 2018;8:291. PMID: 29321532. PMCID: PMC5762917. DOI: 10.1038/s41598-017-18552-y.
Abstract
Emotional communication often requires the integration of affective prosodic and semantic components of speech with the speaker's facial expression. Affective prosody may have a special role by virtue of its dual nature: pre-verbal on one side and accompanying semantic content on the other. This consideration led us to hypothesize that it could act transversely, encompassing a wide temporal window that involves the processing of facial expressions and of the semantic content expressed by the speaker. This would allow powerful communication in contexts of potential urgency, such as witnessing the speaker's physical pain. Seventeen participants were shown faces preceded by verbal reports of pain. Facial expressions, the intelligibility of the semantic content of the report (i.e., participants' mother tongue vs. a fictional language), and the affective prosody of the report (neutral vs. painful) were manipulated. We monitored event-related potentials (ERPs) time-locked to the onset of the faces as a function of the semantic content intelligibility and affective prosody of the verbal reports. We found that affective prosody may interact with facial expressions and semantic content in two successive temporal windows, supporting its role as a transverse communication cue.
Affiliation(s)
- Federica Meconi
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Mattia Doro
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Giulia Mastrella
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Paola Sessa
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
16. Review and Classification of Emotion Recognition Based on EEG Brain-Computer Interface System Research: A Systematic Review. Applied Sciences 2017;7:1239. DOI: 10.3390/app7121239.
17. Pan Z, Liu X, Luo Y, Chen X. Emotional Intensity Modulates the Integration of Bimodal Angry Expressions: ERP Evidence. Front Neurosci 2017;11:349. PMID: 28680388. PMCID: PMC5478688. DOI: 10.3389/fnins.2017.00349.
Abstract
Integration of information from face and voice plays a central role in social interactions. The present study investigated how emotional intensity modulates the integration of facial-vocal emotional cues by recording EEG while participants performed an emotion identification task on facial, vocal, and bimodal angry expressions varying in emotional intensity. Behaviorally, anger identification rates and response speed increased with emotional intensity across modalities. Critically, P2 amplitudes were larger for bimodal expressions than for the sum of facial and vocal expressions for low emotional intensity stimuli, but not for middle or high emotional intensity stimuli. These findings suggest that emotional intensity modulates the integration of facial-vocal angry expressions, following the principle of inverse effectiveness (IE) in multimodal sensory integration.
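The contrast described here, bimodal (AV) response versus the sum of the unimodal responses (A + V), is the classic additive-model test for multisensory integration in ERP research; superadditivity at low stimulus intensity is the signature of inverse effectiveness. Below is a minimal sketch of that contrast on per-subject P2 amplitudes; the synthetic arrays and the paired t-test are illustrative assumptions, not the study's analysis code.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-subject mean P2 amplitudes (microvolts) for low-intensity stimuli
rng = np.random.default_rng(7)
av = rng.normal(5.0, 1.0, size=20)    # audiovisual condition
a = rng.normal(2.0, 1.0, size=20)     # auditory-only condition
v = rng.normal(2.2, 1.0, size=20)     # visual-only condition

# Additive model: compare AV against the sum of the unimodal responses
t, p = ttest_rel(av, a + v)
print(f"AV vs A+V: t = {t:.2f}, p = {p:.3f}  (AV > A+V indicates superadditivity)")
```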
Affiliation(s)
- Zhihui Pan
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Xi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, School of Brain Cognitive Science, Beijing Normal University, Beijing, China
- Yangmei Luo
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an, China
18. Chen X, Zheng T, Han L, Chang Y, Luo Y. The neural dynamics underlying the interpersonal effects of emotional expression on decision making. Sci Rep 2017;7:46651. PMID: 28425491. PMCID: PMC5397974. DOI: 10.1038/srep46651.
Abstract
Although numerous studies have explored the effects of emotion on decision-making, existing research has mainly focused on the influence of intrapersonal emotions, leaving the influence of one person's emotions on another's decisions understudied. To specify how interpersonal emotions shape decision-making and to delineate the underlying neural dynamics, the present study examined brain responses to utilitarian feedback combined with angry or happy faces in competitive and cooperative contexts. Behavioral results showed that participants responded more slowly following losses than wins when competitors expressed happiness, but responded faster following losses than wins when cooperators expressed anger. Importantly, angry faces in the competitive context reversed the differentiation pattern of the feedback-related negativity (FRN) between losses and wins and diminished the loss-win difference in both P300 amplitude and theta power, but only diminished the loss-win difference in the FRN in the cooperative context. However, when the partner displayed happiness, losses versus wins elicited larger FRN and theta power in the competitive context but smaller P300 in both contexts. These results suggest that interpersonal emotions shape decisions during both the automatic motivational salience valuation (FRN) and conscious cognitive appraisal (P300) stages of processing, with different emotional expressions exerting interpersonal influence through different routes.
Affiliation(s)
- Xuhai Chen
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Key Laboratory of Modern Teaching Technology, Ministry of Education, Shaanxi Normal University, Xi'an 710062, China
- Tingting Zheng
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Lingzi Han
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Yingchao Chang
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
- Yangmei Luo
- Key Laboratory of Behavior and Cognitive Psychology in Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
19. Jin Y, Mao Z, Ling Z, Xu X, Xie G, Yu X. Altered emotional prosody processing in patients with Parkinson's disease after subthalamic nucleus stimulation. Neuropsychiatr Dis Treat 2017;13:2965-2975. PMID: 29270014. PMCID: PMC5729839. DOI: 10.2147/ndt.s153505.
Abstract
BACKGROUND: Patients with Parkinson's disease (PD) exhibit deficits in recognizing and expressing vocal emotional prosody. The aim of this study was to explore emotional prosody processing in patients with PD shortly after subthalamic nucleus (STN) deep brain stimulation (DBS).
METHODS: Two groups of patients with PD (pre-DBS and post-DBS) and one healthy control (HC) group were recruited. All participants were assessed with the 50-voice recognition test from the Montreal Affective Voices database. To test emotional prosody expression, all participants were asked to nonverbally express five basic emotions (happiness, anger, fear, sadness, and neutral). Fifteen native Chinese speakers were recruited as raters. We recorded accuracy rate, reaction time, confidence level, and two acoustic parameters (mean pitch and mean intensity).
RESULTS: The PD groups scored lower than the HC group in both recognizing and expressing emotional prosody. STN DBS had no significant effect on the recognition of emotional prosody but had a significant effect on the expression of fearful prosody. Pearson's correlation analysis revealed significant correlations between performance on the emotional prosody recognition tests and performance on the expression tests in both the pre-DBS and post-DBS PD groups.
CONCLUSION: Shortly after STN DBS, the ability to recognize emotional prosody was unaltered, but fear expression was impaired. The associations between emotional prosody recognition and expression deficits, observed both before and after STN DBS, indicate that recognizing and expressing emotional prosody may share a common system.
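The two acoustic parameters used here, mean pitch and mean intensity, can be estimated from a recording roughly as follows. This sketch uses librosa's pYIN pitch tracker and an RMS-based intensity proxy, which are illustrative choices rather than the study's actual measurement procedure; the file name and pitch range are also assumptions.

```python
import numpy as np
import librosa

# Load a (hypothetical) recording of an emotional prosody utterance
y, sr = librosa.load("utterance.wav", sr=None)

# Mean pitch (Hz): pYIN fundamental-frequency track, averaged over voiced frames
f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=500, sr=sr)
mean_pitch = np.nanmean(f0[voiced]) if np.any(voiced) else float("nan")

# Mean intensity (dB): frame-wise RMS energy converted to decibels, then averaged
rms = librosa.feature.rms(y=y)[0]
mean_intensity = np.mean(20 * np.log10(rms + 1e-10))

print(f"mean pitch: {mean_pitch:.1f} Hz, mean intensity: {mean_intensity:.1f} dB")
```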
Affiliation(s)
- Yazhou Jin
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
- Zhiqi Mao
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
- Zhipei Ling
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
- Xin Xu
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
- Guang Xie
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
- Xinguang Yu
- Department of Neurosurgery, People's Liberation Army General Hospital, Beijing, People's Republic of China
20. Symons AE, El-Deredy W, Schwartze M, Kotz SA. The Functional Role of Neural Oscillations in Non-Verbal Emotional Communication. Front Hum Neurosci 2016;10:239. PMID: 27252638. PMCID: PMC4879141. DOI: 10.3389/fnhum.2016.00239.
Abstract
Effective interpersonal communication depends on the ability to perceive and interpret nonverbal emotional expressions from multiple sensory modalities. Current theoretical models propose that visual and auditory emotion perception involves a network of brain regions including the primary sensory cortices, the superior temporal sulcus (STS), and orbitofrontal cortex (OFC). However, relatively little is known about how the dynamic interplay between these regions gives rise to the perception of emotions. In recent years, there has been increasing recognition of the importance of neural oscillations in mediating neural communication within and between functional neural networks. Here we review studies investigating changes in oscillatory activity during the perception of visual, auditory, and audiovisual emotional expressions, and aim to characterize the functional role of neural oscillations in nonverbal emotion perception. Findings from the reviewed literature suggest that theta band oscillations most consistently differentiate between emotional and neutral expressions. While early theta synchronization appears to reflect the initial encoding of emotionally salient sensory information, later fronto-central theta synchronization may reflect the further integration of sensory information with internal representations. Additionally, gamma synchronization reflects facilitated sensory binding of emotional expressions within regions such as the OFC, STS, and, potentially, the amygdala. However, the evidence is more ambiguous when it comes to the role of oscillations within the alpha and beta frequencies, which vary as a function of modality (or modalities), presence or absence of predictive information, and attentional or task demands. Thus, the synchronization of neural oscillations within specific frequency bands mediates the rapid detection, integration, and evaluation of emotional expressions. Moreover, the functional coupling of oscillatory activity across multiple frequency bands supports a predictive coding model of multisensory emotion perception in which emotional facial and body expressions facilitate the processing of emotional vocalizations.
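A common way to quantify the band-specific synchronization this review discusses is Morlet-wavelet time-frequency decomposition of epoched EEG, baseline-normalized to express event-related synchronization. The sketch below uses MNE-Python's array interface on synthetic data; all shapes, the sampling rate, the frequency range, and the baseline window are chosen for illustration only.

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 250.0
rng = np.random.default_rng(3)
# Synthetic epochs: (n_epochs, n_channels, n_times), 1.2 s per epoch
epochs = rng.standard_normal((30, 4, 300))

freqs = np.arange(4.0, 9.0)                 # theta and low alpha, in Hz
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="avg_power")

# Baseline-normalize (percent change from the first 200 ms of the epoch)
# so positive values index event-related synchronization
baseline = power[..., : int(0.2 * sfreq)].mean(axis=-1, keepdims=True)
ers = 100 * (power - baseline) / baseline
print(ers.shape)  # (n_channels, n_freqs, n_times)
```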
Affiliation(s)
- Ashley E. Symons
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Wael El-Deredy
- School of Psychological Sciences, University of Manchester, Manchester, UK
- School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso, Chile
- Michael Schwartze
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Sonja A. Kotz
- School of Psychological Sciences, University of Manchester, Manchester, UK
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands