1
Quirin M, Malekzad F, Jais M, Kehr H, Ennis M. Heart rate variability and psychological health: The key role of trait emotional awareness. Acta Psychol (Amst) 2024; 246:104252. [PMID: 38677024] [DOI: 10.1016/j.actpsy.2024.104252]
Abstract
Studies have shown that Trait Emotional Awareness (TEA) - the ability to recognize one's emotions - and Heart Rate Variability (HRV) are both negatively associated with psychological disorders. Although these studies imply that TEA is related to HRV and may explain the association between HRV and psychological disorders, there is limited research investigating this implication. Such investigation is essential to illuminate the psychophysiological processes linked to psychological disorders. The present study investigated a) the association between TEA and HRV, b) the association between HRV and psychological disorders, and c) whether TEA explains the association between HRV and psychological disorders. A sample of 41 German students completed self-report questionnaires as indicators of psychological disorders, including the Hospital Anxiety and Depression Scale (HADS; Snaith & Zigmond, 1983) for anxiousness and depressiveness, as well as the somatization scale of the Hopkins Symptom Checklist (HSCL; Derogatis et al., 1976) for physical complaints. HRV was measured at baseline (resting HRV) and during exposure to a fear-provoking movie clip (reactive HRV). As hypothesized, a) TEA showed a positive association with reactive HRV, b) HRV showed negative associations with anxiousness and physical complaints, and c) TEA explained the relationships between reactive HRV and anxiousness as well as physical complaints. Contrary to our hypothesis, we found no association between HRV and depressiveness. We discuss the contribution of TEA to psychophysiological health, note the limited generalizability of the current study, and point future research toward the underlying mechanisms linking TEA to health.
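The abstract does not specify which HRV index was used. Purely as an illustration, two standard time-domain HRV metrics (RMSSD and SDNN) can be computed from interbeat (RR) intervals as follows; the function names and sample values below are hypothetical, not taken from the study:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

def sdnn(rr_ms):
    """Sample standard deviation of RR intervals (ms)."""
    return float(np.std(np.asarray(rr_ms, dtype=float), ddof=1))

# Made-up RR intervals (ms) from a short resting recording
rr = [812, 790, 805, 830, 798, 810, 822, 795]
resting_rmssd = rmssd(rr)
resting_sdnn = sdnn(rr)
```

Reactive HRV would be the same computation applied to the RR series recorded during the film clip.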
Affiliation(s)
- Markus Quirin
- Technical University of Munich, Germany; PFH Göttingen, Germany.
- Farhood Malekzad
- Technical University of Munich, Germany; PFH Göttingen, Germany.
- Hugo Kehr
- Technical University of Munich, Germany.
2
İyilikci EA, Boğa M, Yüvrük E, Özkılıç Y, İyilikci O, Amado S. An extended emotion-eliciting film clips set (EGEFILM): assessment of emotion ratings for 104 film clips in a Turkish sample. Behav Res Methods 2024; 56:529-562. [PMID: 36737582] [DOI: 10.3758/s13428-022-02055-4]
Abstract
The primary aim of this study was to test the emotion-elicitation levels of widely used film clips in a Turkish sample and to expand existing databases by adding several new film clips with the capacity to elicit a wide range of emotions, including a rarely studied emotion category, calmness. For this purpose, we conducted a comprehensive review of prior studies and collected a large number of new suggestions from a Turkish sample to select film clips for eight emotion categories: amusement, tenderness, calmness, anger, sadness, disgust, fear, and neutrality. Furthermore, we aimed to assess the emotion-eliciting levels of short video clips, mostly taken from amateur video footage. In total, 104 film clips were tested online by having participants rate several affective dimensions. Self-reported emotional experience was assessed in terms of intensity, discreteness, valence, and arousal. At least one of the existing film clips, most of the new film clips, and the short video clips were successful at eliciting medium to high levels of the target emotions. However, we also observed overlaps between certain emotions (e.g., tenderness-sadness, anger-sadness-disgust, or fear-anxiety). The current results are mostly in line with previous databases, suggesting that film clips are efficient at eliciting a wide range of emotions, while cultural background might play a role in the elicitation of certain emotions (e.g., amusement, anger). We hope that this extended emotion-eliciting film clip set (EGEFILM) will provide a rich resource for future emotion research both in Turkey and internationally.
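A common way to operationalize "discreteness" in clip-validation studies of this kind is a hit rate: the proportion of raters whose highest-rated emotion matches the clip's target category. The sketch below is illustrative only; the exact scoring rule and rating scale used by the authors may differ, and the ratings are invented:

```python
import numpy as np

EMOTIONS = ["amusement", "tenderness", "calmness", "anger",
            "sadness", "disgust", "fear", "neutrality"]

def discreteness(ratings, target):
    """Proportion of raters whose highest-rated category is the target."""
    ratings = np.asarray(ratings, dtype=float)   # raters x emotions
    t = EMOTIONS.index(target)
    hits = np.argmax(ratings, axis=1) == t
    return float(hits.mean())

# 3 hypothetical raters x 8 emotion-intensity ratings for one fear clip
r = [[1, 0, 0, 2, 3, 2, 7, 0],
     [0, 0, 0, 1, 2, 1, 6, 1],
     [2, 1, 0, 3, 5, 2, 4, 0]]
hit_rate = discreteness(r, "fear")   # third rater rated sadness highest
```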
Affiliation(s)
- Merve Boğa
- Department of Psychology, Ege University, Bornova, 35400, Izmir, Turkey
- Elif Yüvrük
- Department of Psychology, Ege University, Bornova, 35400, Izmir, Turkey
- Yıldız Özkılıç
- Department of Psychology, İzmir Bakırçay University, Izmir, Turkey
- Osman İyilikci
- Department of Psychology, Manisa Celal Bayar University, Manisa, Turkey
- Sonia Amado
- Department of Psychology, Ege University, Bornova, 35400, Izmir, Turkey
3
Gao C, Uchitomi H, Miyake Y. Cross-Sensory EEG Emotion Recognition with Filter Bank Riemannian Feature and Adversarial Domain Adaptation. Brain Sci 2023; 13:1326. [PMID: 37759927] [PMCID: PMC10526196] [DOI: 10.3390/brainsci13091326]
Abstract
Emotion recognition is crucial for understanding human affective states and has various applications. Electroencephalography (EEG) - a non-invasive neuroimaging technique that captures brain activity - has gained attention in emotion recognition. However, existing EEG-based emotion recognition systems are limited to specific sensory modalities, hindering their applicability. Our study advances EEG emotion recognition by offering a comprehensive framework that overcomes sensory-focused limits and cross-sensory challenges. We collected cross-sensory emotion EEG data using multimodal emotion simulations (three sensory modalities: audio, visual, and audio-visual; two emotion states: pleasure and unpleasure). The proposed framework, the filter bank adversarial domain adaptation Riemann method (FBADR), leverages filter bank techniques and Riemannian tangent space methods for feature extraction from cross-sensory EEG data. Compared with Riemannian methods alone, the filter bank and adversarial domain adaptation improved average accuracy by 13.68% and 8.36%, respectively. Comparative analysis of classification results showed that the proposed FBADR framework achieved state-of-the-art cross-sensory emotion recognition performance, with an average accuracy of 89.01% ± 5.06%. Moreover, the robustness of the proposed methods ensured high cross-sensory recognition performance at a signal-to-noise ratio (SNR) ≥ 1 dB. Overall, our study contributes to the EEG-based emotion recognition field by providing a comprehensive framework that overcomes the limitations of sensory-oriented approaches and successfully tackles the difficulties of cross-sensory situations.
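The FBADR pipeline itself is not detailed in this abstract. As a rough sketch of the filter-bank Riemannian feature idea only: band-filter the EEG, compute a covariance matrix per band, and map each covariance into a tangent space. The version below uses a log-Euclidean simplification of the Riemannian mapping, omits the adversarial domain-adaptation stage entirely, and runs on random toy data; band edges, regularization, and dimensions are our assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def tangent_vector(cov):
    # Matrix logarithm via eigendecomposition (cov is symmetric positive
    # definite), then upper-triangle vectorization: a log-Euclidean
    # tangent-space mapping at the identity.
    w, v = np.linalg.eigh(cov)
    log_cov = v @ np.diag(np.log(w)) @ v.T
    return log_cov[np.triu_indices(log_cov.shape[0])]

def filter_bank_features(eeg, fs, bands):
    feats = []
    for lo, hi in bands:
        xb = bandpass(eeg, lo, hi, fs)
        cov = xb @ xb.T / xb.shape[1]
        cov += 1e-8 * np.trace(cov) * np.eye(cov.shape[0])  # keep SPD
        feats.append(tangent_vector(cov))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1024))      # 8 channels x 1024 samples (toy data)
bands = [(4, 8), (8, 13), (13, 30)]       # theta, alpha, beta
features = filter_bank_features(eeg, fs=256.0, bands=bands)
```

The resulting vector (36 features per band for 8 channels, 108 in total here) would then feed a classifier trained with the domain-adaptation objective.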
Affiliation(s)
- Chenguang Gao
- Department of Computer Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan
4
Gao C, Uchitomi H, Miyake Y. Influence of Multimodal Emotional Stimulations on Brain Activity: An Electroencephalographic Study. Sensors (Basel) 2023; 23:4801. [PMID: 37430714] [PMCID: PMC10221168] [DOI: 10.3390/s23104801]
Abstract
This study aimed to reveal the influence of emotional valence and sensory modality on neural activity in response to multimodal emotional stimuli using scalp EEG. Twenty healthy participants completed an emotional multimodal stimulation experiment with three stimulus modalities (audio, visual, and audio-visual), all derived from the same video source with two emotional components (pleasure or unpleasure); EEG data were collected under six experimental conditions and one resting state. We analyzed power spectral density (PSD) and event-related potential (ERP) components in response to the multimodal emotional stimuli for spectral and temporal analysis. The PSD results showed that single-modality (audio-only/visual-only) emotional stimulation differed from multimodal (audio-visual) stimulation across a wide range of brain regions and frequency bands, and that these differences were driven by the change in modality rather than by the change in emotional degree. The most pronounced N200-to-P300 potential shifts occurred under monomodal rather than multimodal emotional stimulation. This study suggests that emotional saliency and sensory processing efficiency play a significant role in shaping neural activity during multimodal emotional stimulation, with sensory modality being the more influential factor in PSD. These findings contribute to our understanding of the neural mechanisms involved in multimodal emotional stimulation.
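As a toy illustration of the ERP side of such an analysis (not the authors' actual pipeline; the sampling rate, epoch window, and data below are invented), stimulus-locked epochs can be averaged after pre-stimulus baseline correction:

```python
import numpy as np

def erp_average(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """Average stimulus-locked epochs after pre-stimulus baseline correction."""
    pre = int(round(-tmin * fs))
    post = int(round(tmax * fs))
    epochs = []
    for onset in onsets:
        seg = eeg[onset - pre: onset + post].astype(float)
        seg -= seg[:pre].mean()   # subtract the pre-stimulus baseline
        epochs.append(seg)
    return np.mean(epochs, axis=0)

fs = 250.0
rng = np.random.default_rng(2)
eeg = rng.standard_normal(int(fs) * 60)            # one channel, 60 s of toy data
onsets = np.arange(500, eeg.size - 250, 1000)      # simulated stimulus onsets
erp = erp_average(eeg, onsets, fs)                 # 250 samples: -200 ms..+800 ms
```

Components such as N200 and P300 would then be quantified from the averaged waveform within their conventional latency windows.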
Affiliation(s)
- Chenguang Gao
- Department of Computer Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan
5
Deng Y, Wang Y, Xu L, Meng X, Wang L. Do you like it or not? Identifying preference using an electroencephalogram during the viewing of short videos. Psych J 2023. [PMID: 37186458] [DOI: 10.1002/pchj.645]
Abstract
Accurately predicting whether a short video will be liked by viewers is a topic of interest to media researchers. This study used electroencephalography (EEG) to record neural activity in 109 participants as they watched short videos (16 clips per person) to determine which neural signals reflected viewers' preferences. The results showed that, when watching short videos they liked compared with those they disliked, individuals experienced more positive emotions [indexed by higher theta power and a lower (beta - theta)/(beta + theta) score], more relaxed states (indexed by lower beta power), lower levels of mental engagement and alertness [indexed by a lower beta/(alpha + theta) score], and devoted more attention (indexed by a lower alpha/theta score). We further used artificial neural networks to classify the neural signals associated with different preferences induced by the short videos. Classification accuracy was highest (75.78%) when using data from all bands over the whole brain. These results indicate the potential of EEG measurement for evaluating individuals' subjective preferences for short videos.
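The bracketed indices can be reproduced directly from band powers. A minimal sketch on synthetic data follows; the ratio formulas come from the abstract, while the band edges, Welch settings, function names, and the index labels are our assumptions:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(x, fs):
    """Integrate the Welch PSD over each canonical frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 512))
    df = f[1] - f[0]
    return {name: float(pxx[(f >= lo) & (f < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

def preference_indices(x, fs):
    p = band_powers(x, fs)
    return {
        "beta_theta_balance": (p["beta"] - p["theta"]) / (p["beta"] + p["theta"]),
        "engagement": p["beta"] / (p["alpha"] + p["theta"]),
        "attention": p["alpha"] / p["theta"],
    }

# Synthetic signal dominated by a 6 Hz (theta) oscillation
fs = 256.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)
idx = preference_indices(x, fs)
```

On this theta-dominated toy signal all three indices come out low, the pattern the study associates with liked videos.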
Affiliation(s)
- Yaling Deng
- State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
- Ye Wang
- State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China
- Neuroscience and Intelligent Media Institute, Communication University of China, Beijing, China
- Liming Xu
- School of Journalism, Communication University of China, Beijing, China
- Xiangli Meng
- School of International Studies, Communication University of China, Beijing, China
- Lingxiao Wang
- School of Animation and Digital Art, Communication University of China, Beijing, China
6
Ruiz-Padial E, Moreno-Padilla M, Reyes Del Paso GA. Did you get the joke? Physiological, subjective and behavioral responses to mirth. Psychophysiology 2023; 60:e14292. [PMID: 36938983] [DOI: 10.1111/psyp.14292]
Abstract
Mirth is elicited by the perception of humor, which requires the resolution of an incongruity in an unexpected and playful manner. Previous psychophysiological research using affective pictures is scarce and has not elucidated the cognitive and affective components of the humor process. In this study, a passive viewing paradigm was applied to mirthful, incongruent, neutral, and erotic pictures to characterize the emotional response of mirth. Physiological (zygomaticus major [ZM] activity, skin conductance response [SCR], and heart rate [HR]), behavioral (free viewing time), and subjective responses (mirthfulness ratings) were recorded from 63 participants. The presence of an inflection change in the ZM response and mirthfulness ratings were used as markers of humor comprehension. Participants showed the greatest ZM and HR responses to mirthful compared to incongruent, erotic, and neutral pictures, as well as a stronger SCR to mirthful compared to incongruent and neutral pictures. The overall results shed light on the temporal course of the humor process, suggesting that humor comprehension (the cognitive component) occurred around 1000-1500 ms after picture onset, according to the ZM and SCR responses, and that the humor appreciation stage (the emotional component) occurred at around 3500 ms after stimulus onset, according to the HR and SCR changes. Moreover, marked interindividual variability was observed in the number of smiles and in the pictures that provoked them. This points to the complexity of the humor process and suggests the need to develop methods to elicit mirth and to elucidate the factors potentially underlying individual differences in humor.
7
Chen R, Liu Y. A Study on Chinese Audience's Receptive Behavior towards Chinese and Western Cultural Hybridity Films Based on Grounded Theory-Taking Disney's Animated Film Turning Red as an Example. Behav Sci (Basel) 2023; 13:135. [PMID: 36829364] [PMCID: PMC9952482] [DOI: 10.3390/bs13020135]
Abstract
For a long time, Chinese audiences have not had a high opinion of hybrid Chinese and Western movies. However, the unanimous praise for Turning Red in China seems to have reversed this situation. To verify whether Chinese audiences' attitudes toward the film's hybridization of Chinese and Western cultures have changed, this study collected textual materials reflecting Chinese audiences' receptive attitudes toward the film: Douban reviews, short reviews, questionnaires, and Mtime.com reviews. Through a grounded study of 664,312 words, 16 initial categories and four main categories were obtained. Finally, a cognitive-emotional-attitudinal mechanism model was formed to explain the audience's receptive behavior process. The study found that Chinese audiences' positive reception of Turning Red comes more from the fact that the film touches on personal emotions and focuses on issues such as growing up, family, and gender, with intergenerational conflict at the core. The audience achieves self-projection and empathy while watching the film, rather than recognizing the Chinese culture presented therein. On this basis, the research further found that the internal structure of current cultural hybridity has not changed greatly. The reason audiences do not rate cultural hybridity films highly lies in the lack of conscious distinction between the hybrid culture and the local culture. At the same time, in cross-cultural creation, creators should abandon the blind pursuit of cultural symbols, take root in the cultural soil, and then attend to more specific problems. This study reveals that the key factor affecting audiences' receptive behavior toward cultural hybridity films is not necessarily the performance of local culture, which is of great significance for establishing new evaluation criteria.
Affiliation(s)
- Rui Chen
- School of Literature and Journalism, Xihua University, Chengdu 610039, China
- Research Institute of International of Economics and Management, Xihua University, Chengdu 610039, China
- Yi Liu
- School of Literature and Journalism, Xihua University, Chengdu 610039, China
8
The hybrid discrete–dimensional frame method for emotional film selection. Curr Psychol 2022. [DOI: 10.1007/s12144-022-04038-2]
9
Leppanen J, Patsalos O, Surguladze S, Kerr-Gaffney J, Williams S, Tchanturia K. Evaluation of film stimuli for the assessment of social-emotional processing: a pilot study. PeerJ 2022; 10:e14160. [PMID: 36444380] [PMCID: PMC9700451] [DOI: 10.7717/peerj.14160]
Abstract
Background: Difficulties in top-down and bottom-up emotion generation have been proposed to play a key role in the progression of psychiatric disorders. The aim of the current study was to develop more ecologically valid measures of top-down interpretation biases and bottom-up evoked emotional responses. Methods: A total of 124 healthy female participants aged 18-25 took part in the study. We evaluated two sets of 18 brief film clips. The first set presented ambiguous social situations designed to examine interpretation biases. Participants provided written interpretations of each ambiguous film clip, which were subjected to sentiment analysis. We compared the films in terms of the valence of participants' interpretations. The second set presented neutral and emotionally provoking social scenarios designed to elicit subjective and facial emotional responses. While participants viewed these film clips, their mood ratings and facial affect were recorded and analyzed using exploratory factor analyses. Results: Most of the 18 ambiguous film clips were interpreted in the expected manner while still retaining some ambiguity. However, participants were more attuned to the negative cues in the ambiguous film clips, and three film clips were identified as unambiguous. These film clips were deemed unsuitable for assessing interpretation bias. The exploratory factor analyses of participants' mood ratings and evoked facial affect showed that the positive and negative emotionally provoking film clips formed their own factors, as expected. However, there was substantial cross-loading of the neutral film clips when participants' facial expression data were analyzed. Discussion: A subset of the film clips from the two tasks could be used to assess top-down interpretation biases and bottom-up evoked emotional responses. Ambiguous negatively valenced film clips should have more subtle negative cues to avoid ceiling effects and to ensure there is enough room for interpretation.
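The sentiment analysis of written interpretations can be illustrated with a toy lexicon-based valence scorer. The mini-lexicon and weights below are invented for illustration; the study's actual tool is not specified here, and in practice a validated instrument (e.g., VADER) would be used:

```python
# Hypothetical mini-lexicon mapping words to valence weights in [-1, 1].
VALENCE = {"happy": 1.0, "calm": 0.5, "tense": -0.5, "afraid": -1.0,
           "angry": -1.0, "friendly": 0.8, "threatening": -0.9}

def interpretation_valence(text):
    """Mean valence of lexicon words found in a free-text interpretation."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [VALENCE[w] for w in words if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

neg = interpretation_valence("He looked angry and threatening.")
pos = interpretation_valence("They seemed happy and friendly.")
```

Clip-level valence would then be the distribution of such scores across participants' interpretations of that clip.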
Affiliation(s)
- Jenni Leppanen
- Department of Neuroimaging, King’s College London, University of London, London, United Kingdom
- Olivia Patsalos
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Sophie Surguladze
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Jess Kerr-Gaffney
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Steven Williams
- Department of Neuroimaging, King’s College London, University of London, London, United Kingdom
- Ketevan Tchanturia
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- South London and Maudsley NHS Foundation Trust National Eating Disorder Service, London, United Kingdom
- Psychology Department, Illia State University, Tbilisi, Georgia
10
Sainz-de-Baranda Andujar C, Gutiérrez-Martín L, Miranda-Calero JÁ, Blanco-Ruiz M, López-Ongil C. Gender biases in the training methods of affective computing: Redesign and validation of the Self-Assessment Manikin in measuring emotions via audiovisual clips. Front Psychol 2022; 13:955530. [PMID: 36337482] [PMCID: PMC9632736] [DOI: 10.3389/fpsyg.2022.955530]
Abstract
Audiovisual communication is contributing greatly to the emerging research field of affective computing. The use of audiovisual stimuli within immersive virtual reality environments provokes very intense emotional reactions, including spontaneous physical and physiological changes that can be treated as real responses. To ensure high-quality recognition, the artificial intelligence (AI) system must be trained with adequate data sets, including not only the data gathered by smart sensors but also the labels of the elicited emotion. Currently, very few techniques are available for the labeling of emotions. Among them, the Self-Assessment Manikin (SAM) devised by Lang is one of the most popular. This study shows experimentally that the graphic design of the original SAM labeling system, as devised by Lang, is not gender-neutral and contains gender biases in its design and representation. Therefore, a new graphic design was proposed and tested according to the guidelines of expert judges. The results of the experiment show an overall improvement in the labeling of emotions in the pleasure-arousal-dominance (PAD) affective space, particularly for women. This research demonstrates the relevance of applying a gender perspective in the validation of tools that have been used over the years.
Affiliation(s)
- Clara Sainz-de-Baranda Andujar
- Department of Communication and Media Studies, Universidad Carlos III de Madrid, Getafe, Madrid, Spain
- Institute on Gender Studies, Universidad Carlos III de Madrid, Getafe, Madrid, Spain
- Marian Blanco-Ruiz
- Institute on Gender Studies, Universidad Carlos III de Madrid, Getafe, Madrid, Spain
- Department of Audiovisual Communication and Advertising, Universidad Rey Juan Carlos, Fuenlabrada, Spain
- Celia López-Ongil
- Institute on Gender Studies, Universidad Carlos III de Madrid, Getafe, Madrid, Spain
- Department of Electronic Technology, Universidad Carlos III de Madrid, Leganés, Spain
11
Behnke M, Kreibig SD, Kaczmarek LD, Assink M, Gross JJ. Autonomic Nervous System Activity During Positive Emotions: A Meta-Analytic Review. Emotion Review 2022. [DOI: 10.1177/17540739211073084]
Abstract
Autonomic nervous system (ANS) activity is a fundamental component of emotional responding. It is not clear, however, whether positive emotional states are associated with differential ANS reactivity. To address this issue, we conducted a meta-analytic review of 120 articles (686 effect sizes, total N = 6,546), measuring ANS activity during 11 elicited positive emotions, namely amusement, attachment love, awe, contentment, craving, excitement, gratitude, joy, nurturant love, pride, and sexual desire. We identified a widely dispersed collection of studies. Univariate results indicated that positive emotions produce no or weak and highly variable increases in ANS reactivity. However, the limitations of work to date – which we discuss – mean that our conclusions should be treated as empirically grounded hypotheses that future research should validate.
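For readers unfamiliar with how such effect sizes are pooled, here is a minimal random-effects sketch using the DerSimonian-Laird estimator. This is a textbook method chosen for illustration, not necessarily the (possibly multilevel) model the authors used, and the effect sizes and variances below are invented:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)                 # fixed-effect mean
    q = np.sum(w * (y - fixed) ** 2)                  # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Four made-up study-level effects and sampling variances
pooled, se, tau2 = dersimonian_laird([0.30, 0.10, -0.05, 0.20],
                                     [0.02, 0.03, 0.04, 0.05])
```

With tau2 near zero the estimate collapses to the inverse-variance fixed-effect mean; large tau2 spreads weight more evenly across studies.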
Affiliation(s)
- Maciej Behnke
- Faculty of Psychology and Cognitive Science, Adam Mickiewicz University
- Mark Assink
- Research Institute of Child Development and Education, University of Amsterdam
12
Persian emotion elicitation film set and signal database. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103290]
13
Diconne K, Kountouriotis GK, Paltoglou AE, Parker A, Hostler TJ. Presenting KAPODI – The Searchable Database of Emotional Stimuli Sets. Emotion Review 2022. [DOI: 10.1177/17540739211072803]
Abstract
Emotional stimuli such as images, words, or video clips are often used in studies researching emotion. New sets are continuously being published, creating an immense number of available sets and complicating the task for researchers looking for suitable stimuli. This paper presents KAPODI, a database of emotional stimuli sets that are freely available or available upon request. Over 45 aspects, including over 25 key set characteristics, have been extracted and listed for each set. The database facilitates finding and comparing individual sets. It currently contains sets published between 1963 and 2020. A searchable online version ( https://airtable.com/shrnVoUZrwu6riP9b ) allows users to select specific set characteristics, find matching sets accordingly, and add newly published sets.
Affiliation(s)
- Kathrin Diconne
- Department of Psychology, Manchester Metropolitan University
- Andrew Parker
- Department of Psychology, Manchester Metropolitan University
14
Zhou L, Zou T, Zhang L, Lin JM, Zhang YY, Liang ZY. "Carpe Diem?": Disjunction Effect of Incidental Affect on Intertemporal Choice. Front Psychol 2021; 12:782472. [PMID: 34956000] [PMCID: PMC8702439] [DOI: 10.3389/fpsyg.2021.782472]
Abstract
Incidental affect has an important impact on intertemporal choice (IC). This research aimed to test how positive incidental affect influences IC and its underlying mechanisms. We hypothesized that positive incidental affect may have a disjunction effect on IC, differing between choices that include an immediate option and those that do not. Moreover, we examined the role of time perception in the effect of affect on IC. In Study 1, after affect priming with video clips, participants completed an IC task using a multiple staircase paradigm. Using hierarchical Bayesian modeling, we estimated the discount rate parameter separately for the "immediate" and "non-immediate" conditions of IC. The participants' time perception was also measured. In Study 2, in addition to choice preference in IC, we investigated differences in participants' attention to the delay and reward attributes before decision making. The results of the two studies indicated that positive incidental affect leads to longer time perception (Study 1) and to earlier and greater attention to the delay attribute of IC (Study 2), which leads individuals to prefer immediate options (Studies 1 and 2). Moreover, there was a disjunction effect: the incidental affect did not influence IC when no immediate option was involved (Studies 1 and 2). This study improves our understanding of the disjunction effect of positive incidental affect on IC and its mechanism, and thus provides a new perspective on how related decision making can be improved.
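The discount rate estimated in such models is commonly the k of a hyperbolic value function, V = A / (1 + kD). The sketch below pairs that value function with a softmax choice rule; both are standard textbook choices used here for illustration, not necessarily the exact model specified in the study, and the amounts, delays, and k values are invented:

```python
import math

def hyperbolic_value(amount, delay_days, k):
    """Subjective value under hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay_days)

def p_choose_delayed(immediate, delayed, delay_days, k, beta=1.0):
    """Softmax probability of choosing the delayed reward over the immediate one."""
    v_now = hyperbolic_value(immediate, 0.0, k)
    v_later = hyperbolic_value(delayed, delay_days, k)
    return 1.0 / (1.0 + math.exp(-beta * (v_later - v_now)))

# A patient chooser (small k) vs. an impulsive chooser (large k):
# 50 now vs. 100 in 30 days
p_patient = p_choose_delayed(50, 100, 30, k=0.01)
p_impulsive = p_choose_delayed(50, 100, 30, k=0.5)
```

A higher estimated k after positive-affect priming in the "immediate" condition would correspond to the reported shift toward immediate options.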
Affiliation(s)
- Lei Zhou
- School of Management, Guangdong University of Technology, Guangzhou, China
- Tong Zou
- School of Management, Guangdong University of Technology, Guangzhou, China
- Lei Zhang
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Jiao-Min Lin
- School of Management, Guangdong University of Technology, Guangzhou, China
- Yang-Yang Zhang
- School of Psychology, Shaanxi Normal University, Xi’an, China
- Zhu-Yuan Liang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
15
La Malva P, Ceccato I, Di Crosta A, Marin A, Fasolo M, Palumbo R, Mammarella N, Palumbo R, Di Domenico A. Updating the Chieti Affective Action Videos database with older adults. Sci Data 2021; 8:272. [PMID: 34671064] [PMCID: PMC8528804] [DOI: 10.1038/s41597-021-01053-z]
Abstract
Validation of the Chieti Affective Action Videos (CAAV) database was replicated with a sample of older adults (age range 65-93). When designing experimental studies of emotions, it is crucial to take into consideration the differences in emotional processing between young and older adults. Therefore, the main goal of the present study was to provide an appropriate dataset for the use of the CAAV in aging research. For this reason, the CAAV administration and data collection methodology were faithfully replicated in a sample of 302 older adults. All 360 standardized stimuli were evaluated on the emotional dimensions of valence and arousal. The CAAV validation in an older adult population increases the potential uses of this innovative tool. The present validation supports the use of the CAAV database in future experimental studies on cognitive functions in healthy and pathological aging.
Affiliation(s)
- Pasquale La Malva
- Department of Psychological, Health and Territorial Sciences (DiSPUTer), University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Irene Ceccato
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Behavioral Economics and Neuroeconomics, Center of Advanced Studies and Technology (CAST), G. d'Annunzio University of Chieti-Pescara, Chieti, 66100, Italy
- Adolfo Di Crosta
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Anna Marin
- Department of Neurology, Boston University, 150 South Huntington Avenue, Boston, MA, 02130, USA
- Mirco Fasolo
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Riccardo Palumbo
- Department of Neuroscience, Imaging and Clinical Science, University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Behavioral Economics and Neuroeconomics, Center of Advanced Studies and Technology (CAST), G. d'Annunzio University of Chieti-Pescara, Chieti, 66100, Italy
- Nicola Mammarella
- Department of Psychological, Health and Territorial Sciences (DiSPUTer), University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Rocco Palumbo
- Department of Psychological, Health and Territorial Sciences (DiSPUTer), University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
- Alberto Di Domenico
- Department of Psychological, Health and Territorial Sciences (DiSPUTer), University G. d'Annunzio - Via dei Vestini, 31 - 66100, Chieti, Italy
16
|
Kang J, Kang S, Jeong E, Kim EH. Age and Cultural Differences in Recognitions of Emotions from Masked Faces among Koreans and Americans. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph181910555. [PMID: 34639857 PMCID: PMC8507777 DOI: 10.3390/ijerph181910555] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 09/27/2021] [Accepted: 10/04/2021] [Indexed: 11/16/2022]
Abstract
This study investigates age and cultural differences in the negative effects of senders wearing masks on receivers' ability to read the senders' facially expressed emotions in interpersonal interactions. An online experiment was conducted with Koreans and Americans aged over 20 years. Based on sampling quotas by nationality, age group, and gender, Korean (n = 240) and American (n = 273) participants were recruited from the panel members of a Korean research company and from Amazon's Mechanical Turk, via email and the website, respectively. The participants played the receiver role, inferring the emotions facially expressed by senders shown in photos, judged both without and with masks. The results revealed that the senders' wearing masks reduced the readability of the senders' facially expressed anger more among participants aged 30-49 years than among participants aged 20-29 years. Masks decreased the readability of fear more for participants in their 50s than for participants in their 20s. When the senders wore masks, the readability of happiness dropped more among participants aged over 60 years than among participants aged 20-49 years. When senders wore masks, American participants' readability of disgust, fear, sadness, and happiness declined more than Korean participants' readability of those emotions. The implications and limitations of these findings are discussed.
|
17
|
Long F, Zhao S, Wei X, Ng SC, Ni X, Chi A, Fang P, Zeng W, Wei B. Positive and Negative Emotion Classification Based on Multi-channel. Front Behav Neurosci 2021; 15:720451. [PMID: 34512288 PMCID: PMC8428531 DOI: 10.3389/fnbeh.2021.720451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 07/29/2021] [Indexed: 11/13/2022] Open
Abstract
In this study, EEG features of different emotions were extracted from multi-channel and forehead-channel recordings. EEG signals from 26 subjects were collected using the emotional-video-evoked method. The results show that the frequency-band energy ratio and differential entropy can effectively classify positive and negative emotions, with the best performance achieved by an SVM classifier. Using only the forehead-channel signals, the highest classification accuracy reaches 66%; using data from all channels, it reaches 82%. After channel selection, the best model of this study achieves an accuracy above 86%.
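The pipeline this abstract describes (per-band differential entropy features fed to an SVM) can be sketched as follows. This is not the authors' code: the sampling rate, band definitions, channel count, and the synthetic stand-in data are all illustrative assumptions. For a roughly Gaussian band-limited signal, differential entropy reduces to 0.5·ln(2πe·variance).

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

FS = 200  # sampling rate in Hz (illustrative; not stated in the abstract)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def differential_entropy(x):
    # For an approximately Gaussian signal, DE = 0.5 * ln(2*pi*e*variance)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def band_features(trial):
    # trial: (n_channels, n_samples); one DE value per band per channel
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
        for ch in trial:
            feats.append(differential_entropy(filtfilt(b, a, ch)))
    return np.array(feats)

# Synthetic stand-in for video-evoked EEG: 60 trials, 4 channels, 2 s each,
# with signal amplitude weakly dependent on the (random) emotion label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)
X = np.array([band_features(rng.standard_normal((4, 2 * FS)) * (1 + 0.3 * lab))
              for lab in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

With real data, channel selection (as in the study) would amount to restricting `trial` to a chosen subset of rows before feature extraction.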
Affiliation(s)
- Fangfang Long
  - Department of Psychology, Nanjing University, Nanjing, China
- Shanguang Zhao
  - Centre for Sport and Exercise Sciences, University of Malaya, Kuala Lumpur, Malaysia
- Xin Wei
  - Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China
  - Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
- Siew-Cheok Ng
  - Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
- Xiaoli Ni
  - Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China
- Aiping Chi
  - School of Sports, Shaanxi Normal University, Xi'an, China
- Peng Fang
  - Department of the Psychology of Military Medicine, Air Force Medical University, Xi'an, China
- Weigang Zeng
  - Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
- Bokun Wei
  - Xi'an Middle School of Shaanxi Province, Xi'an, China
|
18
|
Jeong D, Han SH, Jeong DY, Kwon K, Choi S. Investigating 4D movie audiences’ emotional responses to motion effects and empathy. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2021.106797] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
19
|
Wang T, Zhao Y, Xu Y, Zhu Z. Comparison of response to Chinese and Western videos of mental-health-related emotions in a representative Chinese sample. PeerJ 2021; 9:e10440. [PMID: 33552708 PMCID: PMC7821762 DOI: 10.7717/peerj.10440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 11/06/2020] [Indexed: 11/20/2022] Open
Abstract
Background: Emotion plays an important role in mental health. Studying the relationship between emotion and mental health requires effective emotion-eliciting materials. Most standardized emotional stimuli, however, are based on Western content and have not been validated in other cultures. The present study compared emotional responses to standard Western videos with responses to videos of Chinese content in a large representative Chinese sample. The effects of content source (film vs. real-life) and delivery medium (online vs. offline), as well as the effects of demographic factors, were investigated. Participants' depression level was assessed to test the potential use of the videos in mental health research. Methods: Top-ranked videos of basic emotions commonly implicated in mental health (happiness, sadness, anger, and fear) were chosen from a widely used Western video database. Twelve corresponding Chinese videos (film or real-life) were selected, with three clips for each emotion. In addition, three Chinese videos of the emotion "awe" were included because of the growing research attention to its role in promoting mental health. A large representative sample (N = 348) was recruited either online or offline, and each participant viewed and rated his or her emotional reaction to all videos. Results: All Chinese and Western videos effectively elicited the target emotions. The intensity of the emotional response was generally higher for Chinese videos than for Western videos. Film and real-life videos produced mixed results in terms of the intensity of the elicited emotions. There was a small effect of delivery medium: one video was rated as more intense when watched online than when watched in the laboratory. Older adults were more emotionally reactive than young people in general, but the latter showed more differentiated responses to Chinese versus Western videos. People with higher education levels responded less to happy videos. Finally, emotional reactivity to anger and awe was negatively related to depression level, which is partially consistent with the emotional-context-insensitivity (ECI) hypothesis of depression. Conclusions: The results suggest that both Western and Chinese videos can reliably elicit emotion in Chinese people, but videos with local content are generally more effective. This set of videos can be a useful tool for studying emotion and mental health in the Chinese cultural context.
Affiliation(s)
- Ting Wang
  - Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yitong Zhao
  - Department of Psychology, Wake Forest University, Winston-Salem, NC, United States of America
- Yifeng Xu
  - Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhuoying Zhu
  - Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
|
20
|
Blanco-Ruiz M, Sainz-de-Baranda C, Gutiérrez-Martín L, Romero-Perales E, López-Ongil C. Emotion Elicitation Under Audiovisual Stimuli Reception: Should Artificial Intelligence Consider the Gender Perspective? INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17228534. [PMID: 33213064 PMCID: PMC7698584 DOI: 10.3390/ijerph17228534] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 11/02/2020] [Accepted: 11/10/2020] [Indexed: 11/16/2022]
Abstract
Identifying the emotions triggered by stimuli from different sources can be applied in automatic systems that help, relieve, or protect vulnerable groups of the population. Selecting the best stimuli allows these artificial-intelligence-based systems to be trained more efficiently and precisely, so that they can discern risky situations, characterized by panic or fear, in a clear and accurate way. The presented research study has produced a dataset of audiovisual stimuli (the UC3M4Safety database) that triggers a complete range of emotions, with a high level of agreement, under both a discrete emotional categorization and a quantitative categorization in the Pleasure-Arousal-Dominance affective space. This database is suitable for the machine learning algorithms contained in such automatic systems. Furthermore, this work analyses the effect of gender on emotion elicitation under audiovisual stimuli, which can help to better design the final solution. In particular, the focus is on emotional responses to audiovisual stimuli reproducing situations experienced by women, such as gender-based violence. A statistical study of gender differences in emotional response was carried out on 1332 participants (811 women and 521 men); the average number of responses per video is around 84 (SD = 22). Data analysis was carried out with RStudio®.
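At its core, the gender comparison described here is a test of mean differences in emotional-response ratings between two groups. The authors worked in RStudio; a minimal Python equivalent, using synthetic ratings (the rating scale, means, and spread below are illustrative assumptions, not the UC3M4Safety data, though the group sizes mirror the study's 811 women and 521 men):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic arousal ratings for one video on a 1-9 scale,
# drawn separately for the two groups.
women = np.clip(rng.normal(6.2, 1.5, 811), 1, 9)
men = np.clip(rng.normal(5.8, 1.5, 521), 1, 9)

# Welch's t-test: does the mean response differ by gender?
t, p = stats.ttest_ind(women, men, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Welch's variant is used because the two groups differ in size and need not share a variance.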
Affiliation(s)
- Marian Blanco-Ruiz
  - University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain
  - Department of Communication Sciences and Sociology, Faculty of Communication Sciences, Universidad Rey Juan Carlos, 28943 Fuenlabrada, Spain
- Clara Sainz-de-Baranda
  - University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain
  - Department of Communication and Media Studies, Faculty of Humanities, Communication and Library Science, Universidad Carlos III de Madrid, 28903 Getafe, Madrid, Spain
  - Correspondence: ; Tel.: +34-916249737
- Laura Gutiérrez-Martín
  - University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain
  - Electronic Technology Department, School of Engineering, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain
- Elena Romero-Perales
  - University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain
  - Electronic Technology Department, School of Engineering, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain
- Celia López-Ongil
  - University Institute on Gender Studies, Universidad Carlos III de Madrid, 28903 Getafe, Spain
  - Electronic Technology Department, School of Engineering, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain
|
21
|
Abstract
Background: In this study we measured the affective appraisal of sounds and video clips using a newly developed graphical self-report tool: the EmojiGrid. The EmojiGrid is a square grid, labeled with emoji that express different degrees of valence and arousal. Users rate the valence and arousal of a given stimulus by simply clicking on the grid. Methods: In Experiment I, observers (N = 150, 74 males, mean age = 25.2 ± 3.5) used the EmojiGrid to rate their affective appraisal of 77 validated sound clips from nine semantic categories, covering a large area of the affective space. In Experiment II, observers (N = 60, 32 males, mean age = 24.5 ± 3.3) used the EmojiGrid to rate their affective appraisal of 50 validated film fragments varying in positive and negative affect (20 positive, 20 negative, 10 neutral). Results: For both sound and video, the agreement between the mean ratings obtained with the EmojiGrid and those obtained with alternative, validated affective rating tools in previous studies is excellent for valence and good for arousal. Our results also show the typical U-shaped relation between mean valence and arousal that is commonly observed for affective sensory stimuli, both for sound and video. Conclusions: We conclude that the EmojiGrid can be used as an affective self-report tool for the assessment of sound- and video-evoked emotions.
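The EmojiGrid reduces a rating to a single click on a square grid, with valence along one axis and arousal along the other. A minimal sketch of that click-to-rating mapping (the pixel dimensions, axis orientation, and the 1-9 output scale are illustrative assumptions, not taken from the paper):

```python
def click_to_rating(x_px, y_px, grid_size=500, scale=(1.0, 9.0)):
    """Map a click at (x_px, y_px) on a square grid image to a
    (valence, arousal) pair.

    x runs left (negative valence) to right (positive valence);
    y runs bottom (low arousal) to top (high arousal). Screen
    coordinates grow downward, so y is flipped.
    """
    lo, hi = scale
    if not (0 <= x_px <= grid_size and 0 <= y_px <= grid_size):
        raise ValueError("click outside the grid")
    valence = lo + (hi - lo) * (x_px / grid_size)
    arousal = lo + (hi - lo) * (1 - y_px / grid_size)
    return valence, arousal

# A click at the exact centre reads as a neutral, mid-arousal rating
print(click_to_rating(250, 250))  # (5.0, 5.0)
```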
Affiliation(s)
- Alexander Toet
  - Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands
  - Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, 3584 CS, The Netherlands
- Jan B. F. van Erp
  - Perceptual and Cognitive Systems, TNO, Soesterberg, 3769DE, The Netherlands
  - Research Group Human Media Interaction, University of Twente, Enschede, 7522 NH, The Netherlands
|
23
|
Larradet F, Niewiadomski R, Barresi G, Caldwell DG, Mattos LS. Toward Emotion Recognition From Physiological Signals in the Wild: Approaching the Methodological Issues in Real-Life Data Collection. Front Psychol 2020; 11:1111. [PMID: 32760305 PMCID: PMC7374761 DOI: 10.3389/fpsyg.2020.01111] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 04/30/2020] [Indexed: 12/26/2022] Open
Abstract
Emotion, mood, and stress recognition (EMSR) has been studied in laboratory settings for decades. In particular, physiological signals are widely used to detect and classify affective states in lab conditions. However, physiological reactions to emotional stimuli have been found to differ between laboratory and natural settings. Thanks to recent technological progress (e.g., in wearables), the creation of EMSR systems for a large number of consumers during their everyday activities is increasingly possible. Therefore, datasets created in the wild are needed to ensure the validity and exploitability of EMSR models for real-life applications. In this paper, we initially present common techniques used in laboratory settings to induce emotions for the purpose of physiological dataset creation. Next, the advantages and challenges of data collection in the wild are discussed. To assess the applicability of existing datasets to real-life applications, we propose a set of categories to guide and compare, at a glance, the different methodologies used by researchers to collect such data. For this purpose, we also introduce a visual tool called the Graphical Assessment of Real-life Application-Focused Emotional Dataset (GARAFED). In the last part of the paper, we apply the proposed tool to compare existing physiological datasets for EMSR in the wild and to show possible improvements and future directions of research. We wish for this paper and GARAFED to be used as guidelines for researchers and developers who aim to collect affect-related data for real-life EMSR-based applications.
Affiliation(s)
- Fanny Larradet
  - Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Radoslaw Niewiadomski
  - Contact Unit, Istituto Italiano di Tecnologia, Genoa, Italy
  - Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy
- Giacinto Barresi
  - Rehab Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
|
24
|
The Chieti Affective Action Videos database, a resource for the study of emotions in psychology. Sci Data 2020; 7:32. [PMID: 31964894 PMCID: PMC6972777 DOI: 10.1038/s41597-020-0366-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Accepted: 12/18/2019] [Indexed: 11/08/2022] Open
Abstract
The Chieti Affective Action Videos (CAAV) is a new database designed for the experimental study of emotions in psychology. The main goal of the CAAV is to provide a wide range of standardized stimuli based on two emotional dimensions: valence and arousal. The CAAV is the first database to present emotional stimuli through videos of actions filmed and developed specifically for experimental research. A total of 444 young adults were recruited to evaluate the database, which consists of a sub-set of 90 actions filmed in four versions, for a total of 360 videos. The four versions differ in the gender of the main actor (male or female) and the perspective from which each action was shot (first-person or third-person). The CAAV validation procedure yielded a distribution of stimuli across valence and arousal indices. The material provided by the CAAV can be used in future experimental studies investigating the role of emotions, perception, attention, and memory, in addition to the study of differences in gender and perspective taking.
|
25
|
HRV evidence for the improvement of emotion regulation in university students with depression tendency by working memory training. ACTA PSYCHOLOGICA SINICA 2019. [DOI: 10.3724/sp.j.1041.2019.00648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
26
|
Abstract
Film clips are widely used in emotion research due to their relatively high ecological validity. Although researchers have established various film clip sets for different cultures, the few that exist related to Chinese culture do not adequately address positive emotions. The main purposes of the present study were to establish a standardised database of Chinese emotional film clips that could elicit more categories of reported positive emotions compared to the existing databases and to expand the available film clips that can be used as neutral materials. Two experiments were conducted to construct the database. In experiment 1, 111 film clips were selected from more than one thousand Chinese movies for preliminary screening. After 315 participants viewed and evaluated these film clips, 39 excerpts were selected for further validation. In experiment 2, 147 participants watched and rated these 39 film clips, as well as another 8 excerpts chosen from the existing databases, to compare their validity. Eventually, 22 film excerpts that successfully evoked three positive emotions (joy, amusement, and tenderness), four negative emotions (moral disgust, anger, fear, and sadness), and neutrality formed the standardised database of Chinese emotional film clips.
Affiliation(s)
- Yan Ge
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, People's Republic of China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Guozhen Zhao
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, People's Republic of China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Yulin Zhang
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, People's Republic of China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Rebecca J. Houston
  - Department of Psychology, Health and Addictions Research Center, Rochester Institute of Technology, Rochester, NY, USA
- Jinjing Song
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, People's Republic of China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
|