1. Krumpholz C, Quigley C, Fusani L, Leder H. Vienna Talking Faces (ViTaFa): A multimodal person database with synchronized videos, images, and voices. Behav Res Methods 2024; 56:2923-2940. PMID: 37950115; PMCID: PMC11133183; DOI: 10.3758/s13428-023-02264-5.
Abstract
Social perception relies on different sensory channels, including vision and audition, which are specifically important for judgements of appearance. Therefore, to understand multimodal integration in person perception, it is important to study both face and voice in a synchronized form. We introduce the Vienna Talking Faces (ViTaFa) database, a high-quality audiovisual database focused on multimodal research of social perception. ViTaFa includes different stimulus modalities: audiovisual dynamic, visual dynamic, visual static, and auditory dynamic. Stimuli were recorded and edited under highly standardized conditions and were collected from 40 real individuals, and the sample matches typical student samples in psychological research (young individuals aged 18 to 45). Stimuli include sequences of various types of spoken content from each person, including German sentences, words, reading passages, vowels, and language-unrelated pseudo-words. Recordings were made with different emotional expressions (neutral, happy, angry, sad, and flirtatious). ViTaFa is freely accessible for academic non-profit research after signing a confidentiality agreement form via https://osf.io/9jtzx/ and stands out from other databases due to its multimodal format, high quality, and comprehensive quantification of stimulus features and human judgements related to attractiveness. Additionally, over 200 human raters validated emotion expression of the stimuli. In summary, ViTaFa provides a valuable resource for investigating audiovisual signals of social perception.
Affiliation(s)
- Christina Krumpholz
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Austria
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Cliodhna Quigley
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Leonida Fusani
- Konrad Lorenz Institute of Ethology, University of Veterinary Medicine, Vienna, Austria
- Department of Behavioural and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Helmut Leder
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria.
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria.
2. Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. PMID: 36018484; DOI: 10.3758/s13428-022-01951-z.
Abstract
Faces convey a wide range of information, including one's identity, and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total number of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
- Bilge Sayim
- SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France
- Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland
- Nihan Alp
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey.
3. Hosseini Houripasand M, Sabaghypour S, Farkhondeh Tale Navi F, Nazari MA. Time distortions induced by high-arousing emotional compared to low-arousing neutral faces: an event-related potential study. Psychol Res 2023; 87:1836-1847. PMID: 36607427; DOI: 10.1007/s00426-022-01789-2.
Abstract
Emotions influence our perception of time. Arousal and valence are considered different dimensions of emotions that might interactively affect the perception of time. In the present study, we aimed to investigate the possible time distortions induced by emotional (happy/angry) high-arousing faces compared to neutral, low-arousing faces. Previous work suggested that emotional stimuli enhance the amplitudes of several posterior components, such as the Early Posterior Negativity (EPN) and the Late Positive Potential (LPP). These components reflect several stages of emotional processing. To this end, we conducted an event-related potential (ERP) study with a temporal bisection task. We hypothesized that the partial dissociation of these ERP components would shed more light on the possible effects of valence and arousal on emotional face processing and their consequent effects on behavioral timing. The behavioral results demonstrated a significant effect for emotional stimuli, as happy faces were overestimated relative to angry faces. Our results also indicated higher temporal sensitivity for angry faces. The analyzed components (EPN and LPP) provided further insights into the qualitative differences between stimuli. Finally, the results were interpreted considering the internal clock model and two-stage processing of emotional stimuli.
Affiliation(s)
- Saied Sabaghypour
- Department of Cognitive Neuroscience, University of Tabriz, Tabriz, Iran
- Mohammad Ali Nazari
- Department of Neuroscience, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences (IUMS), Hemmat Highway, Tehran, 144961-4535, Iran.
4. Pasqualette L, Klinger S, Kulke L. Development and validation of a natural dynamic facial expression stimulus set. PLoS One 2023; 18:e0287049. PMID: 37379278; DOI: 10.1371/journal.pone.0287049.
Abstract
Emotion research commonly uses either controlled and standardised pictures or natural video stimuli to measure participants' reactions to emotional content. Natural stimulus materials can be beneficial; however, certain measures, such as neuroscientific methods, require temporally and visually controlled stimulus material. The current study aimed to create and validate video stimuli in which a model displays positive, neutral and negative expressions. These stimuli were kept as natural as possible while editing timing and visual features to make them suitable for neuroscientific research (e.g., EEG). The stimuli were successfully controlled regarding their features, and the validation studies show that participants reliably classify the displayed expression correctly and perceive it as genuine. In conclusion, we present a motion stimulus set that is perceived as natural and that is suitable for neuroscientific research, as well as a pipeline describing successful editing methods for controlling natural stimuli.
Affiliation(s)
- Laura Pasqualette
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
- Sara Klinger
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Louisa Kulke
- Neurocognitive Developmental Psychology, Psychology Department, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Bavaria, Germany
- Developmental and Educational Psychology Department, University of Bremen, Bremen, Germany
5. Guedes D, Prada M, Garrido MV, Lamy E. The taste & affect music database: Subjective rating norms for a new set of musical stimuli. Behav Res Methods 2023; 55:1121-1140. PMID: 35581438; DOI: 10.3758/s13428-022-01862-z.
Abstract
Music is a ubiquitous stimulus known to influence human affect, cognition, and behavior. In the context of eating behavior, music has been associated with food choice, intake and, more recently, taste perception. In the latter case, the literature has reported consistent patterns of association between auditory and gustatory attributes, suggesting that individuals reliably recognize taste attributes in musical stimuli. This study presents subjective norms for a new set of 100 instrumental music stimuli, including basic taste correspondences (sweetness, bitterness, saltiness, sourness), emotions (joy, anger, sadness, fear, surprise), familiarity, valence, and arousal. This stimulus set was evaluated by 329 individuals (83.3% women; mean age = 28.12 years, SD = 12.14), online (n = 246) and in the lab (n = 83). Each participant evaluated a random subsample of 25 soundtracks and responded to self-report measures of mood and taste preferences, as well as the Goldsmiths Musical Sophistication Index (Gold-MSI). Each soundtrack was evaluated by 68 to 97 participants (Mdn = 83), and descriptive results (means, standard deviations, and confidence intervals) are available as supplemental material at osf.io/2cqa5. Significant correlations between taste correspondences and emotional/affective dimensions were observed (e.g., between sweetness ratings and pleasant emotions). Sex, age, musical sophistication, and basic taste preferences presented few, small to medium associations with the evaluations of the stimuli. Overall, these results suggest that the new Taste & Affect Music Database is a relevant resource for research and intervention with musical stimuli in the context of crossmodal taste perception and other affective, cognitive, and behavioral domains.
Affiliation(s)
- David Guedes
- Iscte - Instituto Universitário de Lisboa, CIS_Iscte, Lisboa, Portugal.
- MED - Mediterranean Institute for Agriculture, Environment and Development & CHANGE - Global Change and Sustainability Institute, University of Évora, Évora, Portugal.
- Marília Prada
- Iscte - Instituto Universitário de Lisboa, CIS_Iscte, Lisboa, Portugal
- Elsa Lamy
- MED - Mediterranean Institute for Agriculture, Environment and Development & CHANGE - Global Change and Sustainability Institute, University of Évora, Évora, Portugal
6. Fabrício DDM, Ferreira BLC, Maximiano-Barreto MA, Muniz M, Chagas MHN. Construction of face databases for tasks to recognize facial expressions of basic emotions: a systematic review. Dement Neuropsychol 2022; 16:388-410. DOI: 10.1590/1980-5764-dn-2022-0039.
Abstract
Recognizing others' emotions is an important skill for the social context that can be modulated by variables such as gender, age, and race. A number of studies seek to elaborate specific face databases to assess the recognition of basic emotions in different contexts. Objectives: This systematic review sought to gather these studies, describing and comparing the methodologies used in their elaboration. Methods: The databases used to select the articles were the following: PubMed, Web of Science, PsycInfo, and Scopus. The following search string was used: "Facial expression database OR Stimulus set AND development OR Validation." Results: A total of 36 articles showed that most of the studies used actors to express the emotions, which were elicited from specific situations to generate the most spontaneous emotion possible. The databases were mainly composed of colorful and static stimuli. In addition, most of the studies sought to establish and describe patterns for recording the stimuli, such as the color of the garments used and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to the methodological heterogeneity among the studies. Nevertheless, we describe their patterns, contributing to the planning of new research studies that seek to create databases for new contexts.
Affiliation(s)
- Monalisa Muniz
- Universidade Federal de São Carlos, Brazil
- Marcos Hortes Nisihara Chagas
- Universidade Federal de São Carlos, Brazil; Universidade de São Paulo, Brazil; Instituto Bairral de Psiquiatria, Brazil
7. García AS, Fernández-Sotos P, González P, Navarro E, Rodriguez-Jimenez R, Fernández-Caballero A. Behavioral intention of mental health practitioners toward the adoption of virtual humans in affect recognition training. Front Psychol 2022; 13:934880. PMCID: PMC9600723; DOI: 10.3389/fpsyg.2022.934880.
Abstract
This paper explores the key factors influencing mental health professionals' behavioral intention to adopt virtual humans as a means of affect recognition training. Therapies targeting social cognition deficits are in high demand, given that these deficits are related to a loss of functioning and quality of life in several neuropsychiatric conditions such as schizophrenia, autism spectrum disorders, affective disorders, and acquired brain injury. Therefore, developing new therapies would greatly improve the quality of life of this large cohort of patients. A questionnaire based on the second revision of the Unified Theory of Acceptance and Use of Technology (UTAUT2) was used for this study. One hundred and twenty-four mental health professionals responded to the questionnaire after viewing a video presentation of the system. The results confirmed that mental health professionals showed a positive intention to use virtual reality tools to train affect recognition, as they allow manipulation of social interaction with patients. Further studies should be conducted with therapists from other countries to reach broader conclusions.
Affiliation(s)
- Arturo S. García
- Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Patricia Fernández-Sotos
- Servicio de Salud Mental, Complejo Hospitalario Universitario de Albacete, Albacete, Spain
- Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Pascual González
- Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Elena Navarro
- Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Roberto Rodriguez-Jimenez
- Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Cognición y Psicosis, Area de Neurociencias y Salud Mental, Instituto de Investigación Sanitaria Hospital 12 de Octubre (imas12), Madrid, Spain
- CogPsy-Group, Universidad Complutense de Madrid, Madrid, Spain
- Antonio Fernández-Caballero
- Unidad Multidisciplinar de Investigación de la Neurocognición y Emoción en Entornos Virtuales y Reales, Instituto de Investigación en Informática de Albacete, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, Albacete, Spain
- Biomedical Research Networking Center in Mental Health (CIBERSAM), Madrid, Spain
- Correspondence: Antonio Fernández-Caballero
8. Norms for pictures of proper names: contrasting famous people and well-known places in younger and older adults. Behav Res Methods 2022; 55:1244-1258. PMID: 35622238; DOI: 10.3758/s13428-022-01823-6.
Abstract
Proper names comprise a class of labels that arbitrarily nominate specific entities, such as people and places. Compared to common nouns, retrieving proper names is more challenging. Thus, they constitute good alternative semantic categories for psycholinguistic and neurocognitive research and intervention. The ability to retrieve proper names is known to decrease with aging. Likewise, their retrieval may differ across their different categories (e.g., people and places) given their specific associated knowledge. Therefore, proper names' stimuli require careful selection due to their high dependence on prior experiences. Notably, normative datasets for pictures of proper names are scarce and hardly have considered the influence of aging and categories. The current study established culturally adapted norms for proper names' pictures (N = 80) from an adult sample (N = 107), in psycholinguistic measures (naming and categorization scores) and evaluative dimensions (fame, familiarity, distinctiveness, arousal, and representational quality). These norms were contrasted across different categories (famous people and well-known places) and age groups (younger and older adults). Additionally, the correlations between all variables were examined. Proper names' pictures were named and categorized above chance and overall rated as familiar, famous, distinctive, and of high representational quality. Age effects were observed across all variables, except familiarity. Category effects were occasionally observed. Finally, the correlations between the psycholinguistic measures and all rated dimensions suggest the relevance of controlling for these dimensions when assessing naming abilities. The current norms provide a relevant aging-adapted dataset that is publicly available for research and intervention purposes.
9. Heffer N, Karl A, Jicol C, Ashwin C, Petrini K. Anxiety biases audiovisual processing of social signals. Behav Brain Res 2021; 410:113346. PMID: 33964354; DOI: 10.1016/j.bbr.2021.113346.
Abstract
In everyday life, information from multiple senses is integrated for a holistic understanding of emotion. Despite evidence of atypical multisensory perception in populations with socio-emotional difficulties (e.g., autistic individuals), little research to date has examined how anxiety impacts on multisensory emotion perception. Here we examined whether the level of trait anxiety in a sample of 56 healthy adults affected audiovisual processing of emotion for three types of stimuli: dynamic faces and voices, body motion and dialogues of two interacting agents, and circles and tones. Participants judged emotion from four types of displays - audio-only, visual-only, audiovisual congruent (e.g., angry face and angry voice) and audiovisual incongruent (e.g., angry face and happy voice) - as happy or angry, as quickly as possible. In one task, participants based their emotional judgements on information in one modality while ignoring information in the other, and in a second task they based their judgements on their overall impressions of the stimuli. The results showed that the higher trait anxiety group prioritized the processing of angry cues when combining faces and voices that portrayed conflicting emotions. Individuals in this group were also more likely to benefit from combining congruent face and voice cues when recognizing anger. The multisensory effects of anxiety were found to be independent of the effects of autistic traits. The observed effects of trait anxiety on multisensory processing of emotion may serve to maintain anxiety by increasing sensitivity to social-threat and thus contributing to interpersonal difficulties.
Affiliation(s)
- Naomi Heffer
- University of Bath, Department of Psychology, United Kingdom.
- Anke Karl
- University of Exeter, Mood Disorders Centre, United Kingdom
- Crescent Jicol
- University of Bath, Department of Psychology, United Kingdom
- Chris Ashwin
- University of Bath, Department of Psychology, United Kingdom
- Karin Petrini
- University of Bath, Department of Psychology, United Kingdom
10. Camilo C, Vaz Garrido M, Calheiros MM. Recognizing children's emotions in child abuse and neglect. Aggress Behav 2021; 47:161-172. PMID: 33164223; DOI: 10.1002/ab.21935.
Abstract
Past research has suggested that parents' ability to recognize their children's emotions is associated with an enhanced quality of parent-child interactions and appropriateness of parental caregiving behavior. Although this association has also been examined in abusive and neglectful parents, the results are mixed and do not adequately address child neglect. Based on the Social Information Processing model of child abuse and neglect, we examined the association between mothers' ability to recognize children's emotions and self- and professionals-reported child abuse and neglect. The ability to recognize children's emotions was assessed with an implicit valence classification task and an emotion labeling task. A convenience sample of 166 mothers (78 with at least one child referred to Child Protection Services) completed the tasks. Child abuse and neglect were measured with self-report and professionals-report instruments. The moderating role of mothers' intellectual functioning and socioeconomic status were also examined. Results revealed that abusive mothers performed more poorly on the negative emotions recognition task, while neglectful mothers demonstrated a lower overall ability in recognizing children's emotions. When classifying the valence of emotions, mothers who obtained higher scores on child neglect presented a higher positivity bias particularly when their scores in measures of intellectual functioning were low. There was no moderation effect for socioeconomic status. Moreover, the results for child abuse were mainly observed with self-report measures, while for child neglect, they predominantly emerged with professionals-report. Our findings highlight the important contribution of the social information processing model in the context of child maltreatment, with implications for prevention and intervention addressed.
Affiliation(s)
- Maria Manuela Calheiros
- Iscte - Instituto Universitário de Lisboa, Lisboa, Portugal
- Faculdade de Psicologia, CICPSI, Universidade de Lisboa, Lisboa, Portugal
11.
Abstract
Pictures are often used as stimuli in several fields, such as psychology and neuroscience. However, co-occurring image-related properties might impact their processing, emphasizing the importance of validating such materials to guarantee the quality of research and professional practices. This is particularly relevant for pictures of common items because of their wide applicability potential. Normative studies have already been conducted to create and validate such pictures, yet most of them focused on stimuli without naturalistic elements (e.g., line drawings). Norms for real-world pictures of common items are rare, and their normative examination does not always simultaneously assess affective, semantic and perceptive dimensions, namely in the Portuguese context. Real-world pictures constitute pictorial representations of the world with realistic details (e.g., natural color or position), thus improving their ecological validity and their suitability for empirical studies or intervention purposes. Consequently, establishing norms for real-world pictures is essential for exploring their ecological richness and for uncovering their impact across several relevant dimensions. In this study, we established norms for 596 real-world pictures of common items (e.g., tomato, drum) selected from existing databases and distributed into 12 categories. The pictures were evaluated on nine dimensions by a Portuguese sample. The results present the norms by item and by dimension, their correlations, and cross-cultural analyses. RealPic is a culturally based dataset that offers systematic and flexible standards and is suitable for selecting stimuli while controlling for confounding effects in empirical tasks and interventional applications.
12. Souza C, Garrido MV, Carmo JC. A Systematic Review of Normative Studies Using Images of Common Objects. Front Psychol 2021; 11:573314. PMID: 33424684; PMCID: PMC7793811; DOI: 10.3389/fpsyg.2020.573314.
Abstract
Common objects comprise living and non-living things people interact with in their daily lives. Images depicting common objects are extensively used in different fields of research and intervention, such as linguistics, psychology, and education. Nevertheless, their adequate use requires the consideration of several factors (e.g., item differences, cultural context and confounding correlated variables), and careful validation procedures. The current study presents a systematic review of the available published norms for images of common objects. A systematic search using PRISMA guidelines indicated that despite their extensive use, the production of norms for such stimuli with adult populations is quite limited (N = 55), particularly for more ecological images, such as photos (N = 14). Among the several dimensions in which the items were assessed, the most commonly reported in our sample were familiarity, visual complexity and name agreement, illustrating some consistency across the reported dimensions while also indicating the limited examination of other potentially relevant dimensions for image processing. The lack of normative studies simultaneously examining affective, perceptive and semantic dimensions was also documented. The number of such normative studies has been increasing in recent years, they have been published in relevant peer-reviewed journals, and their datasets and norms have been complying with current open science practices. Nevertheless, they are still scarcely cited and replicated in different linguistic and cultural contexts. The current study brings important theoretical contributions by characterizing images of common objects as stimuli, together with their culturally based norms, while highlighting several features that are likely to be relevant for future stimulus selection and evaluation procedures. The systematic scrutiny of these normative studies is likely to stimulate the production of new, robust and contextually relevant normative datasets and to provide tools for enhancing the quality of future research and intervention.
Affiliation(s)
- Cristiane Souza
- Iscte-Instituto Universitário de Lisboa, Cis-IUL, Lisbon, Portugal
- Joana C Carmo
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
13. Bowdring MA, Sayette MA, Girard JM, Woods WC. In the Eye of the Beholder: A Comprehensive Analysis of Stimulus Type, Perceiver, and Target in Physical Attractiveness Perceptions. J Nonverbal Behav 2021. DOI: 10.1007/s10919-020-00350-2.
14. Dados normativos de um conjunto de faces do Karolinska Directed Emotional Faces em uma amostra brasileira [Normative data for a set of faces from the Karolinska Directed Emotional Faces in a Brazilian sample]. Psico 2020. DOI: 10.15448/1980-8623.2020.3.34083.
Abstract
The aim of this study was to obtain normative data for a set of faces from the Karolinska Directed Emotional Faces (KDEF) in a Brazilian sample. A non-probabilistic (convenience) sample of 100 participants from the city of João Pessoa-PB was used, aged between 18 and 62 years (M = 21.6; SD = 6.2) and mostly female (76%). The results showed a mean recognition accuracy of 76.2%: expressions of happiness (94.7%) and surprise (90.3%) were the most easily identified emotions, and fear (40.65%) the most difficult. Regarding intensity and valence measures, disgust, followed by surprise, received the most intense ratings, and happiness was the only emotion with high positive valence. These findings were very similar to those reported in previous research, providing subjective classification norms better suited to the characteristics of the Brazilian population.
15
Mendonça R, Garrido MV, Semin GR. Social Inferences From Faces as a Function of the Left-to-Right Movement Continuum. Front Psychol 2020; 11:1488. [PMID: 32765346 PMCID: PMC7378970 DOI: 10.3389/fpsyg.2020.01488] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Accepted: 06/04/2020] [Indexed: 01/21/2023] Open
Abstract
We examined whether reading and writing habits known to drive agency perception also shape the attribution of other agency-related traits, particularly for faces oriented congruently with script direction (i.e., left-to-right). Participants rated front-oriented, left-oriented and right-oriented faces on 14 dimensions. These ratings were first reduced to two dimensions, which were further confirmed with a new sample: power and social-warmth. Both dimensions were systematically affected by head orientation. Right-oriented faces generated a stronger endorsement of the power dimension (e.g., agency, dominance), and, to a lesser extent, of the social-warmth dimension, relative to the left and frontal-oriented faces. A further interaction between the head orientation of the faces and their gender revealed that front-facing females, relative to front-facing males, were attributed higher social-warmth scores, or communal traits (e.g., valence, warmth). These results carry implications for the representation of people in space particularly in marketing and political contexts. Face stimuli and respective norming data are available at www.osf.io/v5jpd.
Affiliation(s)
- Rita Mendonça: William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal
- Margarida V Garrido: ISCTE - Instituto Universitário de Lisboa, Centro de Investigação e Intervenção Social, Lisbon, Portugal
- Gün R Semin: William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal; Faculty of Social and Behavioral Sciences, Utrecht University, Utrecht, Netherlands
16
Chung KM, Kim S, Jung WH, Kim Y. Development and Validation of the Yonsei Face Database (YFace DB). Front Psychol 2019; 10:2626. [PMID: 31849755 PMCID: PMC6901828 DOI: 10.3389/fpsyg.2019.02626] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Accepted: 11/07/2019] [Indexed: 12/13/2022] Open
Abstract
The purposes of this study were to develop the Yonsei Face Database (YFace DB), consisting of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust), and to test its validity. The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40. A total of 1,480 selected pictures and film clips were assessed for accuracy, intensity, and naturalness during the validation procedure by 221 undergraduate students. The overall accuracy of the pictures was 76%. Film clips had a higher accuracy of 83%; the highest accuracy was observed for happiness and the lowest for fear across all conditions (static with mouth open or closed, or dynamic). Accuracy was higher in film clips for all emotions except happiness and disgust, while naturalness was higher in the pictures than in the film clips except for sadness and anger. Intensity varied the most across conditions and emotions. Significant gender effects were found in perception accuracy for the gender of both models and raters. Male raters perceived surprise more accurately in static stimuli with mouth open and in dynamic stimuli, while female raters perceived fear more accurately in all conditions. Moreover, sadness and anger expressed in static stimuli with mouth open and fear expressed in dynamic stimuli were perceived more accurately when models were male. Disgust expressed in static stimuli with mouth open and in dynamic stimuli, and fear expressed in static stimuli with mouth closed, were perceived more accurately when models were female. The YFace DB is the largest Asian face database to date and the first to include both static and dynamic facial expression stimuli, and the current study provides researchers with a wealth of information about the validity of each stimulus through the validation procedure.
Affiliation(s)
- Kyong-Mee Chung: Department of Psychology, Yonsei University, Seoul, South Korea
- Soojin Kim: Department of Psychology, Yonsei University, Seoul, South Korea
- Woo Hyun Jung: Department of Psychology, Chungbuk National University, Cheongju, South Korea
- Yeunjoo Kim: Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
17
Animal Images Database: Validation of 120 Images for Human-Animal Studies. Animals (Basel) 2019; 9:ani9080475. [PMID: 31344828 PMCID: PMC6727086 DOI: 10.3390/ani9080475] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 07/18/2019] [Accepted: 07/20/2019] [Indexed: 11/17/2022] Open
Abstract
Simple Summary: With the general goal of increasing knowledge about how individuals perceive and evaluate different animals, we provide normative data on an extensive set of open-source animal images, spanning a total of 12 biological categories (e.g., mammals, insects, reptiles, arachnids), on 11 evaluative dimensions (e.g., valence, cuteness, capacity to think, acceptability to kill for human consumption). We found that animal evaluations were affected by individual characteristics of the perceiver, particularly gender, diet and companion animal ownership. Moral attitudes towards animals were predominantly predicted by ratings of cuteness, edibility, capacity to feel and familiarity. We hope this free resource may help advance research into the many different ways we relate to animals.
Abstract: There has been increasing interest in the study of human-animal relations. This contrasts with the lack of normative resources and materials for research purposes. We present subjective norms for a set of 120 open-source colour images of animals spanning a total of 12 biological categories (e.g., mammals, insects, reptiles, arachnids). Participants (N = 509, 55.2% female, MAge = 28.05, SD = 9.84) were asked to evaluate a randomly selected sub-set of 12 animals on valence, arousal, familiarity, cuteness, dangerousness, edibility, similarity to humans, capacity to think, capacity to feel, acceptability to kill for human consumption and feelings of care and protection. Animal evaluations were affected by individual characteristics of the perceiver, particularly gender, diet and companion animal ownership. Moral attitudes towards animals were predominantly predicted by ratings of cuteness, edibility, capacity to feel and familiarity. The Animal Images Database (Animal.ID) is the largest open-source database of rated images of animals; the stimuli set and item-level data are freely available online.
18
Prada M, Garrido MV, Camilo C, Rodrigues DL. Subjective ratings and emotional recognition of children's facial expressions from the CAFE set. PLoS One 2018; 13:e0209644. [PMID: 30589868 PMCID: PMC6307702 DOI: 10.1371/journal.pone.0209644] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2018] [Accepted: 12/10/2018] [Indexed: 12/02/2022] Open
Abstract
Access to validated stimuli depicting children's facial expressions is useful for different research domains (e.g., developmental, cognitive or social psychology). Yet, such databases are scarce in comparison to others portraying adult models, and validation procedures are typically restricted to emotional recognition accuracy. This work presents subjective ratings for a sub-set of 283 photographs selected from the Child Affective Facial Expression set (CAFE [1]). Extending beyond the original emotion recognition accuracy norms [2], our main goal was to validate this database across eight subjective dimensions related to the model (e.g., attractiveness, familiarity) or the specific facial expression (e.g., intensity, genuineness), using a sample from a different nationality (N = 450 Portuguese participants). We also assessed emotion recognition (forced-choice task with seven options: anger, disgust, fear, happiness, sadness, surprise and neutral). Overall results show that most photographs were rated as highly clear, genuine and intense facial expressions. The models were rated as both moderately familiar and likely to belong to the in-group, obtaining high attractiveness and arousal ratings. Results also showed that, similarly to the original study, the facial expressions were accurately recognized. Normative and raw data are available as supplementary material at https://osf.io/mjqfx/.
Affiliation(s)
- Marília Prada: Department of Social and Organizational Psychology, Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
- Margarida V. Garrido: Department of Social and Organizational Psychology, Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
- Cláudia Camilo: Department of Social and Organizational Psychology, Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
- David L. Rodrigues: Department of Social and Organizational Psychology, Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
19
Donadon MF, Martin-Santos R, Osório FL. Baby Faces: Development and psychometric study of a stimuli set based on babies' emotions. J Neurosci Methods 2018; 311:178-185. [PMID: 30347221 DOI: 10.1016/j.jneumeth.2018.10.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2018] [Revised: 10/05/2018] [Accepted: 10/15/2018] [Indexed: 10/28/2022]
Abstract
BACKGROUND Sets of stimuli based on babies' facial emotions provide a good instrument to assess the recognition of facial emotion (RFE) in clinical and non-clinical groups. However, specificities of such stimuli have not been widely explored and validated in previous studies. NEW METHOD We present a new set of facial stimuli from infants aged 6-12 months, of both sexes and different races, representing five basic emotions. We also present the psychometric properties of validity/reliability for each stimulus and assess whether the sociodemographic characteristics of the stimuli and the subjects affect RFE. RESULTS The stimuli were obtained through a standardized protocol of activities to elicit emotions, and 72 stimuli were developed. A total of 119 subjects from the community were selected for the psychometric analysis of the stimuli. The set produced indicators of validity (mean 62.5%) and reliability. Stimuli were evaluated using the Rasch model, and 15 stimuli showed indicators of unpredictability and unmodeled residuals. The difficulty index of each stimulus was calculated, showing that the set was normally distributed. COMPARISON WITH EXISTING METHOD Previously published methods are limited in terms of racial diversity, standardization of the elicitation of emotions, procedure of stimuli extraction, and psychometric evidence. CONCLUSIONS The findings reinforced the Differential Emotion Theory regarding the expression of basic emotions in infants and evidenced the effect of education level on emotion recognition over other sociocultural characteristics (sex and race). This set is freely accessible by email request.
Affiliation(s)
- Mariana Fortunata Donadon: Department of Neuroscience and Behavior, Medical School of Ribeirão Preto, University of São Paulo, Brazil
- Rocio Martin-Santos: Hospital Clínic, IDIBAPS, CIBERSAM, Spain; Department of Medicine, University of Barcelona, Barcelona, Spain; National Institute for Science and Technology (INCT-TM, CNPq, Brazil), Brazil
- Flávia L Osório: Department of Neuroscience and Behavior, Medical School of Ribeirão Preto, University of São Paulo, Brazil; Hospital Clínic, IDIBAPS, CIBERSAM, Spain; Department of Medicine, University of Barcelona, Barcelona, Spain
20
Prada M, Rodrigues DL, Garrido MV, Lopes D, Cavalheiro B, Gaspar R. Motives, frequency and attitudes toward emoji and emoticon use. Telematics and Informatics 2018. [DOI: 10.1016/j.tele.2018.06.005] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
21
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426 PMCID: PMC5955500 DOI: 10.1371/journal.pone.0196391] [Citation(s) in RCA: 175] [Impact Index Per Article: 29.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
Affiliation(s)
- Steven R. Livingstone: Department of Psychology, Ryerson University, Toronto, Canada; Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, United States of America
- Frank A. Russo: Department of Psychology, Ryerson University, Toronto, Canada
22
Garrido MV, Prada M. KDEF-PT: Valence, Emotional Intensity, Familiarity and Attractiveness Ratings of Angry, Neutral, and Happy Faces. Front Psychol 2017; 8:2181. [PMID: 29312053 PMCID: PMC5742208 DOI: 10.3389/fpsyg.2017.02181] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 11/30/2017] [Indexed: 12/21/2022] Open
Abstract
The Karolinska Directed Emotional Faces (KDEF) is one of the most widely used human facial expression databases. Almost a decade after the original validation study (Goeleven et al., 2008), we present subjective rating norms for a sub-set of 210 pictures depicting 70 models (half female), each displaying an angry, a happy, and a neutral facial expression. Our main goals were to provide an additional and updated validation of this database, using a sample from a different nationality (N = 155 Portuguese students, M = 23.73 years old, SD = 7.24), and to extend the number of subjective dimensions used to evaluate each image. Specifically, participants reported emotional labeling (forced-choice task) and evaluated the emotional intensity and valence of the expression, as well as the attractiveness and familiarity of the model (7-point rating scales). Overall, results show that happy faces obtained the highest ratings across evaluative dimensions and the highest emotion labeling accuracy. Female (vs. male) models were perceived as more attractive, familiar and positive. The sex of the model also moderated the accuracy of emotional labeling and ratings of different facial expressions. Each picture of the set was categorized as low, moderate, or high on each dimension. Normative data for each stimulus (hit proportions, means, standard deviations, and confidence intervals per evaluative dimension) are available as supplementary material (available at https://osf.io/fvc4m/).
Affiliation(s)
- Marília Prada: Instituto Universitário de Lisboa (ISCTE-IUL), CIS - IUL, Lisboa, Portugal
23
Prada M, Rodrigues D, Garrido MV, Lopes J. Food-pics-PT: Portuguese validation of food images in 10 subjective evaluative dimensions. Food Qual Prefer 2017. [DOI: 10.1016/j.foodqual.2017.04.015] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
24
Lisbon Emoji and Emoticon Database (LEED): Norms for emoji and emoticons in seven evaluative dimensions. Behav Res Methods 2017; 50:392-405. [DOI: 10.3758/s13428-017-0878-6] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]