1
Peng S, Liu T, Yang Y. Face age modulates face ensemble coding. Vision Res 2025; 228:108549. PMID: 39864130. DOI: 10.1016/j.visres.2025.108549.
Abstract
Research has demonstrated that humans possess the remarkable ability to swiftly extract ensemble statistics, specifically the average identity, from sets of stimuli such as facial crowds. This phenomenon is known as ensemble perception. Although previous studies have investigated how physiognomic features like gender and race influence face ensemble perception, the impact of face age on face ensemble coding performance remains relatively unexplored. Here, we demonstrated that ensemble coding of multiple faces in terms of an average face was affected by face age. In both Experiments 1 and 2, adult participants viewed sets of four faces that were either own-age or other-age and then judged whether a subsequently presented probe face had been present in the preceding set. The other-age faces were older faces in Experiment 1 and baby faces in Experiment 2. In both experiments, participants incorrectly endorsed a morphed set average as a member of the set, pointing to a face ensemble coding ability. Furthermore, Experiment 1 revealed an own-age superiority when the other-age faces were older faces, whereas in Experiment 2, when the other-age faces were baby faces, participants displayed a stronger visual averaging tendency towards other-age than own-age faces, showing a babyface effect. Together, the present research provides initial evidence that face ensemble coding performance is modulated by face age.
Affiliation(s)
- Shenli Peng: Department of Psychology, College of Education, Hunan Agricultural University
- Tianhui Liu: Department of Psychology, College of Education, Hunan Agricultural University
- Yi Yang: Department of Psychology, College of Education, Hunan Agricultural University

2
Martin D, Bottomley E, Hutchison J, Konopka AE, Williamson G, Swainson R. Social Category Modulation of the Happy Face Advantage. Pers Soc Psychol Bull 2025:1461672241310917. PMID: 39829318. DOI: 10.1177/01461672241310917.
Abstract
The size of the happy face advantage (faster categorization of happy faces) is modulated by interactions between perceiver and target social categories, with reliable happy face advantages for ingroups but not necessarily outgroups. The current understanding of this phenomenon is constrained by the limited social categories typically used in experiments. To better understand the mechanism(s) underpinning social category modulation of the happy face advantage, we used racially more diverse samples of perceivers and target faces and manipulated the intergroup context in which they appeared. We found evidence of ingroup bias, with perceivers often showing a larger happy face advantage for ingroups than outgroups (Experiments 1-2). We also found evidence of majority/minority group bias, with perceivers showing a larger happy face advantage for majority outgroups than minority outgroups (Experiments 2-3c). These findings suggest social category modulation of the happy face advantage is a dynamic, context-dependent process.
3
Gu B, Sun X, Beltrán D, de Vega M. Faces of different socio-cultural identities impact emotional meaning learning for L2 words. Sci Rep 2025; 15:616. PMID: 39753658. PMCID: PMC11699134. DOI: 10.1038/s41598-024-84347-7.
Abstract
This study investigated how exposure to Caucasian and Chinese faces influences native Mandarin-Chinese speakers' learning of emotional meanings for English L2 words. Participants were presented with English pseudowords repeatedly paired with either Caucasian faces or Chinese faces showing emotions of disgust, sadness, or neutrality as a control baseline. Participants' learning was evaluated through both within-modality (i.e., testing participants with new sets of faces) and cross-modality (i.e., testing participants with sentences expressing the learned emotions) generalization tests. When matching newly learned L2 words with new faces, participants from both groups were more accurate in the neutral condition than in the sad condition. The advantage of neutrality extended to sentences, as participants matched newly learned L2 words with neutral sentences more accurately than with disgusting or sad ones. Differences between the two groups were also found in the cross-modality generalization test, in which the Caucasian-face group outperformed the Chinese-face group in accuracy on sad trials, whereas the Chinese-face group was more accurate on neutral trials. We thus conclude that faces of diverse socio-cultural identities exert different impacts on emotional meaning learning for L2 words.
Affiliation(s)
- Beixian Gu: School of Foreign Languages, Institute for Language and Cognition, Dalian University of Technology, Dalian, China
- Xiaobing Sun: National Research Centre for Foreign Language Education, Beijing Foreign Studies University, Beijing, China
- David Beltrán: Psychology Department, Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain; Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, La Laguna, Spain
- Manuel de Vega: Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, La Laguna, Spain

4
Luo T, Xu M, Zheng Z, Okazawa G. Limitation of switching sensory information flow in flexible perceptual decision making. Nat Commun 2025; 16:172. PMID: 39747100. PMCID: PMC11696174. DOI: 10.1038/s41467-024-55686-w.
Abstract
Humans can flexibly change rules to categorize sensory stimuli, but their performance degrades immediately after a task switch. This switch cost is believed to reflect a limitation in cognitive control, although the bottlenecks remain controversial. Here, we show that humans exhibit a brief reduction in the efficiency of using sensory inputs to form a decision after a rule change. Participants classified face stimuli based on one of two rules, switching every few trials. Psychophysical reverse correlation and computational modeling reveal a reduction in sensory weighting, which recovers within a few hundred milliseconds after stimulus presentation. This reduction depends on the sensory features being switched, suggesting a constraint in routing the sensory information flow. We propose that decision-making circuits cannot fully adjust their sensory readout based on a context cue alone, but require the presence of an actual stimulus to tune it, leading to a limitation in flexible perceptual decision making.
Affiliation(s)
- Tianlin Luo: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Mengya Xu: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhihao Zheng: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Gouki Okazawa: Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-Inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China

5
Liu X, He D, Zhu M, Li Y, Lin L, Cai Q. Hemispheric dominance in reading system alters contribution to face processing lateralization across development. Dev Cogn Neurosci 2024; 69:101418. PMID: 39059053. PMCID: PMC11331717. DOI: 10.1016/j.dcn.2024.101418.
Abstract
Face processing is dominated by the right hemisphere. This lateralization can be affected by co-lateralization within the same system and by influences between different systems, such as neural competition from reading acquisition. Yet how this relationship pattern changes through development remains unknown. This study examined the lateralization of core face processing and word processing in different age groups. By comparing fMRI data from 36 school-aged children and 40 young adults, we investigated whether there are age and regional effects on lateralization, and how relationships between lateralization within and between systems change across development. Our results showed significant right-hemispheric lateralization in the core face system and left-hemispheric lateralization in reading-related areas for both age groups when viewing faces and texts passively. While all participants showed stronger lateralization in brain regions of higher functional hierarchy when viewing faces, only adults exhibited this lateralization when viewing texts. In both age cohorts, there was intra-system co-lateralization for face processing, whereas an inter-system relationship was found only in adults. Specifically, functional lateralization of Broca's area during reading negatively predicted functional asymmetry in the FFA during face perception. This study provides initial neuroimaging evidence for the reading-induced neural competition theory from a maturational perspective in Chinese cohorts.
Affiliation(s)
- Xinyang Liu: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Danni He: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Miaomiao Zhu: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Yinghui Li: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Longnian Lin: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Shanghai Center for Brain Science and Brain-Inspired Technology, East China Normal University, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University, Shanghai, China; School of Life Science Department, East China Normal University, Shanghai 200062, China
- Qing Cai: Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center (ECNU), Institute of Brain and Education Innovation, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; Shanghai Changning Mental Health Center, Shanghai 200335, China; Shanghai Center for Brain Science and Brain-Inspired Technology, East China Normal University, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University, Shanghai, China

6
Ji L, Chen Z, Zeng X, Sun B, Fu S. Automatic processing of unattended mean emotion: Evidence from visual mismatch responses. Neuropsychologia 2024; 202:108963. PMID: 39069120. DOI: 10.1016/j.neuropsychologia.2024.108963.
Abstract
The mean emotion from multiple facial expressions can be extracted rapidly and precisely. However, it remains debated whether mean emotion processing is automatic, that is, whether it can occur without attention. To address this question, we used a passive oddball paradigm and recorded event-related brain potentials while participants discriminated changes in the central fixation as a set of four faces was presented in the periphery. The face set consisted of one happy and three angry expressions (mean negative) or one angry and three happy expressions (mean positive); the mean negative and mean positive face sets were shown with probabilities of 20% (deviant) and 80% (standard), respectively, in the sequence, or vice versa. Cluster-based permutation analyses showed that the visual mismatch negativity started as early as around 92 ms and was also observed in later time windows when the mean emotion was negative, whereas a mismatch positivity was observed at around 168-266 ms when the mean emotion was positive. The results suggest that different mechanisms may underlie the processing of mean negative and mean positive emotions. More importantly, the brain can detect changes in the mean emotion automatically, and ensemble coding of multiple facial expressions can occur in an automatic fashion, without attention.
Affiliation(s)
- Luyan Ji: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Zilong Chen: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Xianqing Zeng: School of Psychology, South China Normal University, Guangzhou, China
- Bo Sun: Institute of Psychology and Behavior, Henan University, Kaifeng, China
- Shimin Fu: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China

7
Yeung MK. Effects of age on the interactions of attentional and emotional processes: a prefrontal fNIRS study. Cogn Emot 2024; 38:549-564. PMID: 38303643. DOI: 10.1080/02699931.2024.2311799.
Abstract
The aging of attentional and emotional functions has been extensively studied, but relatively independently. Therefore, the relationships between aging and the interactions of attentional and emotional processes remain elusive. This study aimed to determine how age affects the interactions between attentional and emotional processes during adulthood. One hundred forty adults aged 18-79 performed the emotional variant of the Attention Network Test, which probed alerting, orienting, and executive control in the presence and absence of threatening faces. During this task, contexts with varying levels of task preparatory processes were created to modulate the effect of threatening faces on attention, and functional near-infrared spectroscopy (fNIRS) was used to examine the neural underpinnings of the behavioural effects. The behavioural results showed that aging was associated with a significant decline in alerting efficiency, and there was a statistical trend for age-related deficits in executive control. Despite these age differences, age did not significantly moderate the interactions among attentional networks or between attention and emotion. Additionally, the fNIRS results showed that decreased frontal cortex functioning might underlie the age-related decline in executive control. Therefore, while aging has varying effects on different attentional networks, the interactions of attentional and emotional processes remain relatively unaffected by age.
Affiliation(s)
- Michael K Yeung: Department of Psychology, The Education University of Hong Kong, Hong Kong, People's Republic of China; University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, People's Republic of China

8
Huang W, Xu W, Wan R, Zhang P, Zha Y, Pang M. Auto Diagnosis of Parkinson's Disease Via a Deep Learning Model Based on Mixed Emotional Facial Expressions. IEEE J Biomed Health Inform 2024; 28:2547-2557. PMID: 37022035. DOI: 10.1109/jbhi.2023.3239780.
Abstract
Parkinson's disease (PD) is a common degenerative disease of the nervous system in the elderly. Early diagnosis of PD is very important for potential patients to receive prompt treatment and avoid aggravation of the disease. Recent studies have found that PD patients often suffer from emotional expression disorder, giving rise to the characteristic "masked face". On this basis, we propose an automatic PD diagnosis method based on mixed emotional facial expressions. Specifically, the proposed method proceeds in four steps: First, we synthesize virtual face images containing six basic expressions (i.e., anger, disgust, fear, happiness, sadness, and surprise) via generative adversarial learning, in order to approximate the premorbid expressions of PD patients. Second, we design an effective screening scheme to assess the quality of the synthesized facial expression images and shortlist the high-quality ones. Third, we train a deep feature extractor together with a facial expression classifier on a mixture of the original facial expression images of PD patients, the high-quality synthesized facial expression images of PD patients, and normal facial expression images from other public face datasets. Finally, we use the trained deep feature extractor to extract latent expression features from six facial expression images of a potential PD patient for PD/non-PD prediction. To demonstrate real-world impact, we also collected a new facial expression dataset of PD patients in collaboration with a hospital. Extensive experiments validate the effectiveness of the proposed method for PD diagnosis and facial expression recognition.
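As an illustration of the final prediction step, here is a minimal sketch in Keras; the input size, layer widths, and mean-pooling fusion are assumptions for illustration, not the authors' published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hedged sketch of the final prediction step only; 128x128 inputs and all
# layer sizes are illustrative assumptions, not the paper's architecture.
encoder = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),   # latent expression features
])

# One image per basic expression (anger, disgust, fear, happiness,
# sadness, surprise), each encoded by the shared extractor.
six_faces = tf.keras.Input(shape=(6, 128, 128, 3))
feats = layers.TimeDistributed(encoder)(six_faces)   # (batch, 6, 128)
pooled = layers.GlobalAveragePooling1D()(feats)      # fuse across expressions
pd_prob = layers.Dense(1, activation="sigmoid")(pooled)  # PD vs. non-PD

model = tf.keras.Model(six_faces, pd_prob)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Averaging the six per-expression embeddings is one simple fusion choice; the abstract does not specify the exact aggregation used.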
9
Yeung MK, Wan JCH, Chan MMK, Cheung SHY, Sze SCY, Siu WWY. Motivation and emotional distraction interact and affect executive functions. BMC Psychol 2024; 12:188. PMID: 38581067. PMCID: PMC10998358. DOI: 10.1186/s40359-024-01695-9.
Abstract
Previous research on cool-hot executive function (EF) interactions has examined the effects of motivation and emotional distraction on cool EF separately, focusing on one EF component at a time. Although both incentives and emotional distractors have been shown to modulate attention, how they interact and affect cool EF processes is still unclear. Here, we used an experimental paradigm that manipulated updating, inhibition, and shifting demands to determine the interactions of motivation and emotional distraction in the context of cool EF. Forty-five young adults (16 males, 29 females) completed the go/no-go (inhibition), two-back (updating), and task-switching (shifting) tasks. Monetary incentives were implemented to manipulate motivation, and task-irrelevant threatening or neutral faces were presented before the target stimulus to manipulate emotional distraction. We found that incentives significantly improved no-go accuracy, two-back accuracy, and reaction time (RT) switch cost. While emotional distractors had no significant effects on overall task performance, they abolished the incentive effects on no-go accuracy and RT switch cost. Altogether, these findings suggest that motivation and emotional distraction interact in the context of cool EF. Specifically, transient emotional distraction disrupts the upregulation of control activated by incentives. The present investigation has advanced knowledge about the relationship between cool and hot EF and highlights the importance of considering motivation-emotion interactions for a fuller understanding of control.
Affiliation(s)
- Michael K Yeung: Department of Psychology, The Education University of Hong Kong, Tai Po, Hong Kong, China
- Jaden Cheuk-Hei Wan: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Michelle Mei-Ka Chan: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Sam Ho-Yu Cheung: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Steven Chun-Yui Sze: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Winnie Wing-Yi Siu: Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China

10
Aly M, Sakamoto M, Kamijo K. Grip strength, working memory, and emotion perception in middle-aged males. Prog Brain Res 2024; 286:89-105. PMID: 38876580. DOI: 10.1016/bs.pbr.2023.12.004.
Abstract
This study examined the association between grip strength and emotional working memory in middle-aged adults. Seventy-six males aged 40-60 years (mean = 48.5 years, SD = 5.4) participated in this cross-sectional study. They completed a muscular fitness assessment using a maximum grip strength test and emotional n-back tasks under two emotion conditions (fearful and neutral facial pictures) and two working memory loads (1-back and 2-back tasks). Hierarchical regression analyses indicated that greater muscular fitness was associated with superior working memory performance in the fearful condition in both the 1-back and 2-back tasks, after controlling for confounders. Greater muscular fitness was also associated with superior working memory performance in the neutral condition when the working memory load was high (2-back task) but not low (1-back task). These findings suggest a positive association between muscular fitness and emotional working memory and highlight the importance of maintaining muscular fitness for physical and cognitive-emotional well-being in middle-aged adults.
Affiliation(s)
- Mohamed Aly: Faculty of Liberal Arts and Sciences, Chukyo University, Nagoya, Japan; Department of Educational Sciences and Sports Psychology, Faculty of Physical Education, Assiut University, Assiut, Egypt
- Masanori Sakamoto: Department of Physical Education, Faculty of Education, Kumamoto University, Kumamoto, Japan
- Keita Kamijo: Faculty of Liberal Arts and Sciences, Chukyo University, Nagoya, Japan

11
Zhang Z, Peng Y, Jiang Y, Chen T. The pictorial set of Emotional Social Interactive Scenarios between Chinese Adults (ESISCA): Development and validation. Behav Res Methods 2024; 56:2581-2594. PMID: 37528294. DOI: 10.3758/s13428-023-02168-4.
Abstract
Affective picture databases with a single facial expression or body posture per image have been widely applied to investigate emotion. However, to date, there has been no standardized database of stimuli involving multiple emotional signals in social interactive scenarios. The current study therefore developed a pictorial set comprising 274 images depicting interactive scenarios between two Chinese adults that convey happiness, anger, sadness, fear, disgust, and neutrality. Valence and arousal ratings of the scenes, as well as the emotional categories of the scenes and the faces in the images, are provided. Analyses of data collected from 70 undergraduate students suggested high reliability of the valence and arousal ratings of the scenes and high judgmental agreement in categorizing the scene and facial emotions. The findings suggest that the present dataset is well constructed and could be useful for future studies investigating emotion recognition or empathy in social interactions in both healthy and clinical (e.g., ASD) populations.
Affiliation(s)
- Ziyu Zhang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yanqin Peng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yiyao Jiang: College of Arts and Sciences, Syracuse University, Syracuse, NY 13244, USA
- Tingji Chen: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China

12
Liu Y, Ji L. Ensemble coding of multiple facial expressions is not affected by attentional load. BMC Psychol 2024; 12:102. PMID: 38414021. PMCID: PMC10900713. DOI: 10.1186/s40359-024-01598-9.
Abstract
Human observers can extract the mean emotion from multiple faces rapidly and precisely. However, whether attention is required for ensemble coding of facial expressions remains debated. In this study, we examined the effect of attentional load on mean emotion processing with a dual-task paradigm; individual emotion processing was investigated as the control task. In the experiment, a letter string and a set of four happy or angry faces of various emotional intensities were shown. Participants completed the string task first, judging either the string color (low attentional load) or the presence of a target letter (high attentional load). A cue then indicated whether the secondary task was to evaluate the mean emotion of the faces or the emotion of a cued single face, and participants made their judgments on a visual analog scale. Compared with the color task, the letter task yielded longer response times and lower accuracy, verifying the manipulation of attentional load. More importantly, there was no significant difference in averaging performance between the low and high attentional loads. By contrast, individual face processing was impaired under the high attentional load relative to the low attentional load. In addition, the advantage of extracting mean emotion over individual emotion was larger under the high attentional load. These results support the power of averaging and provide new evidence that only a rather small amount of attention is needed for ensemble coding of multiple facial expressions.
Affiliation(s)
- Yujuan Liu: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou 510006, China; Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macao, China
- Luyan Ji: Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou 510006, China

13
Han S, Guo Y, Zhou X, Huang J, Shen L, Luo Y. A Chinese Face Dataset with Dynamic Expressions and Diverse Ages Synthesized by Deep Learning. Sci Data 2023; 10:878. PMID: 38062057. PMCID: PMC10703811. DOI: 10.1038/s41597-023-02701-2.
Abstract
Facial stimuli have gained increasing popularity in research. However, existing Chinese facial datasets primarily consist of static facial expressions and lack variation in facial aging. Additionally, these datasets are limited to stimuli from a small number of individuals, because it is difficult and time-consuming to recruit a diverse range of volunteers across different age groups to capture their facial expressions. In this paper, a deep-learning-based face editing approach, StyleGAN, is used to synthesize a Chinese face dataset, SZU-EmoDage, in which faces with different expressions and ages are synthesized. By interpolating latent vectors, continuous dynamic expressions with different intensities are also available. Participants assessed the emotional categories and dimensions (valence, arousal, and dominance) of the synthesized faces. The results show that the face database has good reliability and validity and can be used in relevant psychological experiments. The availability of SZU-EmoDage opens up avenues for further research in psychology and related fields, allowing for a deeper understanding of facial perception.
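To make the latent-interpolation idea concrete, here is a minimal sketch, assuming a pretrained StyleGAN generator G (hypothetical here, not loaded) and 512-dimensional latent codes:

```python
import numpy as np

def interpolate_latents(z_start, z_end, n_frames=30):
    """Linearly interpolate between two latent codes, yielding a sequence
    that ramps an expression from one state to another."""
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1 - a) * z_start + a * z_end for a in alphas]

# Hypothetical usage: G would map a 512-dim latent code to a face image.
rng = np.random.default_rng(0)
z_neutral, z_happy = rng.standard_normal(512), rng.standard_normal(512)
codes = interpolate_latents(z_neutral, z_happy)   # 30 intermediate codes
# frames = [G(z) for z in codes]  # would render a smooth neutral-to-happy ramp
```

Each intermediate code corresponds to an expression of intermediate intensity, which is what yields the continuous dynamic expressions described above.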
Affiliation(s)
- Shangfeng Han: School of Psychology, Magnetic Resonance Imaging Center, China-UK Visual Information Processing Laboratory, Institute of Computer Vision, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China; Department of Psychology and Center for Brain and Cognitive Sciences, School of Education, Guangzhou University, Guangzhou, China
- Yanliang Guo: School of Psychology, Magnetic Resonance Imaging Center, China-UK Visual Information Processing Laboratory, Institute of Computer Vision, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- Xinyi Zhou: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Junlong Huang: School of Psychology, Sichuan Center of Applied Psychology, Chengdu Medical College, Chengdu, China
- Linlin Shen: School of Psychology, Magnetic Resonance Imaging Center, China-UK Visual Information Processing Laboratory, Institute of Computer Vision, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
- Yuejia Luo: State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; School of Psychology, Sichuan Center of Applied Psychology, Chengdu Medical College, Chengdu, China; Institute for Neuropsychological Rehabilitation, University of Health and Rehabilitation Sciences, Qingdao, China

14
Wu D, Gao C, Li BM, Jia X. Negative emotion amplifies retrieval practice effect for both task-relevant and task-irrelevant information. Cogn Emot 2023; 37:1199-1212. PMID: 37697968. DOI: 10.1080/02699931.2023.2255334.
Abstract
Selective retrieval of task-relevant information often facilitates memory retention of that information. However, it is still unclear whether selective retrieval of task-relevant information can alter memory for task-irrelevant information, and what role emotional arousal plays in this process. In two experiments, we used emotional and neutral faces as stimuli; participants were asked to memorise the name (who is this person?) and location (where does he/she come from?) associated with each face in an initial study phase. Then, half of the studied faces were presented as cues, and participants were asked to retrieve the corresponding names (Experiment 1) or locations (Experiment 2). Finally, all the faces were presented and participants were asked to retrieve both the corresponding names and locations. The results of the final test showed that retrieval practice enhanced memory not only for task-relevant information but also for task-irrelevant information. More importantly, negative emotion amplified the retrieval practice effect overall, with a larger retrieval-induced benefit in the negative than in the neutral condition. These findings demonstrate an emotional-arousal amplification of retrieval-induced enhancement effects, suggesting that the advantage of retrieved memory representations can be amplified by emotional arousal even without explicit goals in a task setting.
Affiliation(s)
- Di Wu: Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, People's Republic of China
- Chuanji Gao: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Bao-Ming Li: Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, People's Republic of China
- Xi Jia: Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, People's Republic of China

15
Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. PMID: 36018484. DOI: 10.3758/s13428-022-01951-z.
Abstract
Faces convey a wide range of information, including one's identity, and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored at a resolution of 1920 × 1080 pixels and a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one free speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz: Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul 34956, Turkey
- Bilge Sayim: SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France; Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
- Nihan Alp: Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul 34956, Turkey

16
Verma R, Kalsi N, Shrivastava NP, Sheerha A. Development and Validation of the AIIMS Facial Toolbox for Emotion Recognition. Indian J Psychol Med 2023; 45:471-475. PMID: 37772150. PMCID: PMC10523516. DOI: 10.1177/02537176221111578.
Abstract
Background: Emotional facial expression databases, used in emotion regulation studies, are special sets of pictures with high social and biological relevance. We present the AIIMS Facial Toolbox for Emotion Recognition (AFTER) database. It consists of pictures of 15 adult professional artists displaying seven facial expressions: neutral, happiness, anger, sadness, disgust, fear, and surprise.
Methods: This cross-sectional study enrolled 15 volunteer students from a professional drama college in India (six males and nine females; mean age = 26.2 ± 1.93 years). They were instructed to pose different emotional expressions at high and low intensity. A total of 240 pictures were captured in a brightly lit room against a common, light background. Each picture was validated independently by 19 mental health professionals and two professional teachers of dramatic art. Apart from recognition of emotional quality, each emotion was rated on a 5-point Likert scale on three dimensions: intensity, clarity, and genuineness. Results are discussed in terms of mean scores on all four parameters.
Results: The percentage hit rate for all the emotions, after exclusion of contempt, was 84.3%, with a mean kappa for emotional expression of 0.68. Mean scores on intensity, clarity, and genuineness of the emotions depicted in the pictures were high.
Conclusions: The database would be useful in the Indian context for researching facial emotion recognition. It has been validated by a group of experts and found to have high inter-rater reliability.
Affiliation(s)
- Rohit Verma: Brain Mapping Lab, Dept. of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Navkiran Kalsi: Brain Mapping Lab, Dept. of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Neha Priya Shrivastava: Brain Mapping Lab, Dept. of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India
- Anita Sheerha: Brain Mapping Lab, Dept. of Psychiatry, All India Institute of Medical Sciences (AIIMS), New Delhi, India

17
Hu X, Tong S. Effects of Robot Animacy and Emotional Expressions on Perspective-Taking Abilities: A Comparative Study across Age Groups. Behav Sci (Basel) 2023; 13:728. PMID: 37754006. PMCID: PMC10525100. DOI: 10.3390/bs13090728.
Abstract
The global population is inevitably aging due to increased life expectancy and declining birth rates, leading to an amplified demand for innovative social and healthcare services. One promising avenue is the introduction of companion robots. These robots are designed to provide physical assistance as well as emotional support and companionship, necessitating effective human-robot interaction (HRI). This study explores the role of cognitive empathy within HRI, focusing on the influence of robot facial animacy and emotional expressions on perspective-taking abilities, a key aspect of cognitive empathy, across different age groups. To this end, a director task involving 60 participants (30 young and 30 older adults) with varying degrees of robot facial animacy (0%, 50%, 100%) and emotional expressions (happy, neutral) was conducted. The results revealed that older adults displayed enhanced perspective-taking with higher animacy faces. Interestingly, while happiness on high-animacy faces improved perspective-taking, the same expression on low-animacy faces reduced it. These findings highlight the importance of considering facial animacy and emotional expressions in designing companion robots for older adults to optimize user engagement and acceptance. The study's implications are pertinent to the design and development of socially effective service robots, particularly for the aging population.
Affiliation(s)
- Xucong Hu: Faculty of Psychology, Southwest University, Chongqing 400715, China
- Song Tong: Department of Psychology, Tsinghua University, Beijing 100084, China

18
Yeung MK. The prefrontal cortex is differentially involved in implicit and explicit facial emotion processing: An fNIRS study. Biol Psychol 2023; 181:108619. PMID: 37336356. DOI: 10.1016/j.biopsycho.2023.108619.
Abstract
Despite extensive research, the differential roles of the prefrontal cortex (PFC) in implicit and explicit facial emotion processing remain elusive. Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique that can measure changes in both oxyhemoglobin (HbO) and deoxyhemoglobin (HbR) concentrations. Currently, how HbO and HbR change during facial emotion processing remains unclear. Here, fNIRS was used to examine and compare PFC activation during implicit and explicit facial emotion processing. Forty young adults performed a facial-matching task that required either emotion discrimination (explicit task) or age discrimination (implicit task), and the activation of their PFCs was measured by fNIRS. Participants attempted the task on two occasions to determine whether their activation patterns were maintained over time. The PFC displayed increases in HbO and/or decreases in HbR during the implicit and explicit facial emotion tasks. Importantly, there were significantly greater changes in PFC HbO during the explicit task, whereas no significant difference in HbR changes between conditions was found. Between sessions, HbO changes were reduced across tasks, but the difference in HbO changes between the implicit and explicit tasks remained unchanged. The test-retest reliability of the behavioral measures was excellent, whereas that of fNIRS measures was mostly poor to fair. Thus, the PFC plays a specific role in recognizing facial expressions, and its differential involvement in implicit and explicit facial emotion processing can be consistently captured at the group level by changes in HbO. This study demonstrates the potential of fNIRS for elucidating the neural mechanisms underlying facial emotion recognition.
Affiliation(s)
- Michael K Yeung: Department of Psychology, The Education University of Hong Kong, Hong Kong, China; University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China

19
Chantrapornchai C, Kajkamhaeng S, Romphet P. Micro-architecture design exploration template for AutoML case study on SqueezeSEMAuto. Sci Rep 2023; 13:10642. PMID: 37391458. PMCID: PMC10313661. DOI: 10.1038/s41598-023-37682-0.
Abstract
Convolutional Neural Network (CNN) models have been commonly used, primarily for image recognition tasks, in the deep learning area. Finding the right architecture requires many hand-tuning experiments, which are time-consuming. In this paper, we exploit an AutoML framework that adds exploration of the micro-architecture block and a multi-input option. The proposed adaptation has been applied to SqueezeNet with SE blocks combined with residual block combinations. The experiments consider three search strategies: Random, Hyperband, and Bayesian algorithms. Such combinations can lead to solutions with superior accuracy while the model size can be monitored. We demonstrate the approach on two benchmarks: the CIFAR-10 and Tsinghua Facial Expression datasets. The searches allow the designer to find architectures with better accuracy than the traditional architectures, without hand-tuning effort. For example, on CIFAR-10 the search leads to a SqueezeNet architecture using only 4 fire modules with 59% accuracy. When exploring SE block insertion, a model with good insertion points can reach an accuracy of 78%, while the traditional SqueezeNet achieves around 50%. For other tasks, such as facial expression recognition, the proposed approach can reach an accuracy of 71% with proper insertion of SE blocks, an appropriate number of fire modules, and adequate input merging, while the traditional model achieves an accuracy under 20%.
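The kind of search space described (number of fire modules, SE insertion points, and a choice of Random/Hyperband/Bayesian search) can be sketched with Keras Tuner as below; the module definitions and search ranges are simplified assumptions, not the paper's exact template:

```python
import keras
from keras import layers
import keras_tuner as kt

def fire(x, squeeze=16, expand=64):
    # Simplified SqueezeNet fire module: squeeze 1x1, then expand 1x1 + 3x3.
    s = layers.Conv2D(squeeze, 1, activation="relu")(x)
    e1 = layers.Conv2D(expand, 1, activation="relu")(s)
    e3 = layers.Conv2D(expand, 3, padding="same", activation="relu")(s)
    return layers.Concatenate()([e1, e3])

def se(x, ratio=16):
    # Squeeze-and-excitation: reweight channels by a learned gate.
    c = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)
    w = layers.Dense(max(c // ratio, 4), activation="relu")(w)
    w = layers.Dense(c, activation="sigmoid")(w)
    return layers.Multiply()([x, layers.Reshape((1, 1, c))(w)])

def build_model(hp):
    inputs = keras.Input((32, 32, 3))            # CIFAR-10-sized input
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    for i in range(hp.Int("n_fire", 2, 6)):      # searchable module count
        x = fire(x)
        if hp.Boolean(f"se_after_fire_{i}"):     # searchable SE insertion
            x = se(x)
        if i % 2 == 1:
            x = layers.MaxPooling2D()(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(10, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hyperband shown here; kt.RandomSearch and kt.BayesianOptimization are the
# other two strategies mentioned in the paper.
tuner = kt.Hyperband(build_model, objective="val_accuracy", max_epochs=20)
# tuner.search(x_train, y_train, validation_split=0.1)
```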
Affiliation(s)
- Chantana Chantrapornchai: Department of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Supasit Kajkamhaeng: Department of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Phattharaphon Romphet: Department of Computer Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand

20
Zhang L, Chen Y, Wei Y, Leng J, Kong C, Hu P. Kick Cat Effect: Social Context Shapes the Form and Extent of Emotional Contagion. Behav Sci (Basel) 2023; 13:531. PMID: 37503978. PMCID: PMC10376848. DOI: 10.3390/bs13070531.
Abstract
Emotional contagion refers to the transmission and interaction of emotions among people. Researchers have mainly focused on its process and mechanism, often simplifying its social background due to its complexity. Therefore, in this study, we explored whether the presence and clarity of social context affect emotional contagion, and the related neural mechanisms. In Experiment 1, participants were asked to report their subjective experiences after being exposed to the facial expressions of emotional expressers, with or without the corresponding social context being presented. The results revealed that positive or negative expressions from the expressers elicited corresponding emotional experiences in the receivers, regardless of the presence of social context. However, when the social context was absent, the degree of emotional contagion was greater. In Experiment 2, we further investigated the effect of the clarity of social context on emotional contagion and its neural mechanisms. The results showed effects consistent with those of Experiment 1 and highlighted the special role of the N1, N2, P3, and LPP components in this process. According to the emotions-as-social-information theory, individuals may rely more on social appraisal when they lack sufficient contextual information. By referencing the expressions of others and maintaining emotional convergence with them, individuals can adapt more appropriately to their current environment.
Affiliation(s)
- Ling Zhang: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China
- Ying Chen: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China
- Yanqiu Wei: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China
- Jie Leng: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China
- Chao Kong: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China
- Ping Hu: Department of Psychology, Renmin University of China, No. 59 of Zhongguancun Street, Haidian District, Beijing 100872, China

21
Lukac M, Zhambulova G, Abdiyeva K, Lewis M. Study on emotion recognition bias in different regional groups. Sci Rep 2023; 13:8414. PMID: 37225756. PMCID: PMC10209154. DOI: 10.1038/s41598-023-34932-z.
Abstract
Human-machine communication can be substantially enhanced by the inclusion of high-quality real-time recognition of spontaneous human emotional expressions. However, successful recognition of such expressions can be negatively impacted by factors such as sudden variations of lighting, or intentional obfuscation. Reliable recognition can be more substantively impeded by the observation that the presentation and meaning of emotional expressions can vary significantly based on the culture of the expressor and the environment within which the emotions are expressed. As an example, an emotion recognition model trained on a regionally specific database collected from North America might fail to recognize standard emotional expressions from another region, such as East Asia. To address the problem of regional and cultural bias in emotion recognition from facial expressions, we propose a meta-model that fuses multiple emotional cues and features. The proposed approach integrates image features, action level units, micro-expressions and macro-expressions into a multi-cues emotion model (MCAM). Each of the facial attributes incorporated into the model represents a specific category: fine-grained content-independent features, facial muscle movements, short-term facial expressions and high-level facial expressions. The results of the proposed meta-classifier (MCAM) approach show that (a) successful classification of regional facial expressions is based on non-sympathetic features, (b) learning the emotional facial expressions of one regional group can confound the recognition of emotional expressions of other regional groups unless training is done from scratch, and (c) certain facial cues and features of the datasets preclude the design of a perfectly unbiased classifier. As a result of these observations, we posit that to learn certain regional emotional expressions, other regional expressions first have to be "forgotten".
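As a rough sketch of fusing several cue families in a meta-classifier, the following uses scikit-learn stacking; the four feature blocks, their sizes, and the base learners are invented stand-ins for the paper's image features, action units, micro-expressions, and macro-expressions:

```python
import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

# Toy stand-ins: 200 samples whose 40 columns concatenate four cue blocks;
# block boundaries and classifiers are assumptions for illustration.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))
y = rng.integers(0, 7, 200)                     # 7 emotion classes
blocks = {"image": slice(0, 10), "action_units": slice(10, 20),
          "micro": slice(20, 30), "macro": slice(30, 40)}

def cue_expert(sl):
    # Base learner that sees only one cue block of the feature matrix.
    return make_pipeline(FunctionTransformer(lambda Z, sl=sl: Z[:, sl]),
                         SVC(probability=True))

meta = StackingClassifier(
    estimators=[(name, cue_expert(sl)) for name, sl in blocks.items()],
    final_estimator=LogisticRegression(max_iter=1000))  # fuses cue posteriors
meta.fit(X, y)
print(meta.predict(X[:5]))
```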
Affiliation(s)
- Martin Lukac: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Gulnaz Zhambulova: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Kamila Abdiyeva: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan
- Michael Lewis: Department of Computer Science, Nazarbayev University, Kabanbay Batyr 53, Astana 010000, Kazakhstan

22
Yeung MK. Context-specific effects of threatening faces on alerting, orienting, and executive control: A fNIRS study. Heliyon 2023; 9:e15995. PMID: 37206041. PMCID: PMC10189190. DOI: 10.1016/j.heliyon.2023.e15995.
Abstract
Real-world threatening faces possess both useful and irrelevant attributes with respect to the current goal. How these attributes interact and affect attention, which comprises at least three processes hypothesized to engage the frontal lobes (alerting, orienting, and executive control), remains poorly understood. Here, the neurocognitive effects of threatening facial expressions on the three processes of attention were examined through the emotional Attention Network Test (ANT) and functional near-infrared spectroscopy (fNIRS). Forty-seven (20M, 27F) young adults performed a blocked version of the arrow flanker task with neutral and angry facial cues applied in three cue conditions (no, center, and spatial). Hemodynamic changes occurring in participants' frontal cortices during task performance were recorded by multichannel fNIRS. Behavioral results indicated that alerting, orienting, and executive control processes existed in both the neutral and angry conditions. However, depending on the context, angry facial cues affected these processes differently compared with neutral facial cues. Specifically, the angry face disrupted the classical decrease in reaction time from the no-cue to center-cue condition specifically during the congruent condition. Additionally, fNIRS results revealed significant frontal cortical activation during the incongruent vs. congruent task; neither cue nor emotion significantly affected frontal activation. Thus, the findings suggest that the angry face affects all three attentional processes while exerting context-specific effects on attention. They also imply that during the ANT, the frontal cortex is most involved in executive control. The present study offers essential insights into how various attributes of threatening faces interact and alter attention.
Affiliation(s)
- Michael K. Yeung: Department of Psychology, The Education University of Hong Kong, Hong Kong, China; Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China; University Research Facility in Behavioral and Systems Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China

23
Long H, Peluso N, Baker CI, Japee S, Taubert J. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 2023; 13:5383. PMID: 37012369. PMCID: PMC10070342. DOI: 10.1038/s41598-023-32659-5.
Abstract
Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, when comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high-arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and the real world in studies of expression recognition.
Affiliation(s)
- Houqiu Long
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Natalie Peluso
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Jessica Taubert
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
24
Banerjee A, Mutlu OC, Kline A, Surabhi S, Washington P, Wall DP. Training and Profiling a Pediatric Facial Expression Classifier for Children on Mobile Devices: Machine Learning Study. JMIR Form Res 2023; 7:e39917. [PMID: 35962462] [PMCID: PMC10131663] [DOI: 10.2196/39917]
Abstract
BACKGROUND Implementing automated facial expression recognition on mobile devices could provide an accessible diagnostic and therapeutic tool for those who struggle to recognize facial expressions, including children with developmental behavioral conditions such as autism. Despite recent advances in facial expression classifiers for children, existing models are too computationally expensive for smartphone use. OBJECTIVE We explored several state-of-the-art facial expression classifiers designed for mobile devices, used posttraining optimization techniques to balance classification performance and efficiency on a Motorola Moto G6 phone, evaluated the importance of training our classifiers on children versus adults, and evaluated the models' performance against different ethnic groups. METHODS We collected images from 12 public data sets and used video frames crowdsourced from the GuessWhat app to train our classifiers. All images were annotated for 7 expressions: neutral, fear, happiness, sadness, surprise, anger, and disgust. We tested 3 copies of each of 5 convolutional neural network architectures: MobileNetV3-Small 1.0x, MobileNetV2 1.0x, EfficientNetB0, MobileNetV3-Large 1.0x, and NASNetMobile. We trained the first copy on images of children, the second copy on images of adults, and the third copy on all data sets. We evaluated each model against the entire Child Affective Facial Expression (CAFE) set and by ethnicity. We performed weight pruning, weight clustering, and quantize-aware training when possible and profiled each model's performance on the Moto G6. RESULTS Our best model, a MobileNetV3-Large network pretrained on ImageNet, achieved 65.78% accuracy and a 65.31% F1-score on the CAFE and a 90-millisecond inference latency on a Moto G6 phone when trained on all data. This accuracy is only 1.12% below the current state of the art for CAFE, a model with 13.91x more parameters that was unable to run on the Moto G6 due to its size, even when fully optimized. When trained solely on children, this model achieved 60.57% accuracy and a 60.29% F1-score. When trained only on adults, the model achieved 53.36% accuracy and a 53.10% F1-score. Although the MobileNetV3-Large trained on all data sets achieved nearly a 60% F1-score across all ethnicities, accuracy and F1-scores for South Asian and African American children were lower than for other groups (by as much as 11.56% and 11.25%, respectively). CONCLUSIONS With specialized design and optimization techniques, facial expression classifiers can become lightweight enough to run on mobile devices and achieve state-of-the-art performance. There appears to be a "data shift" between the facial expressions of children and adults; our classifiers performed much better when trained on children. Models also performed significantly worse for certain underrepresented ethnic groups (e.g., South Asian and African American) than for groups such as European Caucasian, despite similar data quality. Our models can be integrated into mobile health therapies to help diagnose autism spectrum disorder and provide targeted therapeutic treatment to children.
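As a concrete illustration of the kind of mobile-oriented pipeline described above, the sketch below assembles a 7-class MobileNetV3-Large classifier pretrained on ImageNet and converts it to a quantized TensorFlow Lite model for on-device inference. This is a generic sketch of the approach, not the authors' code; the head architecture and the choice of simple post-training quantization are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 7  # neutral, fear, happiness, sadness, surprise, anger, disgust

# MobileNetV3-Large backbone pretrained on ImageNet, with a small FER head.
base = tf.keras.applications.MobileNetV3Large(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # your data here

# Post-training quantization shrinks the model for phone deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()
with open("fer_mobilenetv3.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The weight pruning and clustering mentioned in the abstract live in the separate tensorflow_model_optimization package and would be applied before conversion; the post-training quantization shown here is the simplest variant of the optimization step.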
Affiliation(s)
- Agnik Banerjee
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Onur Cezmi Mutlu
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States
- Aaron Kline
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Saimourya Surabhi
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Peter Washington
- Department of Information and Computer Sciences, University of Hawai`i at Mānoa, Honolulu, HI, United States
- Dennis Paul Wall
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
25
Li MG, Olsen KN, Davidson JW, Thompson WF. Rich Intercultural Music Engagement Enhances Cultural Understanding: The Impact of Learning a Musical Instrument Outside of One's Lived Experience. Int J Environ Res Public Health 2023; 20:1919. [PMID: 36767286] [PMCID: PMC9914662] [DOI: 10.3390/ijerph20031919]
Abstract
Rich intercultural music engagement (RIME) is an embodied form of engagement whereby individuals immerse themselves in foreign musical practice, for example, by learning a traditional instrument from that culture. The present investigation evaluated whether RIME with Chinese or Middle Eastern music can nurture intercultural understanding. White Australian participants were randomly assigned to one of two plucked-string groups: Chinese pipa (n = 29) or Middle Eastern oud (n = 29). Before and after the RIME intervention, participants completed measures of ethnocultural empathy, tolerance, social connectedness, explicit and implicit attitudes towards ethnocultural groups, and open-ended questions about their experience. Following RIME, White Australian participants reported a significant increase in ethnocultural empathy, tolerance, feelings of social connection, and improved explicit and implicit attitudes towards Chinese and Middle Eastern people. However, these benefits differed between groups. Participants who learned Chinese pipa reported reduced bias and increased social connectedness towards Chinese people, but not towards Middle Eastern people. Conversely, participants who learned Middle Eastern oud reported a significant increase in social connectedness towards Middle Eastern people, but not towards Chinese people. This is the first experimental evidence that participatory RIME is an effective tool for understanding a culture other than one's own, with the added potential to reduce cultural bias.
Affiliation(s)
- Marjorie G. Li
- School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Kirk N. Olsen
- School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Australian Institute of Health Innovation, Macquarie University, Macquarie Park, NSW 2109, Australia
- Jane W. Davidson
- Faculty of Fine Arts and Music, University of Melbourne, Southbank, VIC 3006, Australia
- William Forde Thompson
- School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Faculty of Society and Design, Bond University, Robina, QLD 4226, Australia
26
Heydari F, Sheybani S, Yoonessi A. Iranian emotional face database: Acquisition and validation of a stimulus set of basic facial expressions. Behav Res Methods 2023; 55:143-150. [PMID: 35297015] [DOI: 10.3758/s13428-022-01812-9]
Abstract
Facial expressions play an essential role in social interactions. Databases of face images have informed theories of emotion perception and also have applications in other disciplines, such as facial recognition technology. However, many ethnicities remain largely underrepresented in existing face databases, which can limit the generalizability of the theories and technologies built on them. Here, we present the first survey-validated database of Iranian faces. It consists of 248 images from 40 Iranian individuals portraying six emotional expressions (anger, sadness, fear, disgust, happiness, and surprise) as well as the neutral state. The photos were taken in a studio setting, following common emotion-induction scenarios and controlling for lighting, camera setup, and the model's head posture. An evaluation survey confirmed high agreement between the models' intended expressions and the raters' perception of them. The database is freely available online for academic research purposes.
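Validation of this kind typically reduces to an agreement matrix between intended and perceived expressions, with per-expression hit rates on the diagonal. A minimal sketch of that computation (hypothetical data layout, not the authors' survey pipeline):

```python
import pandas as pd

# Each row is one rating; column names and values are toy placeholders.
ratings = pd.DataFrame({
    "intended":  ["anger", "anger", "happiness", "happiness", "fear"],
    "perceived": ["anger", "disgust", "happiness", "happiness", "fear"],
})

# Row-normalized confusion matrix: entry (i, p) is the share of ratings of
# images intended as emotion i that raters labeled as emotion p.
confusion = pd.crosstab(ratings["intended"], ratings["perceived"],
                        normalize="index")

# Per-expression hit rate = the diagonal of the confusion matrix.
hit_rate = {e: confusion.loc[e, e] for e in confusion.index
            if e in confusion.columns}
print(confusion.round(2))
print(hit_rate)
```

High diagonal values are what "high agreement between intended expressions and raters' perception" amounts to numerically.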
Affiliation(s)
- Faeze Heydari
- Institute for Cognitive Science Studies, Tehran, Iran.
- Ali Yoonessi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
27
Fabrício DDM, Ferreira BLC, Maximiano-Barreto MA, Muniz M, Chagas MHN. Construction of face databases for tasks to recognize facial expressions of basic emotions: a systematic review. Dement Neuropsychol 2022; 16:388-410. [DOI: 10.1590/1980-5764-dn-2022-0039]
Abstract
Recognizing others' emotions is an important skill for social contexts, one that can be modulated by variables such as gender, age, and race. A number of studies have sought to develop specific face databases for assessing the recognition of basic emotions in different contexts. Objectives: This systematic review gathered these studies, describing and comparing the methodologies used in their elaboration. Methods: The databases used to select the articles were PubMed, Web of Science, PsycInfo, and Scopus, searched with the following combination of terms: "Facial expression database OR Stimulus set AND development OR Validation." Results: A total of 36 articles were included. Most of the studies used actors who expressed emotions elicited by specific situations, so as to produce expressions as spontaneous as possible. The databases were mainly composed of colorful and static stimuli. In addition, most of the studies sought to establish and describe recording standards for the stimuli, such as the color of the garments worn and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to methodological heterogeneity among the studies. Nevertheless, we describe their common patterns, contributing to the planning of new research studies that seek to create databases for new contexts.
Affiliation(s)
- Monalisa Muniz
- Universidade Federal de São Carlos, Brazil
- Marcos Hortes Nisihara Chagas
- Universidade Federal de São Carlos, Brazil; Universidade de São Paulo, Brazil; Instituto Bairral de Psiquiatria, Brazil
28
Li S, Guo L, Liu J. Towards East Asian Facial Expression Recognition in the Real World: A New Database and Deep Recognition Baseline. Sensors (Basel) 2022; 22:8089. [PMID: 36365786] [PMCID: PMC9658752] [DOI: 10.3390/s22218089]
Abstract
In recent years, the focus of facial expression recognition (FER) has gradually shifted from laboratory settings to challenging natural scenes, which requires a great deal of real-world facial expression data. However, most existing real-world databases are based on European-American cultures, and only one covers Asian cultures, mainly because data on European-American expressions are more readily accessed and publicly available online. Owing to the volume and diversity of such data, FER for European-American cultures has recently developed rapidly; in contrast, the development of FER for Asian cultures is limited by the available data. To narrow this gap, we construct a challenging real-world East Asian facial expression (EAFE) database, which contains 10,000 images collected from 113 Chinese, Japanese, and Korean movies and five search engines. We apply three neural network baselines, VGG-16, ResNet-50, and Inception-V3, to classify the images in EAFE, and conduct two sets of experiments to find the optimal learning rate schedule and loss function. By training with the cosine learning rate schedule and island loss, ResNet-50 achieved the best accuracy of 80.53% on the testing set, demonstrating that the database is challenging. In addition, we used the Microsoft Cognitive Face API to extract facial attributes in EAFE, so the database can also be used for facial recognition and attribute analysis. The release of EAFE can encourage more research on Asian FER in natural scenes and promote the development of FER in cross-cultural domains.
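The cosine learning rate schedule mentioned above decays the learning rate along a half cosine from its initial value towards (near) zero over training. A minimal sketch of the standard formula, with arbitrary placeholder values for the initial rate and epoch count:

```python
import math

def cosine_lr(epoch: int, total_epochs: int,
              lr_max: float = 0.1, lr_min: float = 0.0) -> float:
    """Cosine annealing: lr_min + 0.5*(lr_max - lr_min)*(1 + cos(pi*t/T))."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1 + math.cos(math.pi * epoch / total_epochs))

# The rate falls slowly at first, fastest mid-training, then flattens out.
for epoch in (0, 25, 50, 75, 100):
    print(epoch, round(cosine_lr(epoch, total_epochs=100), 4))
# 0 -> 0.1, 25 -> ~0.0854, 50 -> 0.05, 75 -> ~0.0146, 100 -> 0.0
```

The smooth, non-staircase decay is the usual motivation for preferring this schedule over step-wise drops when fine-tuning deep baselines such as ResNet-50.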
Affiliation(s)
- Shanshan Li
- School of Mathematics and Statistics, Shandong University, Weihai 264209, China
- Liang Guo
- School of Mathematics and Statistics, Shandong University, Weihai 264209, China
- Jianya Liu
- Data Science Institute, Shandong University, Jinan 250100, China
29
Bylianto LO, Chan KQ. Face masks inhibit facial cues for approachability and trustworthiness: an eyetracking study. Curr Psychol 2022; 42:1-12. [PMID: 36217421] [PMCID: PMC9535231] [DOI: 10.1007/s12144-022-03705-8]
Abstract
Wearing face masks during the Covid-19 pandemic has undeniable benefits from a public health perspective, but its interpersonal costs for social interaction may have been underappreciated. Because masks obscure critical facial regions that signal approach/avoidance intent and social trust, inferences of approachability and trustworthiness from faces may be severely discounted. Here, in our eyetracking experiment, we show that people judged masked faces as less approachable and trustworthy. Further analyses showed that the attention directed towards the eye region relative to the mouth region mediated the effect on approachability, but not on trustworthiness. For masked faces, with the mouth region obscured, visual attention is automatically diverted away from the mouth and towards the eye region, which is an undiagnostic cue for judging a target's approachability. Together, these findings indicate that mask-wearing inhibits the critical facial cues needed for social judgements. Supplementary Information: The online version contains supplementary material available at 10.1007/s12144-022-03705-8.
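In designs like this, the mediator is typically a relative gaze measure, e.g., the share of dwell time on the eye region versus the mouth region, tested with regression-based mediation. A minimal sketch of that logic on synthetic data (variable names and effect sizes are invented for illustration; this is not the authors' analysis):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
masked = rng.integers(0, 2, n)          # 0 = unmasked face, 1 = masked face
# Mediator: dwell(eyes) / (dwell(eyes) + dwell(mouth)); masks push it upward.
eye_share = 0.5 + 0.2 * masked + rng.normal(0, 0.1, n)
# Outcome: approachability rating, lowered directly and via the mediator.
approach = 5.0 - 0.5 * masked - 1.5 * eye_share + rng.normal(0, 1, n)

# Path a: mask -> mediator.
path_a = sm.OLS(eye_share, sm.add_constant(masked)).fit()
# Paths b and c': mediator and mask -> outcome, entered together.
X = sm.add_constant(np.column_stack([masked, eye_share]))
path_bc = sm.OLS(approach, X).fit()

a, b = path_a.params[1], path_bc.params[2]
print("indirect (mediated) effect a*b =", round(a * b, 3))
```

A non-zero a*b product (usually bootstrapped for a confidence interval) is what licenses the claim that gaze reallocation carries part of the mask effect on approachability.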
30
Saurav S, Saini R, Singh S. Fast facial expression recognition using Boosted Histogram of Oriented Gradient (BHOG) features. Pattern Anal Appl 2022. [DOI: 10.1007/s10044-022-01112-0]
31
Sleiman E, Mutlu OC, Surabhi S, Husic A, Kline A, Washington P, Wall DP. Deep Learning-Based Autism Spectrum Disorder Detection Using Emotion Features From Video Recordings (Preprint). JMIR Biomedical Engineering 2022. [DOI: 10.2196/39982]
32
Miyazaki Y, Kamatani M, Suda T, Wakasugi K, Matsunaga K, Kawahara JI. Effects of wearing a transparent face mask on perception of facial expressions. Iperception 2022; 13:20416695221105910. [PMID: 35782828] [PMCID: PMC9243485] [DOI: 10.1177/20416695221105910]
Abstract
Wearing face masks in public has become the norm in many countries post-2020. Although mask-wearing is effective in controlling infection, it has the negative side effect of occluding the mask wearer's facial expressions. The purpose of this study was to investigate the effects of wearing transparent masks on the perception of facial expressions. Participants were required to categorize the perceived facial emotion of female (Experiment 1) and male (Experiment 2) faces with different facial expressions and to rate the perceived emotional intensity of the faces. Depending on the group to which participants were assigned, the faces were presented with a surgical mask, with a transparent mask, or without a mask. The results showed that wearing a surgical mask impaired the reading of facial expressions, with respect to both recognition and perceived intensity of facial emotions. Specifically, the impairments were robustly observed for fearful and happy faces in emotion recognition, and for happy faces in perceived emotional intensity, in Experiments 1 and 2. However, the impairments were attenuated when a transparent mask was worn instead of a surgical mask. During the coronavirus disease 2019 (COVID-19) pandemic, the transparent mask can therefore be used in a range of situations where face-to-face communication is important.
Affiliation(s)
- Yuki Miyazaki
- Department of Psychology, Fukuyama University, Fukuyama, Japan
- Miki Kamatani
- Graduate School of Letters, Hokkaido University, Sapporo, Japan
- Kaori Matsunaga
- Global Research & Development Division, Unicharm Corporation, Kanonji, Japan
- Jun I. Kawahara
- Graduate School of Letters, Hokkaido University, Sapporo, Japan
33
Yang T, Zhang L, Xu G, Yang Z, Luo Y, Li Z, Zhong K, Shi B, Zhao L, Sun P. Investigating taste sensitivity, chemesthetic sensation and their relationship with emotion perception in Chinese young and older adults. Food Qual Prefer 2022. [DOI: 10.1016/j.foodqual.2021.104406]
34
Vorontsova T. The Attitude towards a Stranger and Assessment of his Age based on a Photo Image of a Face Transformed in the FaceApp Application. Experimental Psychology (Russia) 2022. [DOI: 10.17759/exppsy.2022150303]
Abstract
The study tested the hypothesis that perceivers' attitudes towards a perceived person ("model") differ significantly depending on the model's conditional age stage, as conveyed by age-related changes in appearance. Methods: 1) the "Photo-video presentation of appearance" procedure by T.A. Vorontsova (a set of 36 photos transformed in the FaceApp application); 2) the "Methodology for the study of conscious personal relationships to each member of the group and to oneself" by T.A. Vorontsova. Sample: 178 women and 156 men aged 21 to 60 years (M=37.24; SD=10.46). Results: 1) the attitudes of perceivers towards the models changed significantly with the conditional age stage associated with changes in appearance: antipathy increased in 64% of observations and decreased in 36%; disrespect increased in 25% of observations and decreased in 75%; distance increased or decreased in equal shares (50%); 2) gender differences were found in the dynamics of these attitudes: respect for men increased, in contrast to the multidirectional dynamics of respect for women. The recorded dynamics reveal both benevolent ageism (increased respect) and hostile ageism (increased antipathy) towards older people with obvious age-related changes in appearance. The data obtained from this Russian sample also confirm the age stereotype that "a woman grows old while a man matures". The findings are discussed in connection with age stigma, the influence of additional factors, and the possibilities of using FaceApp in scientific research.
35
Zhang S, Liu X, Yang X, Shu Y, Liu N, Zhang D, Liu YJ. The Influence of Key Facial Features on Recognition of Emotion in Cartoon Faces. Front Psychol 2021; 12:687974. [PMID: 34447333] [PMCID: PMC8382696] [DOI: 10.3389/fpsyg.2021.687974]
Abstract
Cartoon faces are widely used in social media, animation production, and social robots because of their ability to convey different emotional information in an appealing way. Despite these popular applications, the mechanisms of recognizing emotional expressions in cartoon faces remain unclear. Therefore, three experiments were conducted in this study to systematically explore the recognition process for emotional cartoon expressions (happy, sad, and neutral) and to examine the influence of key facial features (mouth, eyes, and eyebrows) on emotion recognition. Across the experiments, three presentation conditions were employed: (1) the full face; (2) a single feature only (with the two other features concealed); and (3) one feature concealed with the two other features presented. The cartoon face images used in this study were converted from a set of real faces acted by Chinese posers, and the observers were Chinese. The results show that happy cartoon expressions were recognized more accurately than neutral and sad expressions, consistent with the happiness recognition advantage revealed in studies of real faces. Compared with real facial expressions, sad cartoon expressions were perceived as sadder, and happy cartoon expressions as less happy, regardless of whether the full face or single facial features were viewed. For cartoon faces, the mouth was demonstrated to be a feature both sufficient and necessary for the recognition of happiness, and the eyebrows both sufficient and necessary for the recognition of sadness. This study helps to clarify the perceptual mechanism underlying emotion recognition in cartoon faces and sheds some light on directions for future research on intelligent human-computer interactions.
Affiliation(s)
- Shu Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Beijing, China
- Xinge Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Beijing, China
- Xuan Yang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Beijing, China
- Yezhi Shu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Niqi Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Dan Zhang
- Department of Psychology, Tsinghua University, Beijing, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China
- Yong-Jin Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Beijing, China
- Key Laboratory of Pervasive Computing, Ministry of Education, Beijing, China
36
Mohan SN, Mukhtar F, Jobson L. An Exploratory Study on Cross-Cultural Differences in Facial Emotion Recognition Between Adults From Malaysia and Australia. Front Psychiatry 2021; 12:622077. [PMID: 34177636] [PMCID: PMC8219914] [DOI: 10.3389/fpsyt.2021.622077]
Abstract
While culture and depression influence the way in which humans process emotion, these two areas of investigation are rarely combined. Therefore, the aim of this study was to investigate differences in facial emotion recognition among Malaysian Malays and Australians of European heritage, with and without depression. A total of 88 participants took part in this study (Malays n = 47, Australians n = 41). All participants were screened using the Structured Clinical Interview for DSM-5 Clinician Version (SCID-5-CV) to assess Major Depressive Disorder (MDD) diagnoses, and they also completed the Beck Depression Inventory (BDI). The study used a facial emotion recognition (FER) task in which participants viewed facial images and determined the emotion depicted by each facial expression. Depression status and cultural group did not significantly influence overall FER accuracy. Malaysian participants without MDD and Australian participants with MDD responded more quickly on the FER task than Australian participants without MDD. In addition, Malaysian participants recognized fear more accurately than Australian participants. Future studies could examine further the extent to which culture and participant condition influence facial emotion recognition.
Affiliation(s)
- Sindhu Nair Mohan
- Department of Psychiatry, School of Medicine and Health Sciences, Universiti Putra Malaysia, Seri Kembangan, Malaysia
- Firdaus Mukhtar
- Department of Psychiatry, School of Medicine and Health Sciences, Universiti Putra Malaysia, Seri Kembangan, Malaysia
- Laura Jobson
- School of Psychological Sciences, Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia