1. Mulder RH, Bakermans-Kranenburg MJ, Veenstra J, Tiemeier H, van IJzendoorn MH. Facing ostracism: micro-coding facial expressions in the Cyberball social exclusion paradigm. BMC Psychol 2023; 11:185. PMID: 37337264. DOI: 10.1186/s40359-023-01219-x.
Abstract
BACKGROUND Social exclusion is often measured with the Cyberball paradigm, a computerized ball-tossing game. Most Cyberball studies, however, used self-report questionnaires, leaving the data vulnerable to reporter bias, and associations with individual characteristics have been inconsistent. METHODS In this large-scale observational study, we video-recorded 4,813 10-year-old children during Cyberball and developed a real-time micro-coding method measuring facial expressions of anger, sadness and contempt in a multi-ethnic population-based sample. We estimated associations between facial expressions and self-reported negative feelings, explored associations of child characteristics such as sex and parental national origin with observed and self-reported feelings during social exclusion, and tested associations of observed and self-reported feelings during social exclusion with behavior problems at age 14. RESULTS Facial expressions of sadness and anger were associated with self-reported negative feelings during the game, but not with such feelings after the game. Further, girls reported having had fewer negative feelings during the game than boys, but no such sex differences were found in total observed emotions. Likewise, children with parents of Moroccan origin reported fewer negative feelings during the game than Dutch children, but their facial expressions did not indicate that they were differently affected. Finally, observed emotions related negatively to later internalizing problems, whereas self-reported negative feelings during the game related positively to later internalizing and externalizing problems. CONCLUSIONS We show that facial expressions are associated with self-reported negative feelings during social exclusion, discuss how reporter bias might be minimized by using facial expressions, and find divergent associations of observed facial expressions and self-reported negative feelings with later internalizing problems.
Affiliation(s)
- Rosa H Mulder
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Center Rotterdam, Rotterdam, 3000 CB, the Netherlands.
- Generation R Study Group, Erasmus MC, University Medical Center Rotterdam, Rotterdam, the Netherlands.
- Institute of Education and Child Studies, Leiden University, Leiden, the Netherlands.
- Johan Veenstra
- Institute of Education and Child Studies, Leiden University, Leiden, the Netherlands
- Henning Tiemeier
- Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC, University Medical Center Rotterdam, Rotterdam, 3000 CB, the Netherlands
- Department of Social and Behavioral Science, Harvard T.H. Chan School of Public Health, Boston, USA
- Marinus H van IJzendoorn
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Research Department of Clinical, Education and Health Psychology, Faculty of Brain Sciences, UCL, University of London, London, UK
2. Klingner CM, Guntinas-Lichius O. Facial expression and emotion. Laryngorhinootologie 2023; 102:S115-S125. PMID: 37130535. PMCID: PMC10171334. DOI: 10.1055/a-2003-5687.
Abstract
Human facial expressions are unique in their ability to express our emotions and communicate them to others. The facial expression of basic emotions is very similar across different cultures and also has many features in common with other mammals, suggesting a common genetic origin of the association between facial expressions and emotion. However, recent studies also show cultural influences and differences. The recognition of emotions from facial expressions, as well as the process of expressing one's emotions facially, occurs within an extremely complex cerebral network. Because of the complexity of this cerebral processing system, a variety of neurological and psychiatric disorders can significantly disrupt the coupling of facial expressions and emotions. Wearing masks also limits our ability to convey and recognize emotions through facial expressions. Facial expressions, however, can convey not only "real" emotions but also acted ones, opening up the possibility of faking socially desired expressions and of consciously simulating emotions. These pretenses are usually imperfect and can be accompanied by brief facial movements that reveal the emotions actually present (microexpressions). Microexpressions are of very short duration and often barely perceptible to humans, which makes them an ideal application area for computer-aided analysis. The automatic identification of microexpressions has not only received scientific attention in recent years but is also being tested in security-related settings. This article summarizes the current state of knowledge on facial expressions and emotions.
Affiliation(s)
- Carsten M Klingner
- Hans Berger Department of Neurology, Jena University Hospital, Germany
- Biomagnetic Center, Jena University Hospital, Germany
3. Büdenbender B, Höfling TTA, Gerdes ABM, Alpers GW. Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions' prototypicality. PLoS One 2023; 18:e0281309. PMID: 36763694. PMCID: PMC9916590. DOI: 10.1371/journal.pone.0281309.
Abstract
Automatic facial coding (AFC) is a promising new research tool for efficiently analyzing emotional facial expressions. AFC is based on machine learning procedures that infer emotion categories from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, but it is less accurate for non-prototypical and less intense ones. A potential reason is that AFC is typically trained with standardized, prototypical facial expression inventories. Because AFC would also be useful for analyzing less prototypical research material, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting, and tested them on identical or cross-over material. All machine learning models achieved comparable accuracies when trained and tested on held-out data from the same dataset (83.4% to 92.5%). Strikingly, accuracies dropped substantially for models trained on the highly prototypical standardized dataset when tested on the unstandardized dataset (52.8% to 69.8%). However, when models were trained on unstandardized expressions and tested on the standardized dataset, accuracies held up (82.7% to 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research and even naturalistic scenarios, future developments should include more naturalistic facial expressions in training. This will improve the generalizability of AFC to more naturalistic facial expressions and increase the robustness of future applications of this promising technology.
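The within- versus cross-dataset comparison described above can be reproduced in outline with a few lines of scikit-learn. The sketch below is illustrative only: the feature matrices, label vectors, and the SVM classifier are stand-ins, not the Action-Unit features or models the authors actually used.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-ins for Action-Unit feature matrices from two corpora:
# A = standardized/prototypical inventory, B = unstandardized lab recordings.
X_a, y_a = rng.normal(size=(400, 17)), rng.integers(0, 6, 400)
X_b, y_b = rng.normal(size=(400, 17)), rng.integers(0, 6, 400)

# Train on dataset A, evaluate on a held-out split of A and on all of B.
X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

print("within-dataset accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("cross-dataset accuracy: ", accuracy_score(y_b, clf.predict(X_b)))
```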
Affiliation(s)
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Tim T. A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
4. A Survey of Micro-expression Recognition Methods Based on LBP, Optical Flow and Deep Learning. Neural Process Lett 2023. DOI: 10.1007/s11063-022-11123-x.
5. Concordance between Facial Micro-expressions and Physiological Signals under Emotion Elicitation. Pattern Recognit Lett 2022. DOI: 10.1016/j.patrec.2022.11.001.
6. Ben X, Ren Y, Zhang J, Wang SJ, Kpalma K, Meng W, Liu YJ. Video-Based Facial Micro-Expression Analysis: A Survey of Datasets, Features and Algorithms. IEEE Trans Pattern Anal Mach Intell 2022; 44:5826-5846. PMID: 33739920. DOI: 10.1109/tpami.2021.3067464.
Abstract
Unlike conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide. They can therefore provide important information in a broad range of applications such as lie detection and criminal detection. Because micro-expressions are transient and of low intensity, however, their detection and recognition are difficult and rely heavily on expert experience. Due to its intrinsic particularity and complexity, video-based micro-expression analysis is attractive but challenging, and has recently become an active area of research. Although there have been numerous developments in this area, thus far there has been no comprehensive survey that provides researchers with a systematic overview of these developments together with a unified evaluation. Accordingly, in this survey paper, we first highlight the key differences between macro- and micro-expressions, then use these differences to guide our survey of video-based micro-expression analysis in a cascaded structure, encompassing the neuropsychological basis, datasets, features, spotting algorithms, recognition algorithms, applications and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments and major challenges are addressed and discussed. Furthermore, after considering the limitations of existing micro-expression datasets, we present and release a new dataset, the micro-and-macro expression warehouse (MMEW), containing more video samples and more labeled emotion types. We then perform a unified comparison of representative methods on CAS(ME)² for spotting, and on MMEW and SAMM for recognition. Finally, some potential future research directions are explored and outlined.
7. Gan Y, See J, Khor HQ, Liu KH, Liong ST. Needle in a Haystack: Spotting and recognising micro-expressions “in the wild”. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.06.101.
8. Saffaryazdi N, Wasim ST, Dileep K, Nia AF, Nanayakkara S, Broadbent E, Billinghurst M. Using Facial Micro-Expressions in Combination With EEG and Physiological Signals for Emotion Recognition. Front Psychol 2022; 13:864047. PMID: 35837650. PMCID: PMC9275379. DOI: 10.3389/fpsyg.2022.864047.
Abstract
Emotions are multimodal processes that play a crucial role in our everyday lives. Recognizing emotions is becoming more critical in a wide range of application domains such as healthcare, education, human-computer interaction, Virtual Reality, intelligent agents, entertainment, and more. Facial macro-expressions or intense facial expressions are the most common modalities in recognizing emotional states. However, since facial expressions can be voluntarily controlled, they may not accurately represent emotional states. Earlier studies have shown that facial micro-expressions are more reliable than facial macro-expressions for revealing emotions. They are subtle, involuntary movements responding to external stimuli that cannot be controlled. This paper proposes using facial micro-expressions combined with brain and physiological signals to more reliably detect underlying emotions. We describe our models for measuring arousal and valence levels from a combination of facial micro-expressions, Electroencephalography (EEG) signals, galvanic skin responses (GSR), and Photoplethysmography (PPG) signals. We then evaluate our model using the DEAP dataset and our own dataset based on a subject-independent approach. Lastly, we discuss our results, the limitations of our work, and how these limitations could be overcome. We also discuss future directions for using facial micro-expressions and physiological signals in emotion recognition.
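As a rough illustration of the subject-independent multimodal approach described above, the sketch below concatenates per-trial features from the four modalities and evaluates a classifier with leave-one-subject-out cross-validation. All arrays, feature dimensions, and the random-forest classifier are placeholders for illustration, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
n_trials, n_subjects = 240, 12
# Placeholder per-trial feature blocks for each modality.
face = rng.normal(size=(n_trials, 32))   # micro-expression features
eeg = rng.normal(size=(n_trials, 64))    # EEG band-power features
gsr = rng.normal(size=(n_trials, 8))     # galvanic skin response features
ppg = rng.normal(size=(n_trials, 8))     # photoplethysmography features

X = np.hstack([face, eeg, gsr, ppg])              # feature-level fusion
y = rng.integers(0, 2, n_trials)                  # e.g., low vs. high arousal
groups = rng.integers(0, n_subjects, n_trials)    # subject ID per trial

scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X, y, cv=LeaveOneGroupOut(), groups=groups)
print("subject-independent accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```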
Affiliation(s)
- Nastaran Saffaryazdi
- Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Syed Talal Wasim
- Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Kuldeep Dileep
- Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Alireza Farrokhi Nia
- Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Suranga Nanayakkara
- Augmented Human Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Elizabeth Broadbent
- Department of Psychological Medicine, The University of Auckland, Auckland, New Zealand
- Mark Billinghurst
- Empathic Computing Laboratory, Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
9. PERSIST: Improving micro-expression spotting using better feature encodings and multi-scale Gaussian TCN. Appl Intell 2022. DOI: 10.1007/s10489-022-03553-w.
10. Deep learning-based microexpression recognition: a survey. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07157-w.
11. Driver Emotions Recognition Based on Improved Faster R-CNN and Neural Architectural Search Network. Symmetry (Basel) 2022. DOI: 10.3390/sym14040687.
Abstract
It is critical for intelligent vehicles, and especially for autonomous vehicles, to continuously monitor the health and well-being of the drivers they transport. To address this issue, an automatic deep learning-based system for recognizing a driver's real emotions (DRER) is developed. Drawing on existing research on the design of driver facial expressions for intelligent products, the emotional values of drivers in the vehicle cabin are symmetrically mapped to image designs in order to investigate the characteristics of abstract expressions and expression design principles, and an experimental evaluation is conducted. An improved Faster R-CNN face detector, built by replacing the base 11-layer CNN with a custom feature-learning block, detects the driver's face at a high frame rate (FPS). Transfer learning is then performed on the NasNet-Large CNN model to recognize the driver's various emotions, and a custom driver emotion recognition image dataset is developed as part of this work. The proposed model, which combines the improved Faster R-CNN with transfer learning on the NasNet-Large architecture for facial-image-based driver emotion recognition, achieves greater accuracy than previously possible and outperforms several recently updated state-of-the-art techniques. It achieved the following accuracies on benchmark datasets: JAFFE 98.48%, CK+ 99.73%, FER-2013 99.95%, AffectNet 95.28%, and 99.15% on the custom-developed dataset.
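The transfer-learning stage described above can be outlined with Keras. This is only a minimal sketch: the frozen ImageNet backbone, the seven-class head, and the training settings are assumptions for illustration, not the configuration reported by the authors.

```python
import tensorflow as tf

# Pretrained NASNet-Large backbone with the classification head removed.
base = tf.keras.applications.NASNetLarge(include_top=False, weights="imagenet",
                                         input_shape=(331, 331, 3), pooling="avg")
base.trainable = False  # freeze ImageNet weights; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),  # e.g., 7 emotion classes (assumed)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(face_crops, labels, epochs=..., validation_split=0.1)
# where face_crops would be the detector's cropped face images (hypothetical inputs).
```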
12. Yap CH, Cunningham R, Davison AK, Yap MH. Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer. J Imaging 2021; 7:142. PMID: 34460778. PMCID: PMC8404916. DOI: 10.3390/jimaging7080142.
Abstract
Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. There are few methods for generating long videos that contain micro-expressions, and there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generating synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by analyzing the facial action units detected by OpenFace. Quantitatively, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original and transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool for micro-expression research, especially for the spotting task.
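The per-AU agreement reported above amounts to a Pearson correlation between two intensity time series. A minimal, self-contained sketch of that comparison is given below; the synthetic series and the OpenFace-style column name mentioned in the comment are assumptions, not data shipped with the paper.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
# In practice these series would come from OpenFace output columns such as "AU12_r"
# for an original clip and its SAMM-SYNTH counterpart (column name assumed).
au_orig = rng.random(500)                          # placeholder AU intensity per frame
au_synth = 0.8 * au_orig + 0.2 * rng.random(500)   # loosely coupled synthetic version

r, p = pearsonr(au_orig, au_synth)
print(f"AU agreement: Pearson r = {r:.2f} (p = {p:.3g})")
```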
Affiliation(s)
- Chuin Hong Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M15 6BH, UK
- Ryan Cunningham
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M15 6BH, UK
- Adrian K. Davison
- Faculty of Biology, Medicine and Health, The University of Manchester, Manchester M13 9PL, UK
- Moi Hoon Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, Manchester M15 6BH, UK
15. Das A, Mock J, Huang Y, Golob E, Najafirad P. Interpretable Self-Supervised Facial Micro-Expression Learning to Predict Cognitive State and Neurological Disorders. Proceedings of the AAAI Conference on Artificial Intelligence 2021; 35:818-826. PMID: 34221694. PMCID: PMC8252663.
Abstract
Human behavior is the confluence of output from voluntary and involuntary motor systems. The neural activities that mediate behavior, from individual cells to distributed networks, are in a state of constant flux. Artificial intelligence (AI) research over the past decade shows that behavior, in the form of facial muscle activity, can reveal information about fleeting voluntary and involuntary motor system activity related to emotion, pain, and deception. However, AI algorithms often lack an explanation for their decisions, and learning meaningful representations requires large datasets labeled by a subject-matter expert. Motivated by the success of using facial muscle movements to classify brain states and by the importance of learning from small amounts of data, we propose an explainable self-supervised representation-learning paradigm that learns meaningful temporal facial muscle movement patterns from limited samples. We validate our methodology by carrying out a comprehensive empirical study to predict future speech behavior in a real-world dataset of adults who stutter (AWS). Our explainability study found that facial muscle movements around the eyes (p < 0.001) and lips (p < 0.001) differ significantly before producing fluent vs. disfluent speech. Evaluations on the AWS dataset demonstrate that the proposed self-supervised approach achieves a minimum of 2.51% accuracy improvement over fully supervised approaches.
Affiliation(s)
- Arun Das
- Secure AI and Autonomy Laboratory, University of Texas at San Antonio
- Jeffrey Mock
- Cognitive Neuroscience Laboratory, University of Texas at San Antonio
- Yufei Huang
- Secure AI and Autonomy Laboratory, University of Texas at San Antonio
- Edward Golob
- Cognitive Neuroscience Laboratory, University of Texas at San Antonio
- Peyman Najafirad
- Secure AI and Autonomy Laboratory, University of Texas at San Antonio
16. Balconi M, Fronda G. How to Induce and Recognize Facial Expression of Emotions by Using Past Emotional Memories: A Multimodal Neuroscientific Algorithm. Front Psychol 2021; 12:619590. PMID: 34040557. PMCID: PMC8141597. DOI: 10.3389/fpsyg.2021.619590.
Affiliation(s)
- Michela Balconi
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Catholic University of the Sacred Heart, Milan, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
- Giulia Fronda
- International Research Center for Cognitive Applied Neuroscience (IrcCAN), Catholic University of the Sacred Heart, Milan, Italy
- Research Unit in Affective and Social Neuroscience, Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
17. Review of Automatic Microexpression Recognition in the Past Decade. Machine Learning and Knowledge Extraction 2021. DOI: 10.3390/make3020021.
Abstract
Facial expressions provide important information about one's emotional state. Unlike regular facial expressions, microexpressions are particular kinds of small, quick facial movements, generally lasting only 0.05 to 0.2 s. They reflect individuals' subjective emotions and real psychological states more accurately than regular expressions, which can be acted. However, the small range and short duration of the facial movements involved make microexpressions challenging to recognize for humans and machines alike. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, amongst others. In addition, a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state of the art in automatic facial microexpression recognition. Specifically, the features and learning methods used in automatic microexpression recognition, the existing microexpression datasets, the major outstanding challenges, and possible future development directions are all discussed.
18. Li X, Fan F, Chen X, Li J, Ning L, Lin K, Chen Z, Qin Z, Yeung AS, Li X, Wang L, So KF. Computer Vision for Brain Disorders Based Primarily on Ocular Responses. Front Neurol 2021; 12:584270. PMID: 33967931. PMCID: PMC8096911. DOI: 10.3389/fneur.2021.584270.
Abstract
Real-time ocular responses are tightly associated with emotional and cognitive processing within the central nervous system. Patterns seen in saccades, pupillary responses, and spontaneous blinking, as well as retinal microvasculature and morphology visualized via office-based ophthalmic imaging, are potential biomarkers for the screening and evaluation of cognitive and psychiatric disorders. In this review, we outline multiple techniques in which ocular assessments may serve as a non-invasive approach for the early detection of various brain disorders, such as autism spectrum disorder (ASD), Alzheimer's disease (AD), schizophrenia (SZ), and major depressive disorder (MDD). In addition, rapid advances in artificial intelligence (AI) present a growing opportunity to use machine learning-based AI, especially computer vision (CV) with deep-learning neural networks, to shed new light on the field of cognitive neuroscience, which is likely to lead to novel evaluations of, and interventions for, brain disorders. Hence, we highlight the potential of using AI to evaluate brain disorders based primarily on ocular features.
Affiliation(s)
- Xiaotao Li
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States
- BIAI INC., Chelmsford, MA, United States
- BIAI Intelligence Biotech LLC, Shenzhen, China
- Fangfang Fan
- Department of Neurology, Harvard Medical School, Harvard University, Boston, MA, United States
- Xuejing Chen
- Retina Division, Department of Ophthalmology, Boston University Eye Associates, Boston University, Boston, MA, United States
- Juan Li
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- BIAI INC., Chelmsford, MA, United States
- BIAI Intelligence Biotech LLC, Shenzhen, China
- Li Ning
- Center for High Performance Computing, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Kangguang Lin
- Department of Affective Disorders and Academician Workstation of Mood and Brain Sciences, The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, China
- Guangdong-Hong Kong-Macau Institute of Central Nervous System (CNS) Regeneration, Jinan University, Guangzhou, China
- Zan Chen
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Zhenyun Qin
- Key Laboratory for Nonlinear Mathematical Models and Methods, School of Mathematical Science, Fudan University, Shanghai, China
- Albert S Yeung
- Depression Clinical and Research Program, Department of Psychiatry, Massachusetts General Hospital, Boston, MA, United States
- Xiaojian Li
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Liping Wang
- Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Kwok-Fai So
- Guangdong-Hong Kong-Macau Institute of Central Nervous System (CNS) Regeneration, Jinan University, Guangzhou, China
- The State Key Laboratory of Brain and Cognitive Sciences, Department of Ophthalmology, University of Hong Kong, Pok Fu Lam, Hong Kong
19. A Fast Preprocessing Method for Micro-Expression Spotting via Perceptual Detection of Frozen Frames. J Imaging 2021; 7:68. PMID: 34460518. PMCID: PMC8321339. DOI: 10.3390/jimaging7040068.
Abstract
This paper presents a preliminary study of a fast preprocessing method for facial microexpression (ME) spotting in video sequences. The rationale is to detect frames containing frozen expressions as a quick warning for the presence of MEs; such frames can precede or follow (or both) MEs, depending on the ME type and the subject's reaction. To that end, inspired by the Adelson-Bergen motion energy model and the instinctive nature of preattentive vision, global visual perception-based features were employed to detect frozen frames. Preliminary results on both controlled and uncontrolled videos confirmed that the proposed method correctly detects frozen frames and those revealing the presence of nearby MEs, independently of ME kind and facial region. This property can contribute to speeding up and simplifying the ME spotting process, especially during long video acquisitions.
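A crude stand-in for the idea of flagging low-motion ("frozen") frames is sketched below: each frame is scored by its mean squared difference from the previous frame, and frames whose motion energy falls below a threshold are flagged. The synthetic clip and the fixed threshold are illustrative assumptions; the paper's perceptual motion-energy features are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)
video = rng.random((100, 64, 64))        # placeholder grayscale clip: (frames, H, W)
video[40:55] = video[40]                 # simulate a run of frozen frames

# Per-frame motion energy: mean squared difference between consecutive frames.
energy = np.mean((video[1:] - video[:-1]) ** 2, axis=(1, 2))

threshold = 0.05 * energy.max()          # illustrative threshold, not the paper's
frozen = np.flatnonzero(energy < threshold) + 1
print("candidate frozen frames:", frozen)
```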
20. Oh G, Ryu J, Jeong E, Yang JH, Hwang S, Lee S, Lim S. DRER: Deep Learning-Based Driver's Real Emotion Recognizer. Sensors 2021; 21:2166. PMID: 33808922. PMCID: PMC8003797. DOI: 10.3390/s21062166.
Abstract
In intelligent vehicles, it is essential to monitor the driver's condition, and recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state; however, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose a deep learning-based driver's real emotion recognizer (DRER), an algorithm for recognizing drivers' real emotions that cannot be completely identified from their facial expressions. The proposed algorithm comprises two models: (i) a facial expression recognition model based on a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing electrical characteristics of the skin, to recognize the driver's real emotional state. We categorized the driver's emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared with using only facial expressions and a 146% increase compared with using only electrodermal activity. In conclusion, our proposed method achieves 86.8% recognition accuracy for the driver's induced emotions in driving situations.
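The two-stage sensor fusion described above can be outlined as a late-fusion pipeline: a face model produces class probabilities, which are then combined with electrodermal activity (EDA) features in a second classifier. The sketch below uses random placeholder data and a logistic-regression fusion stage; it illustrates the general pattern, not the authors' actual networks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 500
face_probs = rng.dirichlet(np.ones(4), size=n)   # stand-in for CNN softmax over 4 emotions
eda_feats = rng.normal(size=(n, 6))              # stand-in for EDA features (e.g., SCR statistics)
y = rng.integers(0, 4, n)                        # induced-emotion labels

X = np.hstack([face_probs, eda_feats])           # late fusion of the two streams
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

fusion = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("fused accuracy:", accuracy_score(y_te, fusion.predict(X_te)))
```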
Affiliation(s)
- Geesung Oh
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
- Junghwan Ryu
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
- Euiseok Jeong
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
- Ji Hyun Yang
- Department of Automobile and IT Convergence, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
- Sungwook Hwang
- Chassis System Control Research Lab, Hyundai Motor Group, Hwaseong 18280, Korea
- Sangho Lee
- Chassis System Control Research Lab, Hyundai Motor Group, Hwaseong 18280, Korea
- Sejoon Lim
- Department of Automobile and IT Convergence, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea
21. Spinazze P, Aardoom J, Chavannes N, Kasteleyn M. The Computer Will See You Now: Overcoming Barriers to Adoption of Computer-Assisted History Taking (CAHT) in Primary Care. J Med Internet Res 2021; 23:e19306. PMID: 33625360. PMCID: PMC7946588. DOI: 10.2196/19306.
Abstract
Patient health information is increasingly collected through multiple modalities, including electronic health records, wearables, and connected devices. Computer-assisted history taking could provide an additional channel to collect highly relevant, comprehensive, and accurate patient information while reducing the burden on clinicians and face-to-face consultation time. Considering restrictions to consultation time and the associated negative health outcomes, patient-provided health data outside of consultation can prove invaluable in health care delivery. Over the years, research has highlighted the numerous benefits of computer-assisted history taking; however, the limitations have proved an obstacle to adoption. In this viewpoint, we review these limitations under 4 main categories (accessibility, affordability, accuracy, and acceptability) and discuss how advances in technology, computing power, and ubiquity of personal devices offer solutions to overcoming these.
Affiliation(s)
- Pier Spinazze
- Global Digital Health Unit, Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, United Kingdom
- Jiska Aardoom
- Department of Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- Niels Chavannes
- Department of Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
- Marise Kasteleyn
- Department of Public Health and Primary Care, Leiden University Medical Center, Leiden, Netherlands
22. Nie X, Takalkar MA, Duan M, Zhang H, Xu M. GEME: Dual-stream multi-task GEnder-based micro-expression recognition. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.10.082.
23. Steinmair D, Löffler-Stastka H. The Emerging Role of Interdisciplinarity in Clinical Psychoanalysis. Front Psychol 2021; 12:659429. PMID: 34025523. PMCID: PMC8131672. DOI: 10.3389/fpsyg.2021.659429.
Abstract
Given the tight interconnections proposed between brain and psyche, psychoanalysis was conceptualized as an interdisciplinary theory from the very beginning. The diversification of knowledge across different science and technology fields concerned with the same matter (explaining mind and brain and connecting them) makes this interdisciplinarity even more visible. This challenges the integrative potential of psychoanalytic meta-theory.
Affiliation(s)
- Dagmar Steinmair
- Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Karl Landsteiner Private University for Health Sciences, Krems an der Donau, Austria
- Henriette Löffler-Stastka
- Department of Psychoanalysis and Psychotherapy, Medical University of Vienna, Vienna, Austria
24. Masson A, Cazenave G, Trombini J, Batt M. The current challenges of automatic recognition of facial expressions: A systematic review. AI Commun 2020. DOI: 10.3233/aic-200631.
Abstract
In recent years, due to its great economic and social potential, the recognition of facial expressions linked to emotions has become one of the most flourishing applications in the field of artificial intelligence, and has been the subject of many developments. However, despite significant progress, this field is still subject to many theoretical debates and technical challenges. It therefore seems important to make a general inventory of the different lines of research and to present a synthesis of recent results in this field. To this end, we have carried out a systematic review of the literature according to the guidelines of the PRISMA method. A search of 13 documentary databases identified a total of 220 references over the period 2014–2019. After a global presentation of the current systems and their performance, we grouped and analyzed the selected articles in the light of the main problems encountered in the field of automated facial expression recognition. The conclusion of this review highlights the strengths, limitations and main directions for future research in this field.
Affiliation(s)
- Audrey Masson
- Interpsy – GRC, University of Lorraine, France
- Two-I, France
- Martine Batt
- Interpsy – GRC, University of Lorraine, France
25. FACS-Based Graph Features for Real-Time Micro-Expression Recognition. J Imaging 2020; 6:130. PMID: 34460527. PMCID: PMC8321161. DOI: 10.3390/jimaging6120130.
Abstract
Several studies on micro-expression recognition have contributed mainly to improving accuracy. Computational complexity, however, has received comparatively little attention, even though it raises the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks using the Facial Action Coding System (FACS). The proposed feature extraction technique (FACS-based graph features) uses the facial landmark points to build a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph are used as features. Moreover, the proposed technique performs ME recognition from a single input frame per sample. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 using leave-one-subject-out cross-validation on the SAMM dataset. In addition, the proposed technique computes features in 2 ms per sample on a Xeon E5-2650 processor.
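The distance-and-gradient idea is simple enough to illustrate directly: given 68 (x, y) landmark coordinates, take the segments of an AU graph and compute each segment's length and orientation. In the sketch below the landmark index pairs are made-up examples, not the AU graphs defined in the paper.

```python
import numpy as np

def au_graph_features(landmarks, segments):
    """Distance and gradient (orientation) for each landmark-pair segment.

    landmarks: (68, 2) array of (x, y) facial landmark coordinates.
    segments:  list of (i, j) landmark index pairs forming one AU graph.
    """
    feats = []
    for i, j in segments:
        dx, dy = landmarks[j] - landmarks[i]
        feats.append(np.hypot(dx, dy))        # segment length
        feats.append(np.arctan2(dy, dx))      # segment gradient expressed as an angle
    return np.asarray(feats)

rng = np.random.default_rng(4)
landmarks = rng.uniform(0, 255, size=(68, 2))      # placeholder for detected landmarks
au12_like = [(48, 54), (48, 51), (54, 51)]         # hypothetical mouth-corner segments
print(au_graph_features(landmarks, au12_like))
```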
26. Enhanced emotional and motor responses to live versus videotaped dynamic facial expressions. Sci Rep 2020; 10:16825. PMID: 33033355. PMCID: PMC7544832. DOI: 10.1038/s41598-020-73826-2.
Abstract
Facial expression is an integral aspect of the non-verbal communication of affective information. Earlier psychological studies have reported that presenting prerecorded photographs or videos of emotional facial expressions automatically elicits divergent responses, such as emotions and facial mimicry. However, such highly controlled experimental procedures may lack the vividness of real-life social interactions. This study incorporated a live image relay system that delivered models' real-time performance of positive (smiling) and negative (frowning) dynamic facial expressions, or their prerecorded videos, to participants. We measured subjective ratings of valence and arousal as well as facial electromyography (EMG) activity in the zygomaticus major and corrugator supercilii muscles. Subjective ratings showed that live facial expressions were rated as eliciting higher valence and arousal than the corresponding videos in the positive emotion conditions. Facial EMG data showed that, compared with the videos, live facial expressions more effectively elicited facial muscular activity congruent with the models' positive facial expressions. The findings indicate that emotional facial expressions in live social interactions are more evocative of emotional reactions and facial mimicry than earlier experimental data have suggested.
27. Tanfous AB, Drira H, Amor BB. Sparse Coding of Shape Trajectories for Facial Expression and Action Recognition. IEEE Trans Pattern Anal Mach Intell 2020; 42:2594-2607. PMID: 31395537. DOI: 10.1109/tpami.2019.2932979.
Abstract
The detection and tracking of human landmarks in video streams have gained in reliability, partly due to the availability of affordable RGB-D sensors. The analysis of such time-varying geometric data plays an important role in automatic human behavior understanding. However, suitable shape representations, as well as their temporal evolutions, termed trajectories, often lie on nonlinear manifolds. This places an additional constraint (i.e., nonlinearity) on the use of conventional machine learning techniques. As a solution, this paper adapts the well-known sparse coding and dictionary learning approach to study time-varying shapes on the Kendall shape spaces of 2D and 3D landmarks. We illustrate effective coding of 3D skeletal sequences for action recognition and of 2D facial landmark sequences for macro- and micro-expression recognition. To overcome the inherent nonlinearity of the shape spaces, intrinsic and extrinsic solutions were explored. As the main result, shape trajectories give rise to more discriminative time series with suitable computational properties, including sparsity and a vector space structure. Extensive experiments conducted on commonly used datasets demonstrate the competitiveness of the proposed approaches with respect to the state-of-the-art.
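In its plain Euclidean form (ignoring the Kendall shape-space geometry the paper actually works in), sparse coding of landmark trajectories can be sketched with scikit-learn: learn a dictionary over flattened trajectory windows and represent each window by its sparse code. Everything below, including the window length and dictionary size, is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(5)
# Placeholder trajectories: 120 windows of 10 frames x 68 2D landmarks, flattened.
windows = rng.normal(size=(120, 10 * 68 * 2))

dico = DictionaryLearning(n_components=30, alpha=1.0,
                          transform_algorithm="lasso_lars",
                          transform_alpha=1.0, random_state=0)
codes = dico.fit_transform(windows)              # sparse codes, one row per window

print("dictionary shape:", dico.components_.shape)            # (30, 1360)
print("mean non-zeros per code:", (codes != 0).sum(axis=1).mean())
# The sparse codes could then feed any standard classifier for expression/action labels.
```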
28. Cen S, Yu Y, Yan G, Yu M, Yang Q. Sparse Spatiotemporal Descriptor for Micro-Expression Recognition Using Enhanced Local Cube Binary Pattern. Sensors 2020; 20:4437. PMID: 32784460. PMCID: PMC7471998. DOI: 10.3390/s20164437.
Abstract
As spontaneous facial expressions, micro-expressions can reveal the psychological responses of human beings. Micro-expression recognition is therefore widely studied for its potential in clinical diagnosis, psychological research, and security. However, micro-expression recognition is a formidable challenge due to the short duration and low intensity of the facial actions. In this paper, a sparse spatiotemporal descriptor for micro-expression recognition is developed using the Enhanced Local Cube Binary Pattern (Enhanced LCBP). The proposed Enhanced LCBP is composed of three complementary binary features: Spatial Difference Local Cube Binary Patterns (Spatial Difference LCBP), Temporal Direction Local Cube Binary Patterns (Temporal Direction LCBP), and Temporal Gradient Local Cube Binary Patterns (Temporal Gradient LCBP). Together, these provide binary features with spatiotemporal complementarity to capture subtle facial changes. In addition, because redundant information among the division grids weakens the descriptor's ability to distinguish micro-expressions, Multi-Regional Joint Sparse Learning is designed to perform feature selection over the division grids, paying more attention to the critical local regions. Finally, a Multi-kernel Support Vector Machine (SVM) is employed to fuse the selected features for the final classification. The proposed method achieves promising results on four spontaneous micro-expression datasets, and parameter evaluation and confusion-matrix analysis further confirm its sufficiency and effectiveness.
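The Enhanced LCBP builds on the classic local binary pattern idea. The sketch below shows only that generic building block: a uniform-LBP histogram per frame, concatenated over the clip as a crude spatiotemporal descriptor. It does not implement the cube-based Spatial Difference, Temporal Direction, or Temporal Gradient patterns themselves.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram_sequence(frames, P=8, R=1.0, bins=10):
    """Per-frame uniform-LBP histograms, concatenated over the clip.

    frames: (n_frames, H, W) grayscale face crops scaled to [0, 1].
    """
    hists = []
    for frame in frames:
        img = (frame * 255).astype(np.uint8)
        lbp = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=bins, range=(0, P + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

rng = np.random.default_rng(6)
clip = rng.random((8, 48, 48))           # placeholder clip: (frames, H, W)
descriptor = lbp_histogram_sequence(clip)
print(descriptor.shape)                   # (8 * 10,) feature vector for one sample
```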
Affiliation(s)
- Shixin Cen
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Yang Yu
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Gang Yan
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Ming Yu
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Qing Yang
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Department of Electronic and Optical Engineering, Army Engineering University Shijiazhuang Campus, Shijiazhuang 050000, China
29. Duque CA, Alata O, Emonet R, Konik H, Legrand AC. Mean oriented Riesz features for micro expression classification. Pattern Recognit Lett 2020. DOI: 10.1016/j.patrec.2020.05.008.
30. Cognitive Ergonomics Evaluation Assisted by an Intelligent Emotion Recognition Technique. Appl Sci (Basel) 2020. DOI: 10.3390/app10051736.
Abstract
The study of the cognitive effects caused by work activities is vital to ensuring the well-being of workers, and this work presents a strategy for analyzing these effects while workers carry out their activities. Our proposal is based on the implementation of pattern recognition techniques to identify emotions in facial expressions and correlate them with a proposed situation awareness model that measures a worker's levels of comfort and mental stability and proposes corrective actions. We present experimental results that could not be collected through traditional techniques, since we carry out a continuous and uninterrupted assessment of the worker's cognitive situation.
31.
Abstract
Measuring facial traits by quantitative means is a prerequisite to investigating epidemiological, clinical, and forensic questions, and this measurement process has received intense attention in recent years. We divide the process into registration of the face, landmarking, morphometric quantification, and dimension reduction. Face registration is the process of standardizing pose; landmarking annotates positions in the face with an anatomical description or mathematically defined properties (pseudolandmarks); and morphometric quantification computes pre-specified transformations such as distances. Landmarking: We review face registration methods, which are required by some landmarking methods. Although similar, face registration and landmarking are distinct problems: the registration phase can be seen as a pre-processing step and can be combined independently with a landmarking solution. Existing approaches for landmarking differ in their data requirements, modeling approach, and training complexity. In this review, we focus on 3D surface data as captured by commercial surface scanners but also cover methods for 2D facial pictures where the methodology overlaps. We discuss the broad categories of active shape models, template-based approaches, recent deep-learning algorithms, and variations thereof such as hybrid algorithms. The type of algorithm chosen depends on the availability of pre-trained models for the data at hand, the availability of an appropriate landmark set, accuracy characteristics, and training complexity. Quantification: Landmarking of anatomical landmarks is usually augmented by pseudo-landmarks, i.e., indirectly defined landmarks that densely cover the scan surface. Such a rich dataset is not amenable to direct analysis but is reduced in dimensionality for downstream analysis. We review classic dimension reduction techniques used for facial data as well as face-specific measures, such as geometric measurements and manifold learning. Finally, we review symmetry registration and discuss reliability.
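Two of the steps named above, registration and dimension reduction, have standard library counterparts that give a feel for the pipeline. The sketch below aligns synthetic landmark configurations with an ordinary Procrustes superimposition and then reduces the aligned shapes with PCA; the landmark arrays are placeholders, and real pipelines typically use generalized Procrustes analysis over many scans.

```python
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
template = rng.normal(size=(68, 3))                    # placeholder 3D landmark template

# Simulate 50 faces as noisy, shifted, scaled copies of the template, then register.
faces = [1.1 * template + rng.normal(scale=0.05, size=(68, 3)) + rng.normal(size=3)
         for _ in range(50)]
aligned = [procrustes(template, f)[1] for f in faces]  # pose/scale-standardized shapes

X = np.stack([a.ravel() for a in aligned])             # one flattened shape per row
pca = PCA(n_components=5).fit(X)
print("variance explained by 5 shape PCs:", pca.explained_variance_ratio_.round(3))
```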
Affiliation(s)
- Stefan Böhringer
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands