1
Vráblová V, Halamová J. Characteristics of vocal cues, facial action units, and emotions that distinguish high from low self-protection participants engaged in self-protective response to self-criticizing. Front Psychol 2025;15:1363993. PMID: 39881704; PMCID: PMC11774916; DOI: 10.3389/fpsyg.2024.1363993.
Abstract
Introduction Self-protection, also called protective anger or assertive anger, is a key factor in mental health. Thus far, researchers have focused mainly on qualitative analyses of self-protection. Methods We therefore investigated facial action units, emotions, and vocal cues in low and high self-protective groups of participants in order to detect any differences. The total sample consisted of 239 participants. Using the Performance factor of the short version of the Scale for Interpersonal Behavior (lower 15th percentile and upper 15th percentile), we selected 33 high self-protective participants (11 men, 22 women) and 25 low self-protective participants (eight men, 17 women). The self-protective dialogue was recorded using the two-chair technique script from Emotion-Focused Therapy. The subsequent analysis was performed using iMotions software (for action units and emotions) and Praat software (for the vocal cues of pitch and intensity). We used multilevel models in R for the statistical analysis. Results Compared to low self-protective participants, high self-protective participants exhibited more contempt and fear and less surprise and joy. High self-protective participants expressed the following action units less often: Mouth Open (AU25), Smile (AU12), Brow Raise (AU2), Cheek Raise (AU6), and Inner Brow Raise (AU1); and the following more often: Brow Furrow (AU4), Chin Raise (AU17), Smirk (AU12), Upper Lip Raise (AU10), and Nose Wrinkle (AU9). We found no differences between the two groups in the use of vocal cues. Discussion These findings bring us closer to understanding and diagnosing self-protection.
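The frame-level group comparison this abstract describes can be sketched in a few lines: given per-frame action-unit codings (as produced by software such as iMotions), tally how often each AU is active in each group. The frame data, group labels, and the simple proportion summary below are illustrative assumptions only; the study itself fit multilevel models in R.

```python
# Hypothetical sketch: per-group activation rates of facial action units (AUs).
# Frame labels and group membership are invented for illustration.
from collections import defaultdict

def au_activation_rates(frames):
    """frames: list of (group, set_of_active_AUs), one tuple per video frame."""
    counts = defaultdict(lambda: defaultdict(int))   # group -> AU -> frames active
    totals = defaultdict(int)                        # group -> total frames
    for group, active_aus in frames:
        totals[group] += 1
        for au in active_aus:
            counts[group][au] += 1
    return {g: {au: n / totals[g] for au, n in aus.items()}
            for g, aus in counts.items()}

frames = [
    ("high", {"AU4", "AU17"}), ("high", {"AU4"}), ("high", {"AU9", "AU10"}),
    ("low",  {"AU12", "AU25"}), ("low", {"AU12"}), ("low", {"AU1", "AU2"}),
]
rates = au_activation_rates(frames)
print(rates["high"]["AU4"])  # 2 of the 3 high-group frames show AU4
```

A real analysis would feed such per-frame indicators into a multilevel model to account for frames being nested within participants.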
2
Keinert M, Schindler-Gmelch L, Rupp LH, Sadeghi M, Capito K, Hager M, Rahimi F, Richer R, Egger B, Eskofier BM, Berking M. Facing depression: evaluating the efficacy of the EmpkinS-EKSpression reappraisal training augmented with facial expressions - protocol of a randomized controlled trial. BMC Psychiatry 2024;24:896. PMID: 39668374; PMCID: PMC11636037; DOI: 10.1186/s12888-024-06361-3.
Abstract
BACKGROUND Dysfunctional depressogenic cognitions are considered a key factor in the etiology and maintenance of depression. In cognitive behavioral therapy (CBT), the current gold-standard psychotherapeutic treatment for depression, cognitive restructuring techniques are employed to address dysfunctional cognitions. However, high drop-out and non-response rates suggest a need to boost the efficacy of CBT for depression. This might be achieved by enhancing the role of emotional and kinesthetic (i.e., body movement perception) features of interventions. Therefore, we aim to evaluate the efficacy of a cognitive restructuring task augmented with the performance of anti-depressive facial expressions in individuals with and without depression. Further, we aim to investigate to what extent kinesthetic markers are intrinsically associated with, and hence allow for the detection of, depression. METHODS In a four-arm, parallel, single-blind, randomized controlled trial (RCT), we will randomize 128 individuals with depression and 128 matched controls without depression to one of four study conditions: (1) a cognitive reappraisal training (CR); (2) CR enhanced with instructions to display anti-depressive facial expressions (CR + AFE); (3) facial muscle training focusing on anti-depressive facial expressions (AFE); and (4) a sham control condition. One week after diagnostic assessment, a single 90-120-minute intervention will be administered, with a follow-up two weeks later. Depressed mood will serve as the primary outcome.
Secondary outcomes will include current positive mood, symptoms of depression, current suicidality, dysfunctional attitudes, automatic thoughts, emotional state, kinesthesia (i.e., facial expression, facial muscle activity, body posture), psychophysiological measures (e.g., heart rate (variability), respiration rate (variability), verbal acoustics), as well as feasibility measures (i.e., treatment integrity, compliance, usability, acceptability). Outcomes will be analyzed with multiple methods, such as hierarchical and conventional linear models and machine learning. DISCUSSION If shown to be feasible and effective, the inclusion of kinesthesia into both psychotherapeutic diagnostics and interventions may be a pivotal step towards the more prompt, efficient, and targeted treatment of individuals with depression. TRIAL REGISTRATION The study was preregistered in the Open Science Framework on August 12, 2022 ( https://osf.io/mswfg/ ) and retrospectively registered in the German Clinical Trials Register on November 25, 2024. CLINICAL TRIAL NUMBER DRKS00035577.
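One common way to implement the four-arm allocation this protocol describes is permuted-block randomization within depression status. The protocol excerpt does not specify the randomization algorithm, so the block size, seeds, and participant IDs below are assumptions for illustration.

```python
# Illustrative sketch: permuted-block randomization to the trial's four arms
# (CR, CR + AFE, AFE, sham), stratified by depression status. Block size,
# seeds, and IDs are invented; the protocol does not specify the algorithm.
import random
from collections import Counter

ARMS = ["CR", "CR+AFE", "AFE", "sham"]

def permuted_block_assign(participant_ids, seed=0):
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(participant_ids), len(ARMS)):
        block = ARMS[:]          # one copy of each arm per block
        rng.shuffle(block)
        for pid, arm in zip(participant_ids[start:start + len(ARMS)], block):
            assignments[pid] = arm
    return assignments

depressed = [f"D{i:03d}" for i in range(128)]
controls = [f"C{i:03d}" for i in range(128)]
alloc = {**permuted_block_assign(depressed, seed=1),
         **permuted_block_assign(controls, seed=2)}
print(Counter(alloc[p] for p in depressed))  # each arm appears 32 times
```

Stratifying by depression status keeps the 4 x 32 balance within each of the two 128-participant groups, as a matched-controls design requires.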
Affiliation(s)
- Marie Keinert
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Nägelsbachstraße 25a, 91052 Erlangen, Germany
- Lena Schindler-Gmelch
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Lydia Helene Rupp
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Misha Sadeghi
  - Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Klara Capito
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Malin Hager
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Farnaz Rahimi
  - Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Robert Richer
  - Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Bernhard Egger
  - Chair of Visual Computing, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Bjoern M Eskofier
  - Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
  - Translational Digital Health Group, Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, 85764 Neuherberg, Germany
- Matthias Berking
  - Department of Clinical Psychology and Psychotherapy, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
3
Yan K, Miao S, Jin X, Mu Y, Zheng H, Tian Y, Wang P, Yu Q, Hu D. TCEDN: A Lightweight Time-Context Enhanced Depression Detection Network. Life (Basel) 2024;14:1313. PMID: 39459613; PMCID: PMC11509182; DOI: 10.3390/life14101313.
Abstract
The automatic video recognition of depression is becoming increasingly important in clinical applications. However, traditional depression recognition models still face challenges in practical applications, such as high computational costs, the poor application effectiveness of facial movement features, and spatial feature degradation due to model stitching. To overcome these challenges, this work proposes a lightweight Time-Context Enhanced Depression Detection Network (TCEDN). We first use attention-weighted blocks to aggregate and enhance video frame-level features, easing the model's computational workload. Next, by integrating the temporal and spatial changes of video raw features and facial movement features in a self-learning weight manner, we enhance the precision of depression detection. Finally, a fusion network of 3-Dimensional Convolutional Neural Network (3D-CNN) and Convolutional Long Short-Term Memory Network (ConvLSTM) is constructed to minimize spatial feature loss by avoiding feature flattening and to achieve depression score prediction. Tests on the AVEC2013 and AVEC2014 datasets reveal that our approach yields results on par with state-of-the-art techniques for detecting depression using video analysis. Additionally, our method has significantly lower computational complexity than mainstream methods.
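The attention-weighted frame aggregation this abstract describes can be illustrated with a minimal sketch: score each frame-level feature vector, softmax the scores, and pool the frames into one clip-level vector. The dot-product scoring query below is a stand-in assumption; TCEDN's actual attention blocks are not reproduced here.

```python
# Minimal sketch of attention-weighted frame pooling. The query vector is a
# hypothetical stand-in for a learned parameter; TCEDN itself is not shown.
import math

def attention_pool(frames, query):
    scores = [sum(f * q for f, q in zip(frame, query)) for frame in frames]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(frames[0])
    return [sum(w * frame[d] for w, frame in zip(weights, frames))
            for d in range(dim)]

frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = attention_pool(frames, query=[10.0, 10.0])
print(pooled)  # dominated by the [1.0, 1.0] frame, which scores highest
```

The point of such pooling is that a clip of any length collapses to one fixed-size vector before the heavier 3D-CNN/ConvLSTM stages, which is where the computational savings come from.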
Affiliation(s)
- Keshan Yan
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Shengfa Miao
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Xin Jin
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Yongkang Mu
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Hongfeng Zheng
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Yuling Tian
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Puming Wang
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Qian Yu
  - School of Software, Yunnan University, Kunming 650000, China
  - Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, China
- Da Hu
  - Fengtu Technology (Shenzhen) Co., Ltd., Shenzhen 518057, China
4
Hu X, Gao J, Romano D. Exploring the effects of multiple online interaction on emotions of L2 learners in synchronous online classes. Heliyon 2024;10:e37619. PMID: 39309791; PMCID: PMC11416288; DOI: 10.1016/j.heliyon.2024.e37619.
Abstract
The decline in both the quantity and quality of interaction has emerged as a notable challenge in online learning. However, the definition of interaction quality remains unclear. This study clarifies it as a decrease in the breadth of interaction, i.e., interaction that reaches only a smaller number of learners. To address this, a synchronous interaction modality termed Multiple Online Interaction (MOI), based on Zoom's interactive tools, was introduced. In a quasi-experiment involving 58 Chinese L2 learners (30 beginner and 28 intermediate students), emotions were assessed using the Brief Mood Introspection Scale (BMIS), while real-time emotional dynamics were revealed through the analysis of 5129 facial expression images captured during a 35-min synchronous class. MOI participants reported feeling more Lively and Happy but also more Nervous and less Calm. These emotional dynamics, tracked through expression recognition technology, show that MOI's impact is primarily observed during the first, Grammar & Practice, section of the teaching. The empirical findings of this study provide practical insights for educators aiming to conduct effective online teaching.
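Tracking emotional dynamics across class sections, as done here with the 5129 facial expression images, can be sketched as a per-section tally of frame-level emotion labels. The section boundaries, timestamps, and labels below are invented for illustration; only the "Grammar & Practice" section name comes from the abstract.

```python
# Hypothetical sketch: summarize frame-level expression labels by class
# section of a 35-minute lesson. Boundaries and labels are invented.
from collections import Counter

SECTIONS = [("Grammar & Practice", 0, 15), ("Reading", 15, 25), ("Review", 25, 35)]

def emotions_by_section(labeled_frames):
    """labeled_frames: list of (minute, emotion_label) pairs."""
    out = {name: Counter() for name, _, _ in SECTIONS}
    for minute, emotion in labeled_frames:
        for name, start, end in SECTIONS:
            if start <= minute < end:
                out[name][emotion] += 1
    return out

frames = [(2, "happy"), (5, "happy"), (9, "nervous"), (18, "calm"), (30, "happy")]
summary = emotions_by_section(frames)
print(summary["Grammar & Practice"]["happy"])  # 2
```

Comparing such per-section tallies between MOI and non-MOI classes is the kind of contrast that localizes an effect to one part of the lesson.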
Affiliation(s)
- Xuhui Hu
  - School of International Education, Shantou University, China
  - School of Chinese as a Second Language, Peking University, China
- Jian Gao
  - School of International Education, Shantou University, China
- Daniela Romano
  - Institute of Artificial Intelligence, De Montfort University, United Kingdom
  - Department of Information Studies, University College London, United Kingdom
5
Slonim DA, Yehezkel I, Paz A, Bar-Kalifa E, Wolff M, Dar A, Gilboa-Schechtman E. Facing Change: Using Automated Facial Expression Analysis to Examine Emotional Flexibility in the Treatment of Depression. Adm Policy Ment Health 2024;51:501-508. PMID: 37880472; DOI: 10.1007/s10488-023-01310-w.
Abstract
OBJECTIVE Depression involves deficits in emotional flexibility. To date, the varied and dynamic nature of emotional processes during therapy has mostly been measured at discrete time intervals using clients' subjective reports. Because emotions tend to fluctuate and change from moment to moment, the understanding of emotional processes in the treatment of depression depends to a great extent on the existence of sensitive, continuous, and objectively codified measures of emotional expression. In this observational study, we used computerized measures to analyze high-resolution time-series facial expression data as well as self-reports to examine the association between emotional flexibility and depressive symptoms at both the client and session levels. METHOD Video recordings from 283 therapy sessions of 58 clients who underwent 16 sessions of manualized psychodynamic psychotherapy for depression were analyzed. Data were collected as part of routine practice in a university clinic that provides treatments to the community. Emotional flexibility was measured in each session using an automated facial expression emotion recognition system. The clients' depression level was assessed at the beginning of each session using the Beck Depression Inventory-II (Beck et al., 1996). RESULTS Higher emotional flexibility was associated with lower depressive symptoms at both the treatment and session levels. CONCLUSION These findings highlight the centrality of emotional flexibility as both a trait-like and a state-like characteristic of depression. The results also demonstrate the usefulness of computerized measures to capture key emotional processes in the treatment of depression at scale and with high specificity.
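The abstract does not define the emotional flexibility index itself, so the sketch below uses one plausible proxy as an assumption: the rate at which the dominant facial emotion changes between consecutive time windows of a session. Higher values mean more frequent shifts in expressed emotion.

```python
# Hypothetical flexibility proxy: fraction of consecutive windows in which
# the dominant expressed emotion changes. The labels are invented; the
# paper's actual index may be defined differently.
def flexibility_index(dominant_emotions):
    """Transition rate of the dominant emotion across time windows."""
    if len(dominant_emotions) < 2:
        return 0.0
    changes = sum(a != b for a, b in zip(dominant_emotions, dominant_emotions[1:]))
    return changes / (len(dominant_emotions) - 1)

session = ["neutral", "sad", "sad", "interested", "happy", "happy"]
print(flexibility_index(session))  # 3 changes over 5 steps -> 0.6
```

Computing such an index per session yields the session-level series that can then be related to session-by-session BDI-II scores.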
Affiliation(s)
- Ido Yehezkel
  - Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Adar Paz
  - Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Eran Bar-Kalifa
  - Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Maya Wolff
  - Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Avinoam Dar
  - Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
6
Han MM, Li XY, Yi XY, Zheng YS, Xia WL, Liu YF, Wang QX. Automatic recognition of depression based on audio and video: A review. World J Psychiatry 2024;14:225-233. PMID: 38464777; PMCID: PMC10921287; DOI: 10.5498/wjp.v14.i2.225.
Abstract
Depression is a common mental health disorder. In current depression detection practice, specialized physicians engage in conversations and physiological examinations, aided by standardized scales, as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and deemed crucial evaluation criteria for depression, have not been effectively utilized. Specialized physicians usually require extensive training and experience to capture changes in these features. Advancements in deep learning technology have provided technical support for capturing non-biological markers. Several researchers have proposed automatic depression estimation (ADE) systems based on sounds and videos to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.
Affiliation(s)
- Meng-Meng Han
  - Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
  - Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
- Xing-Yun Li
  - Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
  - Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
  - Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan 250353, Shandong Province, China
- Xin-Yu Yi
  - Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
  - Shandong Engineering Research Center of Big Data Applied Technology, Faculty of Computer Science and Technology, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, Shandong Province, China
  - Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan 250353, Shandong Province, China
- Yun-Shao Zheng
  - Department of Ward Two, Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
- Wei-Li Xia
  - Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
- Ya-Fei Liu
  - Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
- Qing-Xiang Wang
  - Shandong Mental Health Center, Shandong University, Jinan 250014, Shandong Province, China
7
Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affect Sci 2023;4:550-562. PMID: 37744976; PMCID: PMC10514024; DOI: 10.1007/s42761-023-00215-z.
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Affiliation(s)
- Chujun Lin
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Landry S. Bulls
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Lindsey J. Tepfer
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Amisha D. Vyas
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mark A. Thornton
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
8
Demchenko I, Desai N, Iwasa SN, Gholamali Nezhad F, Zariffa J, Kennedy SH, Rule NO, Cohn JF, Popovic MR, Mulsant BH, Bhat V. Manipulating facial musculature with functional electrical stimulation as an intervention for major depressive disorder: a focused search of literature for a proposal. J Neuroeng Rehabil 2023;20:64. PMID: 37193985; DOI: 10.1186/s12984-023-01187-8.
Abstract
BACKGROUND Major Depressive Disorder (MDD) is associated with interoceptive deficits expressed throughout the body, particularly the facial musculature. According to the facial feedback hypothesis, afferent feedback from the facial muscles suffices to alter the emotional experience. Thus, manipulating the facial muscles could provide a new "mind-body" intervention for MDD. This article provides a conceptual overview of functional electrical stimulation (FES), a novel neuromodulation-based treatment modality that can be potentially used in the treatment of disorders of disrupted brain connectivity, such as MDD. METHODS A focused literature search was performed for clinical studies of FES as a modulatory treatment for mood symptoms. The literature is reviewed in a narrative format, integrating theories of emotion, facial expression, and MDD. RESULTS A rich body of literature on FES supports the notion that peripheral muscle manipulation in patients with stroke or spinal cord injury may enhance central neuroplasticity, restoring lost sensorimotor function. These neuroplastic effects suggest that FES may be a promising innovative intervention for psychiatric disorders of disrupted brain connectivity, such as MDD. Recent pilot data on repetitive FES applied to the facial muscles in healthy participants and patients with MDD show early promise, suggesting that FES may attenuate the negative interoceptive bias associated with MDD by enhancing positive facial feedback. Neurobiologically, the amygdala and nodes of the emotion-to-motor transformation loop may serve as potential neural targets for facial FES in MDD, as they integrate proprioceptive and interoceptive inputs from muscles of facial expression and fine-tune their motor output in line with socio-emotional context. 
CONCLUSIONS Manipulating facial muscles may represent a mechanistically novel treatment strategy for MDD and other disorders of disrupted brain connectivity that is worthy of investigation in phase II/III trials.
Affiliation(s)
- Ilya Demchenko
  - Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
  - Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
- Naaz Desai
  - Krembil Research Institute - University Health Network, Toronto, ON, M5T 0S8, Canada
  - KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
- Stephanie N Iwasa
  - KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
  - CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
- Fatemeh Gholamali Nezhad
  - Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
- José Zariffa
  - KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
  - CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
  - Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5G 1V7, Canada
  - Institute of Biomedical Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3E2, Canada
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3G8, Canada
- Sidney H Kennedy
  - Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
  - Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
  - Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada
- Nicholas O Rule
  - Department of Psychology, Faculty of Arts & Science, University of Toronto, Toronto, ON, M5S 3G3, Canada
- Jeffrey F Cohn
  - Department of Psychology, Kenneth P. Dietrich School of Arts & Sciences, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Milos R Popovic
  - Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
  - KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
  - CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
  - Institute of Biomedical Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3E2, Canada
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3G8, Canada
- Benoit H Mulsant
  - Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada
  - Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, ON, M6J 1H4, Canada
- Venkat Bhat
  - Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
  - Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
  - Krembil Research Institute - University Health Network, Toronto, ON, M5T 0S8, Canada
  - KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
  - CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
  - Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada
9
Ettore E, Müller P, Hinze J, Benoit M, Giordana B, Postin D, Lecomte A, Lindsay H, Robert P, König A. Digital Phenotyping for Differential Diagnosis of Major Depressive Episode: Narrative Review. JMIR Ment Health 2023;10:e37225. PMID: 36689265; PMCID: PMC9903183; DOI: 10.2196/37225.
Abstract
BACKGROUND Major depressive episode (MDE) is a common clinical syndrome. It can be found in different pathologies such as major depressive disorder (MDD), bipolar disorder (BD), and posttraumatic stress disorder (PTSD), or can occur in the context of psychological trauma. However, only 1 syndrome is described in international classifications (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition [DSM-5]/International Classification of Diseases 11th Revision [ICD-11]), which do not take into account the underlying pathology at the origin of the MDE. Clinical interviews are currently the best source of information for obtaining the etiological diagnosis of MDE. Nevertheless, they do not allow early diagnosis, and the clinical information they extract lacks objective measures. To remedy this, digital tools correlated with clinical symptomatology could be useful. OBJECTIVE We aimed to review the current application of digital tools for MDE diagnosis while highlighting shortcomings for further research. In addition, our work focused on digital devices that are easy to use during clinical interviews and on mental health issues where depression is common. METHODS We conducted a narrative review of the use of digital tools during clinical interviews for MDE by searching papers published in PubMed/MEDLINE, Web of Science, and Google Scholar databases since February 2010. The search was conducted from June to September 2021. Potentially relevant papers were then compared against a checklist for relevance and reviewed independently for inclusion, with focus on 4 allocated topics of (1) automated voice analysis, behavior analysis by (2) video and physiological measures, (3) heart rate variability (HRV), and (4) electrodermal activity (EDA). For this purpose, we were interested in 4 frequently found clinical conditions in which MDE can occur: (1) MDD, (2) BD, (3) PTSD, and (4) psychological trauma.
RESULTS A total of 74 relevant papers on the subject were qualitatively analyzed and the information was synthesized. Thus, a digital phenotype of MDE seems to emerge, consisting of modifications in speech features (namely, temporal, prosodic, spectral, source, and formants) and in speech content, modifications in nonverbal behavior (head, hand, body and eyes movement, facial expressivity, and gaze), and a decrease in physiological measurements (HRV and EDA). We found not only similarities but also differences when MDE occurs in MDD, BD, PTSD, or psychological trauma. However, comparative studies were rare in BD or PTSD conditions, which does not allow us to identify clear and distinct digital phenotypes. CONCLUSIONS Our search identified markers from several modalities that hold promise for helping with a more objective diagnosis of MDE. To validate their potential, further longitudinal and prospective studies are needed.
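Among the physiological measures this review covers, HRV has simple, standard time-domain summaries. Below is a minimal sketch of two of them, SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences), with invented interval values; real pipelines also need artifact and ectopic-beat handling before these statistics are meaningful.

```python
# Sketch of two standard time-domain HRV measures from RR intervals (in ms).
# The interval values are invented for illustration.
import math

def sdnn(rr):
    """Sample standard deviation of the RR intervals."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / (len(rr) - 1))

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 830, 818, 795]
print(round(sdnn(rr), 2), round(rmssd(rr), 2))
```

Lower values of both measures are the direction the review associates with MDE, which is why HRV appears as part of the candidate digital phenotype.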
Affiliation(s)
- Eric Ettore
- Department of Psychiatry and Memory Clinic, University Hospital of Nice, Nice, France
- Philipp Müller
- Research Department Cognitive Assistants, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Saarbrücken, Germany
- Jonas Hinze
- Department of Psychiatry and Psychotherapy, Saarland University Medical Center, Hombourg, Germany
- Michel Benoit
- Department of Psychiatry, Hopital Pasteur, University Hospital of Nice, Nice, France
- Bruno Giordana
- Department of Psychiatry, Hopital Pasteur, University Hospital of Nice, Nice, France
- Danilo Postin
- Department of Psychiatry, School of Medicine and Health Sciences, Carl von Ossietzky University of Oldenburg, Bad Zwischenahn, Germany
- Amandine Lecomte
- Research Department Sémagramme Team, Institut national de recherche en informatique et en automatique, Nancy, France
- Hali Lindsay
- Research Department Cognitive Assistants, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Saarbrücken, Germany
- Philippe Robert
- Research Department, Cognition-Behaviour-Technology Lab, University Côte d'Azur, Nice, France
- Alexandra König
- Research Department Stars Team, Institut national de recherche en informatique et en automatique, Sophia Antipolis - Valbonne, France
10
Bilalpur M, Hinduja S, Cariola LA, Sheeber LB, Allen N, Jeni LA, Morency LP, Cohn JF. Multimodal Feature Selection for Detecting Mothers' Depression in Dyadic Interactions with their Adolescent Offspring. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 2023; 2023:1-8. [PMID: 39296877 PMCID: PMC11408746 DOI: 10.1109/fg57933.2023.10042796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/21/2024]
Abstract
Depression is the most common psychological disorder, a leading cause of disability worldwide, and a major contributor to the inter-generational transmission of psychopathology within families. To contribute to our understanding of depression within families and to inform modality selection and feature reduction, it is critical to identify interpretable features in developmentally appropriate contexts. Mothers with and without depression were studied. Depression was defined as a history of treatment for depression together with elevations in current or recent symptoms. We explored two multimodal feature selection strategies in dyadic interaction tasks of mothers with their adolescent children for depression detection. Modalities included face and head dynamics, facial action units, speech-related behavior, and verbal features. The initial feature space was vast and inter-correlated (collinear). To reduce dimensionality and gain insight into the relative contribution of each modality and feature, we explored feature selection strategies using the Variance Inflation Factor (VIF) and Shapley values. On average, collinearity correction through VIF reduced the feature set roughly fourfold across unimodal and multimodal features. Collinearity correction was also found to be an optimal intermediate step prior to Shapley analysis, and Shapley feature selection following VIF yielded the best performance: the top 15 features obtained through Shapley achieved 78% accuracy. The most informative features came from all four modalities sampled, which supports the importance of multimodal feature selection.
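The VIF-based collinearity correction described in this abstract can be sketched generically. The following is an illustrative implementation of the standard statistic (VIF_j = 1 / (1 − R²_j), from regressing feature j on the remaining features), not the authors' code; the threshold of 5 is a common rule of thumb, not a value taken from the paper.

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor for each column of X (n_samples x n_features).
    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot
        out.append(1.0 / (1.0 - r2) if r2 < 1.0 else np.inf)
    return np.array(out)

def drop_collinear(X, threshold=5.0):
    """Iteratively drop the column with the highest VIF until all
    remaining columns fall below the threshold; returns kept indices."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        v = vif(X[:, keep])
        worst = int(np.argmax(v))
        if v[worst] <= threshold:
            break
        keep.pop(worst)
    return keep
```

Iterating until every remaining VIF is below the threshold is what makes this a useful pre-filter before Shapley analysis: the surviving features are approximately non-redundant, so attribution is spread less arbitrarily.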
Affiliation(s)
- Maneesh Bilalpur
- Intelligent Systems Program, University of Pittsburgh, Pittsburgh, USA
- Saurabh Hinduja
- Department of Psychology, University of Pittsburgh, Pittsburgh, USA
- Laura A Cariola
- Clinical and Health Psychology, University of Edinburgh, Edinburgh, UK
- Nick Allen
- Department of Psychology, University of Oregon, USA
- Jeffrey F Cohn
- Intelligent Systems Program, University of Pittsburgh, Pittsburgh, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, USA
- Robotics Institute, Carnegie Mellon University, USA
11
Liu D, Liu B, Lin T, Liu G, Yang G, Qi D, Qiu Y, Lu Y, Yuan Q, Shuai SC, Li X, Liu O, Tang X, Shuai J, Cao Y, Lin H. Measuring depression severity based on facial expression and body movement using deep convolutional neural network. Front Psychiatry 2022; 13:1017064. [PMID: 36620657 PMCID: PMC9810804 DOI: 10.3389/fpsyt.2022.1017064] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022] Open
Abstract
Introduction Real-time evaluation of the severity of depressive symptoms is of great significance for the diagnosis and treatment of patients with major depressive disorder (MDD). In clinical practice, evaluation approaches are mainly based on psychological scales and doctor-patient interviews, which are time-consuming and labor-intensive, and the accuracy of the results depends largely on the subjective judgment of the clinician. With the development of artificial intelligence (AI) technology, machine learning methods are increasingly used to diagnose depression from appearance characteristics. Most previous research focused on single-modal data; in recent years, however, many studies have shown that multi-modal data yield better prediction performance than single-modal data. This study aimed to develop a measure of depression severity from expression and action features and to assess its validity among patients with MDD. Methods We proposed a multi-modal deep convolutional neural network (CNN) to evaluate the severity of depressive symptoms in real time, based on the detection of patients' facial expressions and body movements in videos captured by ordinary cameras. We established a behavioral depression degree (BDD) metric, which combines expression entropy and action entropy to measure the depression severity of MDD patients. Results We found that the information extracted from the different modalities, when integrated in appropriate proportions, can significantly improve the accuracy of the evaluation, which has not been reported in previous studies. The method achieved over 74% Pearson similarity between the BDD and the self-rating depression scale (SDS), self-rating anxiety scale (SAS), and Hamilton depression scale (HAMD). In addition, we tracked and evaluated the changes in BDD in patients at different stages of a course of treatment, and the results were in agreement with the scale-based evaluations.
Discussion The BDD can effectively measure the current state of patients' depression and its changing trend from their expression and action features. Our model may provide an automatic auxiliary tool for the diagnosis and treatment of MDD.
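The abstract describes the BDD as a combination of expression entropy and action entropy but does not give the combination rule. A minimal sketch under loud assumptions: Shannon entropy over per-frame categorical labels (expression class, action class), combined as a simple weighted sum with an illustrative 0.5 weight — none of these specifics come from the paper.

```python
from collections import Counter
from math import log2

def shannon_entropy(labels):
    """Shannon entropy (bits) of a sequence of categorical labels,
    e.g. per-frame facial-expression classes."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def behavioral_depression_degree(expr_labels, action_labels, w_expr=0.5):
    """Toy BDD: weighted sum of expression entropy and action entropy.
    The 0.5 weighting is illustrative, not the proportion used in the paper."""
    h_expr = shannon_entropy(expr_labels)
    h_action = shannon_entropy(action_labels)
    return w_expr * h_expr + (1 - w_expr) * h_action
```

The entropy framing captures the intuition that reduced behavioral variety (flatter expression and movement distributions) compresses toward zero entropy, while varied behavior pushes the score up.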
Affiliation(s)
- Dongdong Liu
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Bowen Liu
- Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Department of Psychiatry, Baoan Mental Health Center, Shenzhen Baoan Center for Chronic Disease Control, Shenzhen, China
- Tao Lin
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
- Guangya Liu
- Integrated Chinese and Western Therapy of Depression Ward, Hunan Brain Hospital, Changsha, China
- Guoyu Yang
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Dezhen Qi
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Ye Qiu
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
- Yuer Lu
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
- Qinmei Yuan
- Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Stella C. Shuai
- Department of Biological Sciences, Northwestern University, Evanston, IL, United States
- Xiang Li
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Ou Liu
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
- Xiangdong Tang
- Sleep Medicine Center, Mental Health Center, Department of Respiratory and Critical Care Medicine, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Jianwei Shuai
- Department of Physics, Fujian Provincial Key Laboratory for Soft Functional Materials Research, Xiamen University, Xiamen, China
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
- State Key Laboratory of Cellular Stress Biology, Innovation Center for Cell Signaling Network, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Yuping Cao
- Department of Psychiatry, National Clinical Research Center for Mental Disorders, The Second Xiangya Hospital of Central South University, Changsha, China
- Hai Lin
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou Key Laboratory of Biophysics, Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou, Zhejiang, China
12
Hartmann TJ, Hartmann JBJ, Friebe-Hoffmann U, Lato C, Janni W, Lato K. Novel Method for Three-Dimensional Facial Expression Recognition Using Self-Normalizing Neural Networks and Mobile Devices. Geburtshilfe Frauenheilkd 2022; 82:955-969. [PMID: 36110895 PMCID: PMC9470291 DOI: 10.1055/a-1866-2943] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 05/26/2022] [Indexed: 11/25/2022] Open
Abstract
Introduction To date, most approaches to facial expression recognition rely on two-dimensional images; advanced approaches using three-dimensional data exist, but they demand stationary apparatuses and thus lack portability and scalability of deployment. As human emotions, intent, and even diseases may condense in distinct facial expressions or changes therein, there is a clear need for a portable yet capable solution. Because of the superior informative value of three-dimensional data on facial morphology, and because certain syndromes find expression in specific facial dysmorphisms, such a solution should allow portable acquisition of true three-dimensional facial scans in real time. In this study we present a novel solution for the three-dimensional acquisition of facial geometry data and the recognition of facial expressions from it. The technology presented here requires only a smartphone or tablet with an integrated TrueDepth camera and enables real-time acquisition of the geometry and its categorization into distinct facial expressions. Material and Methods Our approach consisted of two parts. First, training data were acquired by asking a collective of 226 medical students to adopt defined facial expressions while their current facial morphology was captured by our specially developed app running on iPads placed in front of the students. The facial expressions to be shown by the participants were "disappointed", "stressed", "happy", "sad", and "surprised". Second, the data were used to train a self-normalizing neural network. A set of all factors describing the facial expression at one point in time is referred to as a "snapshot". Results In total, over half a million snapshots were recorded in the study. The network ultimately achieved an overall accuracy of 80.54% after 400 epochs of training; on the test set, an overall accuracy of 81.15% was determined.
Recall values differed by snapshot category and ranged from 74.79% for "stressed" to 87.61% for "happy". Precision showed similar results, with "sad" achieving the lowest value at 77.48% and "surprised" the highest at 86.87%. Conclusions The present work demonstrates that respectable results can be achieved even with data sets that pose some challenges. Through various measures, already incorporated into an optimized version of our app, the training results are expected to improve significantly and become more precise in the future. A follow-up study with the new version of our app, which encompasses the suggested alterations and adaptations, is currently being conducted. We aim to build a large and open database of facial scans, not only for facial expression recognition but also to perform disease recognition and to monitor treatment progress.
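The classifier in this study is a self-normalizing neural network, whose defining ingredient is the SELU activation. As a point of reference, here is that activation with the standard constants from the original SELU formulation (the study's full architecture is not reproduced here):

```python
from math import exp

# Standard SELU constants (Klambauer et al., "Self-Normalizing Neural Networks")
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    """Scaled Exponential Linear Unit. With these constants, activations
    propagated through SELU layers are pushed toward zero mean and unit
    variance, which is what makes such networks 'self-normalizing'."""
    return SELU_LAMBDA * x if x > 0 else SELU_LAMBDA * SELU_ALPHA * (exp(x) - 1.0)
```

For large negative inputs the output saturates at −λα ≈ −1.758, bounding the activations from below; for positive inputs it is linear with slope λ.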
Affiliation(s)
- Tim Johannes Hartmann
- Universitäts-Hautklinik Tübingen, Tübingen, Germany
- Universitätsfrauenklinik Ulm, Ulm, Germany
13
Dobreva D, Gkantidis N, Halazonetis D, Verna C, Kanavakis G. Smile Reproducibility and Its Relationship to Self-Perceived Smile Attractiveness. BIOLOGY 2022; 11:biology11050719. [PMID: 35625447 PMCID: PMC9138875 DOI: 10.3390/biology11050719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 05/04/2022] [Accepted: 05/05/2022] [Indexed: 11/16/2022]
Abstract
The reproducibility of facial expressions has been explored previously; however, there is no detailed information regarding the reproducibility of the lip morphology forming a social smile. In this study, we recruited 93 young adults, aged 21-35 years, who agreed to participate in two consecutive study visits four weeks apart. On each visit, they were asked to perform a social smile, which was captured in a 3D facial image acquired using the 3dMD camera system. Self-perceived smile attractiveness was also assessed using a visual analog scale (VAS). Lip morphology, including smile shape, was described using 62 landmarks and semi-landmarks. A Procrustes superimposition of each set of smiling configurations (first and second visit) was performed and the Euclidean distance between each landmark set was calculated. A linear regression model was used to test the association between smile consistency and self-perceived smile attractiveness. The results show that the average landmark distance between sessions did not exceed 1.5 mm, indicating high repeatability, and that females presented approximately 15% higher smile consistency than males (p < 0.05). There was no statistically significant association between smile consistency and self-perceived smile attractiveness (η2 = 0.015; p = 0.252) when controlling for the effect of sex and age.
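The core computation here (Procrustes superimposition of two landmark configurations, then the mean per-landmark Euclidean distance) can be sketched as follows. This sketch aligns by translation and rotation only, so distances keep their physical units (mm, as reported above); whether the study's superimposition also included uniform scaling is not stated in the abstract.

```python
import numpy as np

def procrustes_distance(A, B):
    """Mean per-landmark Euclidean distance after superimposing B onto A
    by translation and rotation (orthogonal Procrustes). A, B: (k, d)
    landmark arrays. Uniform scaling is deliberately omitted so that the
    returned distance keeps the landmarks' original units."""
    A0 = A - A.mean(axis=0)          # remove translation
    B0 = B - B.mean(axis=0)
    # Optimal rotation aligning B0 to A0, via SVD of the cross-covariance
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    return np.linalg.norm(A0 - B0 @ R, axis=1).mean()
```

With 62 landmarks per smile, `procrustes_distance(visit1, visit2)` would yield one consistency value per participant, which is then the predictor in the regression on attractiveness ratings.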
Affiliation(s)
- Denitsa Dobreva
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Nikolaos Gkantidis
- Department of Orthodontics and Dentofacial Orthopedics, University of Bern, 3001 Bern, Switzerland
- Demetrios Halazonetis
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, GR-11527 Athens, Greece
- Carlalberta Verna
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Georgios Kanavakis
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Department of Orthodontics, Tufts University School of Dental Medicine, Boston, MA 02111, USA
14
Thati RP, Dhadwal AS, Kumar P, P S. A novel multi-modal depression detection approach based on mobile crowd sensing and task-based mechanisms. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:4787-4820. [PMID: 35431608 PMCID: PMC9000000 DOI: 10.1007/s11042-022-12315-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 09/20/2021] [Accepted: 01/17/2022] [Indexed: 05/05/2023]
Abstract
Depression has become a global concern, and the COVID-19 pandemic has caused a further surge in its incidence. Broadly, there are two primary methods of detecting depression: task-based methods and Mobile Crowd Sensing (MCS)-based methods. These two approaches, when integrated, can complement each other. This paper proposes a novel approach for depression detection that combines real-time MCS and task-based mechanisms. We aim to design an end-to-end machine learning pipeline involving multimodal data collection, feature extraction, feature selection, fusion, and classification to distinguish between depressed and non-depressed subjects. For this purpose, we created a real-world dataset of depressed and non-depressed subjects. We experimented with features from multiple modalities, feature selection techniques, fused features, and machine learning classifiers such as Logistic Regression and Support Vector Machines (SVM). Our findings suggest that combining features from multiple modalities performs better than any single data modality, with the best classification accuracy achieved when features from all three data modalities are fused. The feature selection method based on Pearson's correlation coefficients improved accuracy in comparison with the other methods, and SVM yielded the best accuracy, at 86%. Our proposed approach was also applied to a benchmark dataset, where the results demonstrated that the multimodal approach compares favorably in performance with state-of-the-art depression recognition techniques.
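The correlation-based feature selection the authors found most effective can be sketched generically: score each feature by the absolute Pearson correlation with the label and keep the top k. The function names and the top-k cutoff below are illustrative, not taken from the paper.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_by_correlation(features, target, k=10):
    """Rank feature columns (dict of name -> values) by |Pearson r|
    against the label and keep the names of the top k."""
    scored = sorted(
        ((abs(pearson_r(col, target)), name) for name, col in features.items()),
        reverse=True,
    )
    return [name for _, name in scored[:k]]
```

A filter method like this is cheap and classifier-agnostic, which makes it a natural fit before fusing features from several modalities.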
Affiliation(s)
- Ravi Prasad Thati
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Abhishek Singh Dhadwal
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Praveen Kumar
- Department of Computer Science and Engineering, Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440010 Maharashtra India
- Sainaba P
- Department of Applied Psychology, Central University of Tamil Nadu, Tamilnadu, India
15
Schultebraucks K, Yadav V, Shalev AY, Bonanno GA, Galatzer-Levy IR. Deep learning-based classification of posttraumatic stress disorder and depression following trauma utilizing visual and auditory markers of arousal and mood. Psychol Med 2022; 52:957-967. [PMID: 32744201 DOI: 10.1017/s0033291720002718] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
BACKGROUND Visual and auditory signs of patient functioning have long been used for clinical diagnosis, treatment selection, and prognosis. Direct measurement and quantification of these signals can improve the consistency, sensitivity, and scalability of clinical assessment. Here, we investigate whether machine learning-based computer vision (CV), semantic, and acoustic analysis can capture clinical features from free-speech responses to a brief interview 1 month post-trauma that accurately classify major depressive disorder (MDD) and posttraumatic stress disorder (PTSD). METHODS N = 81 patients admitted to the emergency department (ED) of a Level-1 Trauma Unit following a life-threatening traumatic event participated in an open-ended qualitative interview with a para-professional about their experience 1 month following admission. A deep neural network was used to extract facial features of emotion and their intensity, movement parameters, speech prosody, and natural language content. These features served as inputs to classify PTSD and MDD cross-sectionally. RESULTS Both video- and audio-based markers contributed to good discriminatory classification accuracy. The algorithm discriminated PTSD status at 1 month after ED admission with an AUC of 0.90 (weighted average precision = 0.83, recall = 0.84, f1-score = 0.83) and depression status at 1 month after ED admission with an AUC of 0.86 (weighted average precision = 0.83, recall = 0.82, f1-score = 0.82). CONCLUSIONS Direct clinical observation during post-trauma free speech using deep learning identifies digital markers that can be used to classify MDD and PTSD status.
Affiliation(s)
- Katharina Schultebraucks
- Department of Emergency Medicine, Vagelos School of Physicians and Surgeons, Columbia University Irving Medical Center, New York, New York, USA
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, USA
- Data Science Institute, Columbia University, New York, New York, USA
- Arieh Y Shalev
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, USA
- George A Bonanno
- Department of Counseling and Clinical Psychology, Teachers College, Columbia University, New York, New York, USA
- Isaac R Galatzer-Levy
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, USA
- AiCure, New York, New York, USA
16
Akinci E, Wieser MO, Vanscheidt S, Diop S, Flasbeck V, Akinci B, Stiller C, Juckel G, Mavrogiorgou P. Impairments of Social Interaction in Depressive Disorder. Psychiatry Investig 2022; 19:178-189. [PMID: 35196828 PMCID: PMC8958205 DOI: 10.30773/pi.2021.0289] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Accepted: 11/07/2021] [Indexed: 11/27/2022] Open
Abstract
OBJECTIVE Despite numerous findings on altered emotion recognition and dysfunctional social interaction behavior in depressive patients, many of the relationships involved remain unclear. METHODS In this pilot study, 20 depressive patients (mean±SD age, 38.4±14.2 years) and 20 healthy subjects (38.9±15.3 years), each in dyads, were videographed. We then analyzed their social interaction behavior and emotion processing in terms of emotion recognition, their own emotional experience, and the expression of emotions under the conditions of a semi-structured experimental paradigm. RESULTS Patients showed significantly greater impairment in the dimensions of social interaction behavior (i.e., attention, interest, and activity), and their interaction behavior was characterized by neutral affectivity, silence, and avoidance of direct eye contact. This interactive behavioral style was statistically related to depressive psychopathology. There were no group differences in emotion recognition. CONCLUSION Impairments in the non-verbal and verbal social interaction behavior of depressive patients therefore appear not to be driven primarily by deficits in basic emotion-recognition skills.
Affiliation(s)
- Erhan Akinci
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Max-Oskar Wieser
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Simon Vanscheidt
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Shirin Diop
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Vera Flasbeck
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Burhan Akinci
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Cora Stiller
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Georg Juckel
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
- Paraskevi Mavrogiorgou
- Department of Psychiatry, Ruhr-University Bochum, LWL-University Hospital, Bochum, Germany
17
Terhürne P, Schwartz B, Baur T, Schiller D, Eberhardt ST, André E, Lutz W. Validation and application of the Non-Verbal Behavior Analyzer: An automated tool to assess non-verbal emotional expressions in psychotherapy. Front Psychiatry 2022; 13:1026015. [PMID: 36386975 PMCID: PMC9650367 DOI: 10.3389/fpsyt.2022.1026015] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Accepted: 10/12/2022] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Emotions play a key role in psychotherapy. However, examining emotional states via self-report questionnaires is problematic: the assessment usually takes place after the actual emotion has been experienced, which may introduce bias, and continuous human ratings are time- and cost-intensive. Using the AI-based software package Non-Verbal Behavior Analyzer (NOVA), video-based recognition of arousal and valence can be applied in naturalistic psychotherapeutic settings. In this study, four emotion recognition models (ERMs), each based on a specific feature set (facial: OpenFace, OpenFace-Aureg; body: OpenPose-Activation, OpenPose-Energy), were developed and compared in their ability to produce arousal and valence scores that correlate with PANAS emotion scores, processes of change (interpersonal experience, coping experience, affective experience), and symptoms (depression and anxiety on the HSCL-11). MATERIALS AND METHODS A total of 183 patient therapy videos were divided into a training sample (55 patients), a test sample (50 patients), and a holdout sample (78 patients). The best ERM was selected for further analyses. ERM-based arousal and valence scores were then correlated with patient and therapist ratings of emotions and processes of change. Furthermore, regression models were used to examine arousal and valence as predictors of symptom severity in depression and anxiety. RESULTS The ERM based on OpenFace showed the best agreement with the human coder ratings. Arousal and valence correlated significantly with therapists' ratings of sadness, shame, anxiety, and relaxation, but not with patients' ratings of their own emotions. Furthermore, a significant negative correlation indicated that negative valence was associated with higher affective experience. Negative valence was found to significantly predict higher anxiety, but not depression, scores.
CONCLUSION This study shows that emotion recognition with NOVA can be used to generate ERMs associated with patient emotions, affective experiences, and symptoms. Nevertheless, clear limitations remain: the ERMs need to be improved using larger databases of sessions, and their validity needs to be investigated further in different samples and applications. Future research should also use ERMs to examine emotional synchrony between patient and therapist.
Affiliation(s)
- Patrick Terhürne
- Clinical Psychology and Psychotherapy, University of Trier, Trier, Germany
- Brian Schwartz
- Clinical Psychology and Psychotherapy, University of Trier, Trier, Germany
- Tobias Baur
- Chair for Human Centered Artificial Intelligence, Augsburg University, Augsburg, Germany
- Dominik Schiller
- Chair for Human Centered Artificial Intelligence, Augsburg University, Augsburg, Germany
- Elisabeth André
- Chair for Human Centered Artificial Intelligence, Augsburg University, Augsburg, Germany
- Wolfgang Lutz
- Clinical Psychology and Psychotherapy, University of Trier, Trier, Germany
18
Tadalagi M, Joshi AM. AutoDep: automatic depression detection using facial expressions based on linear binary pattern descriptor. Med Biol Eng Comput 2021; 59:1339-1354. [PMID: 34091864 DOI: 10.1007/s11517-021-02358-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Accepted: 03/26/2021] [Indexed: 11/25/2022]
Abstract
The psychological health of a person plays an important role in their daily life activities. This paper addresses depression detection with a machine learning model based on the patient's facial expressions. Some research has already been done on vision-based depression detection methods, but those are illumination-variant. This paper uses feature extraction with the Local Binary Pattern (LBP) descriptor, which is illumination-invariant. The Viola-Jones algorithm is used for face detection, and a support vector machine (SVM) is combined with the LBP descriptor for classification, forming a complete model for depression level detection. The proposed method captures the frontal face in videos of subjects, and facial features are extracted from each frame. The facial features are then analyzed to detect depression levels with a post-processing model. The performance of the proposed system is evaluated using machine learning algorithms in MATLAB. For a real-time system design, it is necessary to test it on a hardware platform; the LBP descriptor has therefore been implemented on an FPGA using Xilinx VIVADO 16.4. The results of the proposed method show satisfactory performance and accuracy for depression detection in comparison with similar previous work.
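The illumination invariance of LBP comes from thresholding each pixel's neighbors against the center, so any monotonic brightness shift leaves the codes unchanged. A minimal sketch of the basic 3x3, radius-1 variant (the paper's exact LBP configuration, e.g. radius and sampling points, may differ):

```python
def lbp_image(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is replaced by
    an 8-bit code built by thresholding its 8 neighbors against the center.
    img: 2D list of grayscale values; returns the codes for interior pixels."""
    # Clockwise neighbor offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= center:
                    code |= 1 << bit
            out[i - 1][j - 1] = code
    return out
```

In a pipeline like the one described, a histogram of these codes over the detected face region would form the feature vector fed to the SVM; the purely integer compare-and-shift structure is also what makes the descriptor cheap to implement on an FPGA.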
Affiliation(s)
- Amit M Joshi
- Malaviya National Institute of Technology, Jaipur, India.
19
Galatzer-Levy I, Abbas A, Ries A, Homan S, Sels L, Koesmahargyo V, Yadav V, Colla M, Scheerer H, Vetter S, Seifritz E, Scholz U, Kleim B. Validation of Visual and Auditory Digital Markers of Suicidality in Acutely Suicidal Psychiatric Inpatients: Proof-of-Concept Study. J Med Internet Res 2021; 23:e25199. [PMID: 34081022 PMCID: PMC8212625 DOI: 10.2196/25199] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 12/15/2020] [Accepted: 03/16/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Multiple symptoms of suicide risk have been assessed based on visual and auditory information, including flattened affect, reduced movement, and slowed speech. Objective quantification of such symptomatology from novel data sources can increase the sensitivity, scalability, and timeliness of suicide risk assessment. OBJECTIVE We aimed to examine measurements extracted from video interviews using open-source deep learning algorithms to quantify facial, vocal, and movement behaviors in relation to suicide risk severity in recently admitted patients following a suicide attempt. METHODS We utilized video to quantify facial, vocal, and movement markers associated with mood, emotion, and motor functioning from a structured clinical conversation in 20 patients admitted to a psychiatric hospital following a suicide attempt. Measures were calculated using open-source deep learning algorithms for processing facial expressivity, head movement, and vocal characteristics. Derived digital measures of flattened affect, reduced movement, and slowed speech were compared to suicide risk assessed with the Beck Scale for Suicide Ideation, controlling for age and sex, using multiple linear regression. RESULTS Suicide severity was associated with multiple visual and auditory markers, including speech prevalence (β=-0.68, P=.02, r2=0.40), overall expressivity (β=-0.46, P=.10, r2=0.27), and head movement measured as head pitch variability (β=-1.24, P=.006, r2=0.48) and head yaw variability (β=-0.54, P=.06, r2=0.32). CONCLUSIONS Digital measurements of facial affect, movement, and speech prevalence demonstrated strong effect sizes and linear associations with the severity of suicidal ideation.
Affiliation(s)
- Isaac Galatzer-Levy
- Research and Development, AiCure, New York, NY, United States
- Psychiatry, New York University School of Medicine, New York, NY, United States
- Anzar Abbas
- Research and Development, AiCure, New York, NY, United States
- Anja Ries
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich, Zurich, Switzerland
- Stephanie Homan
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich, Zurich, Switzerland
- Laura Sels
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Vijay Yadav
- Research and Development, AiCure, New York, NY, United States
- Michael Colla
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Hanne Scheerer
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Stefan Vetter
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Erich Seifritz
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich, Zurich, Switzerland
- Urte Scholz
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Birgit Kleim
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich, Zurich, Switzerland
20
Onie S, Li X, Liang M, Sowmya A, Larsen ME. The Use of Closed-Circuit Television and Video in Suicide Prevention: Narrative Review and Future Directions. JMIR Ment Health 2021; 8:e27663. [PMID: 33960952 PMCID: PMC8140380 DOI: 10.2196/27663] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 03/17/2021] [Accepted: 03/17/2021] [Indexed: 12/30/2022] Open
Abstract
BACKGROUND Suicide is a recognized public health issue, with approximately 800,000 people dying by suicide each year. Among the different technologies used in suicide research, closed-circuit television (CCTV) and video have been used for a wide array of applications, including assessing crisis behaviors at metro stations and using computer vision to identify a suicide attempt in progress. However, there has been no review of suicide research and interventions using CCTV and video. OBJECTIVE The objective of this study was to review the literature to understand how CCTV and video data have been used in understanding and preventing suicide. To more fully capture progress in the field, we also report on an ongoing study that responds to a gap identified in the narrative review by using a computer vision-based system to identify behaviors prior to a suicide attempt. METHODS We conducted a search using the keywords "suicide," "cctv," and "video" on PubMed, Inspec, and Web of Science. We included any study that used CCTV or video footage to understand or prevent suicide. If a study fell into our area of interest, we included it regardless of quality, as our goal was to understand the scope of how CCTV and video had been used rather than to quantify any specific effect size; we noted shortcomings in design and analyses when discussing the studies. RESULTS The review found that CCTV and video have primarily been used in 3 ways: (1) to identify risk factors for suicide (eg, inferring depression from facial expressions), (2) to understand suicide after an attempt (eg, forensic applications), and (3) as part of an intervention (eg, using computer vision and automated systems to identify if a suicide attempt is in progress). Furthermore, work in progress demonstrates how behaviors prior to an attempt can be identified at a hotspot, an important gap identified by papers in the literature.
CONCLUSIONS Thus far, CCTV and video have been used in a wide array of applications, most notably in designing automated detection systems, with the field heading toward an automated detection system for early intervention. Despite many challenges, we show promising progress in developing an automated detection system for preattempt behaviors, which may allow for early intervention.
Affiliation(s)
- Sandersan Onie
- Black Dog Institute, University of New South Wales, Sydney, Australia
- Xun Li
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Morgan Liang
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Mark Erik Larsen
- Black Dog Institute, University of New South Wales, Sydney, Australia
21
Abbas A, Sauder C, Yadav V, Koesmahargyo V, Aghjayan A, Marecki S, Evans M, Galatzer-Levy IR. Remote Digital Measurement of Facial and Vocal Markers of Major Depressive Disorder Severity and Treatment Response: A Pilot Study. Front Digit Health 2021; 3:610006. [PMID: 34713091 PMCID: PMC8521884 DOI: 10.3389/fdgth.2021.610006] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Accepted: 02/19/2021] [Indexed: 12/21/2022] Open
Abstract
Objectives: Multiple machine learning-based visual and auditory digital markers have demonstrated associations between major depressive disorder (MDD) status and severity. The current study examines whether such measurements can quantify response to antidepressant treatment (ADT) with selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs). Methods: Visual and auditory markers were acquired through an automated smartphone task that measures facial, vocal, and head movement characteristics across 4 weeks of treatment (with time points at baseline, 2 weeks, and 4 weeks) on ADT (n = 18). MDD diagnosis was confirmed using the Mini-International Neuropsychiatric Interview (MINI), and the Montgomery-Åsberg Depression Rating Scale (MADRS) was collected concurrently to assess changes in MDD severity. Results: Patient responses to ADT demonstrated clinically and statistically significant changes in the MADRS [F (2, 34) = 51.62, p < 0.0001]. Additionally, patients demonstrated significant increases in multiple digital markers including facial expressivity, head movement, and amount of speech. Finally, patients demonstrated significantly decreased frequency of fear and anger facial expressions. Conclusion: Digital markers associated with MDD demonstrate validity as measures of treatment response.
Affiliation(s)
- Colin Sauder
- Adams Clinical, Watertown, MA, United States
- Karuna Therapeutics, Boston, MA, United States
- Isaac R. Galatzer-Levy
- AiCure, New York, NY, United States
- Psychiatry, New York University School of Medicine, New York, NY, United States
22
What You Say or How You Say It? Depression Detection Through Joint Modeling of Linguistic and Acoustic Aspects of Speech. Cognit Comput 2021. [DOI: 10.1007/s12559-020-09808-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Depression is one of the most common mental health issues, affecting more than 4% of the world's population according to recent estimates. This article shows that the joint analysis of linguistic and acoustic aspects of speech makes it possible to discriminate between depressed and nondepressed speakers with an accuracy above 80%. The approach is based on networks designed for sequence modeling (bidirectional Long Short-Term Memory networks) and on multimodal analysis methodologies (late fusion, joint representation, and gated multimodal units). The experiments were performed over a corpus of 59 interviews (roughly 4 hours of material) involving 29 individuals diagnosed with depression and 30 control participants. Beyond the overall accuracy of 80%, the results show that multimodal approaches outperform unimodal ones, owing to people's tendency to manifest their condition through one modality only, which is a source of diversity across unimodal approaches. The experiments also show that it is possible to measure the "confidence" of the approach and to automatically identify a subset of the test data on which performance is above a predefined threshold. Overall, depression can be detected effectively with unobtrusive and inexpensive technologies based on the automatic analysis of speech and language.
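Of the fusion strategies this abstract lists, the gated multimodal unit is the easiest to sketch: each modality gets its own representation, and a learned sigmoid gate decides, per fused dimension, how much of each to keep. A rough numpy sketch of the forward pass (dimensions and random weights are purely illustrative; in the paper the inputs would come from trained bidirectional LSTMs):

```python
import numpy as np

def gmu(v, a, Wv, Wa, Wz):
    """Gated multimodal unit: mix two modality representations
    through a gate computed from both inputs."""
    hv = np.tanh(Wv @ v)   # linguistic branch representation
    ha = np.tanh(Wa @ a)   # acoustic branch representation
    # sigmoid gate in (0, 1), one value per fused dimension
    z = 1.0 / (1.0 + np.exp(-(Wz @ np.concatenate([v, a]))))
    return z * hv + (1.0 - z) * ha  # per-dimension convex mixture

# illustrative dimensions: 8-d linguistic, 5-d acoustic, 4-d fused
rng = np.random.default_rng(0)
Wv = rng.normal(size=(4, 8))
Wa = rng.normal(size=(4, 5))
Wz = rng.normal(size=(4, 13))
fused = gmu(rng.normal(size=8), rng.normal(size=5), Wv, Wa, Wz)
```

Because the gate is input-dependent, the unit can lean on whichever modality carries the signal for a given speaker, which is one way to exploit the observation that people tend to manifest their condition through one modality only.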
23
Milenkovic I, Bartova L, Papageorgiou K, Kasper S, Traub-Weidinger T, Winkler D. Case Report: Bupropion Reduces the [ 123I]FP-CIT Binding to Striatal Dopamine Transporter. Front Psychiatry 2021; 12:631357. [PMID: 33692710 PMCID: PMC7937912 DOI: 10.3389/fpsyt.2021.631357] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Accepted: 01/29/2021] [Indexed: 11/13/2022] Open
Abstract
The diagnosis of parkinsonian syndromes in patients with severe depression may be challenging due to overlapping clinical phenomena, especially regarding psychomotor and affective symptoms. [123I]FP-CIT-SPECT is a useful method to detect degenerative parkinsonian disorders. However, some drugs may influence tracer binding and thus alter the result. We present the case of a 56-year-old female inpatient with difficult-to-treat late-onset depression. Since the current major depressive episode (MDE) was accompanied by psychotic features, including delusions and hallucinations, as well as hypokinesia, stooped posture, and hypomimia, an underlying degenerative parkinsonism was suspected. The pathologic [123I]FP-CIT-SPECT scan under ongoing antidepressant therapy with bupropion 300 mg/day (serum level of bupropion 43 ng/ml and hydroxybupropion 2,332 ng/ml) showed reduced [123I]FP-CIT binding throughout the striatum. The scan normalized after a wash-out phase of four half-lives (serum level of bupropion 0.4 ng/ml and hydroxybupropion 80.5 ng/ml). Our report should serve as a cautionary note for the use of [123I]FP-CIT in depressed patients, particularly those treated with drugs that interfere with the dopamine transporter. Furthermore, our case argues for consulting a movement disorder specialist prior to dopamine transporter imaging.
Affiliation(s)
- Ivan Milenkovic
- Department of Neurology, Medical University of Vienna, Vienna, Austria
- Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Lucie Bartova
- Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Siegfried Kasper
- Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Tatjana Traub-Weidinger
- Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Dietmar Winkler
- Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
24
Van de Velde N, Kappen M, Koster EHW, Hoorelbeke K, Tandt H, Verslype P, Baeken C, De Raedt R, Lemmens G, Vanderhasselt MA. Cognitive remediation following electroconvulsive therapy in patients with treatment resistant depression: randomized controlled trial of an intervention for relapse prevention - study protocol. BMC Psychiatry 2020; 20:453. [PMID: 32938410 PMCID: PMC7493867 DOI: 10.1186/s12888-020-02856-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 09/03/2020] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND Major depressive episode (MDE) is worldwide one of the most prevalent and disabling mental health conditions. In cases of persistent non-response to treatment, electroconvulsive therapy (ECT) is a safe and effective treatment strategy with high response rates. Unfortunately, longitudinal data show low sustained response rates, with 6-month relapse rates as high as 50% under existing relapse prevention strategies. Cognitive side effects of ECT, even though transient, might trigger mechanisms that increase relapse in patients who initially responded to ECT. Among these side effects, reduced cognitive control is an important neurobiologically driven vulnerability factor for depression. As such, cognitive control training (CCT) holds promise as a non-pharmacological strategy to improve the long-term effects of ECT (i.e., increase remission and reduce depression relapse). METHOD/DESIGN Eighty-eight patients aged between 18 and 70 years with MDE who are treated with ECT will be included in this randomized controlled trial (RCT). Following (partial) response to ECT treatment (at least a 25% reduction in clinical symptoms), patients will be randomly assigned to computer-based CCT or an active placebo control. A first aim of this RCT is to assess the effects of CCT compared to an active placebo condition on depression symptomatology, cognitive complaints, and quality of life. Secondly, we will monitor patients every 2 weeks for a period of 6 months following CCT/active placebo, allowing the detection of potential relapse of depression. Thirdly, we will assess patient evaluation of the addition of cognitive remediation to ECT using qualitative interview methods (satisfaction, acceptability, and appropriateness). Finally, in order to further advance our understanding of the mechanisms underlying the effects of CCT, exploratory analyses will be conducted using video footage collected during the CCT/active control phase of the study.
DISCUSSION Cognitive remediation will be performed following response to ECT, and an extensive follow-up period will be employed. Positive findings would benefit patients not only by decreasing relapse but also by increasing the acceptability of ECT through reducing the burden of cognitive side effects. TRIAL REGISTRATION The study is registered with ClinicalTrials.gov (ID: NCT04383509; registration date: 12.05.2020).
Affiliation(s)
- Nele Van de Velde
- Department of Psychiatry, Ghent University Hospital, C. Heymanslaan 10, 9000 Ghent, Belgium
- Mitchel Kappen
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Ghent Experimental Psychiatry (GHEP) lab, Ghent University, Ghent, Belgium
- Ernst H. W. Koster
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Kristof Hoorelbeke
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Hannelore Tandt
- Department of Psychiatry, Ghent University Hospital, C. Heymanslaan 10, 9000 Ghent, Belgium
- Pieter Verslype
- Department of Anesthesiology, Ghent University Hospital, Ghent, Belgium
- Chris Baeken
- Department of Psychiatry, Ghent University Hospital, C. Heymanslaan 10, 9000 Ghent, Belgium
- Ghent Experimental Psychiatry (GHEP) lab, Ghent University, Ghent, Belgium
- Department of Psychiatry, University Hospital (UZBrussel), Brussels, Belgium
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Rudi De Raedt
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Gilbert Lemmens
- Department of Psychiatry, Ghent University Hospital, C. Heymanslaan 10, 9000 Ghent, Belgium
- Marie-Anne Vanderhasselt
- Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
- Ghent Experimental Psychiatry (GHEP) lab, Ghent University, Ghent, Belgium
25
Gupta T, Haase CM, Strauss GP, Cohen AS, Ricard JR, Mittal VA. Alterations in facial expressions of emotion: Determining the promise of ultrathin slicing approaches and comparing human and automated coding methods in psychosis risk. Emotion 2020; 22:714-724. [PMID: 32584067 DOI: 10.1037/emo0000819] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Alterations in facial expressions of emotion are a hallmark of psychopathology and may be present before the onset of mental illness. Technological advances have spurred interest in examining alterations based on "thin slices" of behavior using automated approaches. However, questions remain. First, can alterations be detected in ultrathin slices of behavior? Second, how do automated approaches converge with human coding techniques? The present study examined ultrathin (i.e., 1-min) slices of video-recorded clinical interviews of 42 individuals at clinical high risk (CHR) for psychosis and 42 matched controls. Facial expressions of emotion (e.g., joy, anger) were examined using two automated facial analysis programs and coded by trained human raters (using the Expressive Emotional Behavior Coding System). Results showed that ultrathin (i.e., 1-min) slices of behavior were sufficient to reveal alterations in facial expressions of emotion, specifically blunted joy expressions in individuals at CHR (with supplementary analyses probing links with attenuated positive symptoms and functioning). Furthermore, both automated analysis programs converged in their ability to detect blunted joy expressions and were consistent with human coding at the level of both second-by-second and aggregate data. Finally, there were areas of divergence across approaches for other emotional expressions beyond joy. These data suggest that ultrathin slices of behavior can yield clues about emotional dysfunction. Further, automated approaches (which do not require lengthy training and coder time and lend themselves well to mobile assessment and computational modeling) show promise, but careful evaluation of convergence with human coding is needed.
26
Park S, Lee K, Lim JA, Ko H, Kim T, Lee JI, Kim H, Han SJ, Kim JS, Park S, Lee JY, Lee EC. Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks. SENSORS (BASEL, SWITZERLAND) 2020; 20:E1199. [PMID: 32098261 PMCID: PMC7070510 DOI: 10.3390/s20041199] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 02/20/2020] [Accepted: 02/20/2020] [Indexed: 12/05/2022]
Abstract
Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further confirm discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles with three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face. On the other hand, posed smiles showed higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with attention to left-sided asymmetry in the upper region.
Affiliation(s)
- Seho Park
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Dental Research Institute, Seoul National University, School of Dentistry, Seoul 08826, Korea
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Kunyoung Lee
- Department of Computer Science, Sangmyung University, Seoul 03016, Korea
- Jae-A Lim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Hyunwoong Ko
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Dental Research Institute, Seoul National University, School of Dentistry, Seoul 08826, Korea
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Taehoon Kim
- Seoul National University College of Medicine, Seoul 03080, Korea
- Jung-In Lee
- Seoul National University College of Medicine, Seoul 03080, Korea
- Hakrim Kim
- Seoul National University College of Medicine, Seoul 03080, Korea
- Seong-Jae Han
- Seoul National University College of Medicine, Seoul 03080, Korea
- Jeong-Shim Kim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Soowon Park
- Department of Education, Sejong University, Seoul 05006, Korea
- Jun-Young Lee
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 03080, Korea
- Eui Chul Lee
- Department of Human Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Korea
27
Bhatia S, Goecke R, Hammal Z, Cohn JF. Automated Measurement of Head Movement Synchrony during Dyadic Depression Severity Interviews. Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition 2019. [PMID: 31745390 DOI: 10.1109/fg.2019.8756509] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With few exceptions, most research in automated assessment of depression has considered only the patient's behavior to the exclusion of the therapist's behavior. We investigated the interpersonal coordination (synchrony) of head movement during patient-therapist clinical interviews. Our sample consisted of patients diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at 7-week intervals over a period of 21 weeks. For each session, patient and therapist 3D head movement was tracked from 2D videos. Head angles in the vertical (pitch) and horizontal (yaw) axes were used to measure head movement. Interpersonal coordination of head movement between patients and therapists was measured using windowed cross-correlation. Patterns of coordination in head movement were investigated using the peak picking algorithm. Changes in head movement coordination over the course of treatment were measured using a hierarchical linear model (HLM). The results indicated a strong effect for patient-therapist head movement synchrony. Within-dyad variability in head movement coordination was higher than between-dyad variability, meaning that differences over time within a dyad were larger than differences between dyads. Head movement synchrony did not change over the course of treatment with change in depression severity. To the best of our knowledge, this study is the first attempt to analyze the mutual influence of patient-therapist head movement in relation to depression severity.
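Windowed cross-correlation, as used here for synchrony, slides a window along the two head-movement series and computes the correlation at a range of lead/lag offsets; the lag of the peak correlation in each window indicates who is leading. A simplified numpy sketch (window, step, and lag values are illustrative, not the paper's settings):

```python
import numpy as np

def windowed_xcorr(x, y, win=30, step=15, max_lag=5):
    """For each window position, Pearson correlation between x and a
    lagged copy of y, for every lag in [-max_lag, max_lag].

    Returns an array of shape (n_windows, 2 * max_lag + 1).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = min(len(x), len(y))
    rows = []
    # start late / stop early so every lagged window stays in bounds
    for start in range(max_lag, n - win - max_lag + 1, step):
        a = x[start:start + win]
        row = [np.corrcoef(a, y[start + lag:start + lag + win])[0, 1]
               for lag in range(-max_lag, max_lag + 1)]
        rows.append(row)
    return np.array(rows)
```

A peak-picking step over each row (as in the paper) would then extract, per window, the lag and strength of maximum coordination.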
Affiliation(s)
- Shalini Bhatia
- Human-Centred Technology Research Centre, University of Canberra, Canberra, Australia
- Roland Goecke
- Human-Centred Technology Research Centre, University of Canberra, Canberra, Australia
- Zakia Hammal
- Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, USA
28
Pearlstein SL, Taylor CT, Stein MB. Facial Affect and Interpersonal Affiliation: Displays of Emotion During Relationship Formation in Social Anxiety Disorder. Clin Psychol Sci 2019; 7:826-839. [PMID: 31565542 DOI: 10.1177/2167702619825857] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Social anxiety disorder (SAD) often involves difficulty developing relationships. Facial expressions are important in relationship formation, but data are limited regarding facial display production among persons with SAD during social interaction. The current study compared facial displays of individuals diagnosed with SAD (n=41) to control participants (n=24) as they interacted with a confederate; confederates and observers then rated their desire for future interaction with participants. Automated software used the Facial Action Coding System (FACS; Ekman & Friesen, 1978) to classify displays. During portions of the interaction that involved listening to partners, the SAD group smiled less frequently and less intensely than controls, and lower smiling was associated with others' lower desire for future interaction with participants. Diminished positive facial affect in response to interaction partners may disrupt relationship formation in SAD and may serve as an effective treatment target.
Affiliation(s)
- Sarah L Pearlstein
- San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology
- Charles T Taylor
- San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology
- University of California, San Diego
- Murray B Stein
- San Diego State University/University of California, San Diego Joint Doctoral Program in Clinical Psychology
- University of California, San Diego
29
Gavrilescu M, Vizireanu N. Predicting Depression, Anxiety, and Stress Levels from Videos Using the Facial Action Coding System. SENSORS (BASEL, SWITZERLAND) 2019; 19:E3693. [PMID: 31450687 PMCID: PMC6749518 DOI: 10.3390/s19173693] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Revised: 08/11/2019] [Accepted: 08/20/2019] [Indexed: 12/24/2022]
Abstract
We present the first study in the literature that has aimed to determine Depression Anxiety Stress Scale (DASS) levels by analyzing facial expressions using Facial Action Coding System (FACS) by means of a unique noninvasive architecture on three layers designed to offer high accuracy and fast convergence: in the first layer, Active Appearance Models (AAM) and a set of multiclass Support Vector Machines (SVM) are used for Action Unit (AU) classification; in the second layer, a matrix is built containing the AUs' intensity levels; and in the third layer, an optimal feedforward neural network (FFNN) analyzes the matrix from the second layer in a pattern recognition task, predicting the DASS levels. We obtained 87.2% accuracy for depression, 77.9% for anxiety, and 90.2% for stress. The average prediction time was 64 s, and the architecture could be used in real time, allowing health practitioners to evaluate the evolution of DASS levels over time. The architecture could discriminate with 93% accuracy between healthy subjects and those affected by Major Depressive Disorder (MDD) or Post-traumatic Stress Disorder (PTSD), and 85% for Generalized Anxiety Disorder (GAD). For the first time in the literature, we determined a set of correlations between DASS, induced emotions, and FACS, which led to an increase in accuracy of 5%. When tested on AVEC 2014 and ANUStressDB, the method offered 5% higher accuracy, sensitivity, and specificity compared to other state-of-the-art methods.
Affiliation(s)
- Mihai Gavrilescu
- Department of Telecommunications, Faculty of Electronics, Telecommunications and Information Technology, University "Politehnica", Bucharest 061071, Romania
- Nicolae Vizireanu
- Department of Telecommunications, Faculty of Electronics, Telecommunications and Information Technology, University "Politehnica", Bucharest 061071, Romania
Collapse
|
30
|
Harati S, Crowell A, Huang Y, Mayberg H, Nemati S. Classifying Depression Severity in Recovery From Major Depressive Disorder via Dynamic Facial Features. IEEE J Biomed Health Inform 2019; 24:815-824. [PMID: 31352356] [DOI: 10.1109/jbhi.2019.2930604]
Abstract
Major depressive disorder is a common psychiatric illness. At present, there are no objective, non-verbal, automated markers that can reliably track treatment response. Here, we explore the use of video analysis of facial expressivity in a cohort of severely depressed patients before and after deep brain stimulation (DBS), an experimental treatment for depression. We introduced a set of variability measurements to obtain unsupervised features from muted video recordings, which were then leveraged to build predictive models to classify three levels of severity in the patients' recovery from depression. Multiscale entropy was utilized to estimate the variability in pixel intensity level at various time scales. A dynamic latent variable model was utilized to learn a low-dimensional representation of factors that describe the dynamic relationship between high-dimensional pixels in each video frame and over time. Finally, a novel elastic net ordinal regression model was trained to predict the severity of depression, as independently rated by standard rating scales. Our results suggest that unsupervised features extracted from these video recordings, when incorporated in an ordinal regression predictor, can discriminate different levels of depression severity during ongoing DBS treatment. Objective markers of patient response to treatment have the potential to standardize treatment protocols and enhance the design of future clinical trials.
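The multiscale entropy step mentioned in this abstract can be sketched as follows. The routine below is a generic sample-entropy implementation with conventional defaults (m = 2, tolerance r = 0.2 times the SD of the original series), not the authors' code, and it is applied to a synthetic white-noise series in place of pixel-intensity time series.

```python
# Generic multiscale (sample) entropy sketch; parameters are conventional
# defaults, and the input series is synthetic white noise.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn: -log of the conditional probability that sequences matching
    for m points (Chebyshev distance < r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(n - m)])
        c = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.sum(d < r)
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3, 4), m=2):
    """Coarse-grain by non-overlapping means at each scale, then take SampEn,
    keeping r fixed from the original series as is standard for MSE."""
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std()
    out = []
    for s in scales:
        cg = x[:len(x) - len(x) % s].reshape(-1, s).mean(axis=1)
        out.append(sample_entropy(cg, m=m, r=r))
    return out

rng = np.random.default_rng(1)
mse = multiscale_entropy(rng.normal(size=1000))  # white noise as a stand-in
```

For white noise, entropy typically decreases with scale, which is one reason the profile across scales (rather than any single value) is used as a feature.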
31
Babrak LM, Menetski J, Rebhan M, Nisato G, Zinggeler M, Brasier N, Baerenfaller K, Brenzikofer T, Baltzer L, Vogler C, Gschwind L, Schneider C, Streiff F, Groenen PM, Miho E. Traditional and Digital Biomarkers: Two Worlds Apart? Digit Biomark 2019; 3:92-102. [PMID: 32095769] [PMCID: PMC7015353] [DOI: 10.1159/000502000]
Abstract
The identification and application of biomarkers in the clinical and medical fields have an enormous impact on society. The proliferation of digital devices and the rising popularity of health-related mobile apps have produced a new trove of biomarkers in large, diverse, and complex data. However, the unclear definition of digital biomarkers, of the relevant population groups, and of their intersection with traditional biomarkers hinders their discovery and validation. During the DayOne Workshop, with participants from academia and industry, we identified current issues in the field of digital biomarkers and put forth suggestions to address them. We examined similarities and differences between traditional and digital biomarkers in order to synchronize semantics, define their unique features, review current regulatory procedures, and describe novel applications that enable precision medicine.
Affiliation(s)
- Lmar M. Babrak
- FHNW University of Applied Sciences Northwestern Switzerland, Muttenz, Switzerland
- Joseph Menetski
- Foundation for the National Institutes of Health, North Bethesda, Maryland, USA
- Michael Rebhan
- Novartis Institutes for Biomedical Research, Basel, Switzerland
- DayOne, BaselArea.swiss, Basel, Switzerland
- Noé Brasier
- CMIO Research Group, University Hospital Basel, Basel, Switzerland
- Katja Baerenfaller
- Swiss Institute of Allergy and Asthma Research (SIAF), University of Zurich, and Swiss Institute of Bioinformatics (SIB), Davos, Switzerland
- Cornelia Schneider
- Clinical Pharmacy and Epidemiology, Department of Pharmaceutical Sciences, University of Basel, Basel, Switzerland
- Hospital Pharmacy, University Hospital Basel, Basel, Switzerland
- Peter M.A. Groenen
- DayOne, BaselArea.swiss, Basel, Switzerland
- Idorsia Pharmaceuticals Ltd., Translational Science, Allschwil, Switzerland
- Enkelejda Miho
- FHNW University of Applied Sciences Northwestern Switzerland, Muttenz, Switzerland
- DayOne, BaselArea.swiss, Basel, Switzerland
- aiNET GmbH, Basel, Switzerland
32
Barkus E, Badcock JC. A Transdiagnostic Perspective on Social Anhedonia. Front Psychiatry 2019; 10:216. [PMID: 31105596] [PMCID: PMC6491888] [DOI: 10.3389/fpsyt.2019.00216]
Abstract
Humans are highly social beings, yet people with social anhedonia experience reduced interest in or reward from social situations. Social anhedonia is a key facet of schizotypal personality, an important symptom of schizophrenia, and increasingly recognized as an important feature in a range of other psychological disorders. However, to date, there has been little examination of the similarities and differences in social anhedonia across diagnostic borders. Here, our goal was to conduct a selective review of social anhedonia in different psychological and life course contexts, including the psychosis continuum, depressive disorder, posttraumatic stress disorder, eating disorders, and autism spectrum disorders, along with developmental and neurobiological factors. Current evidence suggests that the nature and expression of social anhedonia vary across psychological disorders, with some groups showing deficient learning about, enjoyment from, and anticipation of the pleasurable aspects of social interactions, while for others some of these components appear to remain intact. However, study designs and methodologies are diverse, the roles of developmental and neurobiological factors are not routinely considered, and direct comparisons between diagnostic groups are rare, which prevents a more nuanced understanding of the underlying mechanisms involved. Future studies, parsing the wanting, liking, and learning components of social reward, will help to fill gaps in the current knowledge base. Consistent across disorders is diminished pleasure from social situations, subsequent withdrawal, and poorer social functioning in those who express social anhedonia. Nonetheless, feelings of loneliness often remain, which suggests the need for social connection is not entirely absent. Adolescence is a particularly important period of social and neural development and may provide a valuable window on the developmental origins of social anhedonia. Adaptive social functioning is key to recovery from mental health disorders; therefore, understanding the intricacies of social anhedonia will help to inform treatment and prevention strategies for a range of diagnostic categories.
Affiliation(s)
- Emma Barkus
- Cognitive Basis of Atypical Behaviour Initiative (CBABi), School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Johanna C. Badcock
- Centre for Clinical Research in Neuropsychiatry (CCRN), Division of Psychiatry, Faculty of Health and Medical Sciences, The University of Western Australia, Perth, WA, Australia
33
Rana R, Latif S, Gururajan R, Gray A, Mackenzie G, Humphris G, Dunn J. Automated screening for distress: A perspective for the future. Eur J Cancer Care (Engl) 2019; 28:e13033. [DOI: 10.1111/ecc.13033]
Affiliation(s)
- Rajib Rana
- University of Southern Queensland, Springfield, Queensland, Australia
- Siddique Latif
- University of Southern Queensland, Springfield, Queensland, Australia
- Raj Gururajan
- University of Southern Queensland, Springfield, Queensland, Australia
- Anthony Gray
- University of Southern Queensland, Springfield, Queensland, Australia
- Jeff Dunn
- University of Southern Queensland, Springfield, Queensland, Australia
- Griffith University, Brisbane, Queensland, Australia
- University of Technology Sydney, Sydney, New South Wales, Australia
34
Costa-Abreu MD, Bezerra GS. FAMOS: a framework for investigating the use of face features to identify spontaneous emotions. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0675-y]
35
Region-based facial representation for real-time Action Units intensity detection across datasets. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0645-4]
36
Leppanen J, Dapelo MM, Davies H, Lang K, Treasure J, Tchanturia K. Computerised analysis of facial emotion expression in eating disorders. PLoS One 2017; 12:e0178972. [PMID: 28575109] [PMCID: PMC5456367] [DOI: 10.1371/journal.pone.0178972]
Abstract
Background Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants. Method 297 participants with AN, BN, RecAN, and healthy controls were recruited. Participants watched film clips designed to elicit happy or sad emotions, and facial expressions were then analysed using FaceReader. Results The findings mirrored those from previous work showing that healthy control and RecAN participants expressed significantly more positive emotions during the positive clip compared to the AN group. There were no differences in emotion expression during the sad film clip. Discussion These findings support the use of computerised methods to analyse emotion expression in EDs. The findings also demonstrate that reduced positive emotion expression is likely to be associated with the acute stage of AN illness, with individuals with BN showing an intermediate profile.
Affiliation(s)
- Jenni Leppanen
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Marcela Marin Dapelo
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Helen Davies
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Katie Lang
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Janet Treasure
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Kate Tchanturia
- Department of Psychological Medicine, Institute of Psychology, Psychiatry, and Neuroscience, King’s College London, London, United Kingdom
- Illia State University, Department of Psychology, Tbilisi, Georgia
37
Tuck NL, Grant RCI, Sollers JJ, Booth RJ, Consedine NS. Higher resting heart rate variability predicts skill in expressing some emotions. Psychophysiology 2016; 53:1852-1857. [DOI: 10.1111/psyp.12755]
Affiliation(s)
- Natalie L. Tuck
- Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- John J. Sollers
- Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Roger J. Booth
- Molecular Medicine & Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Nathan S. Consedine
- Department of Psychological Medicine, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
38
Girard JM, Cohn JF, Jeni LA, Sayette MA, De la Torre F. Spontaneous facial expression in unscripted social interactions can be measured automatically. Behav Res Methods 2015; 47:1136-1147. [PMID: 25488104] [PMCID: PMC4461567] [DOI: 10.3758/s13428-014-0536-1]
Abstract
Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
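The two reliability criteria reported here, intraclass correlation on session-level AU base rates and Matthews correlation on frame-by-frame agreement, can be computed as in this hypothetical sketch with synthetic coder data. The ICC variant shown is ICC(3,1), a two-way mixed, consistency form; the abstract does not state which variant the authors used.

```python
# Sketch of two coder-agreement metrics on synthetic manual vs. automated
# AU codes: frame-level Matthews correlation and base-rate ICC(3,1).
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(2)
n_sessions, n_frames = 20, 500

# Synthetic manual codes plus an "automated" coder that mostly agrees.
manual = rng.random((n_sessions, n_frames)) < 0.15   # ~15% AU base rate
flip = rng.random((n_sessions, n_frames)) < 0.05     # 5% disagreement
auto = np.where(flip, ~manual, manual)

# Frame-by-frame agreement: Matthews correlation over all frames.
mcc = matthews_corrcoef(manual.ravel(), auto.ravel())

def icc_3_1(ratings):
    """ICC(3,1) from a two-way ANOVA decomposition; ratings: (targets, raters)."""
    t, k = ratings.shape
    mean_t = ratings.mean(axis=1, keepdims=True)
    mean_r = ratings.mean(axis=0, keepdims=True)
    grand = ratings.mean()
    ms_t = k * ((mean_t - grand) ** 2).sum() / (t - 1)
    ms_e = ((ratings - mean_t - mean_r + grand) ** 2).sum() / ((t - 1) * (k - 1))
    return (ms_t - ms_e) / (ms_t + (k - 1) * ms_e)

# Base-rate agreement: ICC over per-session occurrence proportions.
rates = np.column_stack([manual.mean(axis=1), auto.mean(axis=1)])
icc = icc_3_1(rates)
```

The contrast in the abstract (ICC 0.89 vs. mean MCC 0.61) reflects that base rates average out frame-level disagreements, so occurrence proportions are an easier target than frame-by-frame matching.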
Affiliation(s)
- Jeffrey M Girard
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Laszlo A Jeni
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Michael A Sayette
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 15260, USA
39
Abstract
Both the occurrence and intensity of facial expressions are critical to what the face reveals. While much progress has been made towards the automatic detection of facial expression occurrence, controversy exists about how to estimate expression intensity. The most straightforward approach is to train multiclass or regression models using intensity ground truth. However, collecting intensity ground truth is even more time consuming and expensive than collecting binary ground truth. As a shortcut, some researchers have proposed using the decision values of binary-trained maximum margin classifiers as a proxy for expression intensity. We provide empirical evidence that this heuristic is flawed in practice as well as in theory. Unfortunately, there are no shortcuts when it comes to estimating smile intensity: researchers must take the time to collect and train on intensity ground truth. However, if they do so, high reliability with expert human coders can be achieved. Intensity-trained multiclass and regression models outperformed binary-trained classifier decision values on smile intensity estimation across multiple databases and methods for feature extraction and dimensionality reduction. Multiclass models even outperformed binary-trained classifiers on smile occurrence detection.
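The contrast drawn in this abstract, binary-trained decision values as an intensity proxy versus models trained directly on intensity ground truth, can be sketched on toy data. Note the caveat: on idealized linear synthetic data like this, both scores happen to track intensity; the abstract's point is that the proxy breaks down on real data, which this sketch does not attempt to reproduce.

```python
# Toy comparison of a binary SVM's decision values vs. an intensity-trained
# regressor as smile-intensity estimators. Features are synthetic stand-ins
# for appearance features driven by a latent ordinal intensity.
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
n = 400
intensity = rng.integers(0, 6, size=n)            # 0 = absent, 1-5 = FACS A-E
X = intensity[:, None] + rng.normal(scale=2.0, size=(n, 8))

# Proxy: train on presence/absence only, read off decision-function values.
svm = SVC(kernel="linear").fit(X, intensity > 0)
proxy_scores = svm.decision_function(X)

# Intensity-trained alternative: regression on the ordinal labels themselves.
ridge = Ridge().fit(X, intensity)
ridge_scores = ridge.predict(X)

# Rank correlation with ground-truth intensity for each estimator.
r_proxy = spearmanr(proxy_scores, intensity).correlation
r_trained = spearmanr(ridge_scores, intensity).correlation
```

A rank correlation is used here because intensity labels are ordinal; the paper's evaluation against expert coders is more involved than this toy check.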
Affiliation(s)
- Jeffrey M. Girard
- Department of Psychology, University of Pittsburgh, 4322 Sennott Square, Pittsburgh, PA, USA 15260
- Jeffrey F. Cohn
- Department of Psychology, University of Pittsburgh, 4322 Sennott Square, Pittsburgh, PA, USA 15260
- The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, USA 15213
- Fernando De la Torre
- The Robotics Institute, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, USA 15213