1. Boucher A, Peters M, Jones GB. How Digital Solutions Might Provide a World of New Opportunities for Holistic and Empathic Support of Patients with Hidradenitis Suppurativa. Dermatol Ther (Heidelb) 2024. [PMID: 39042318] [DOI: 10.1007/s13555-024-01234-9]
Abstract
Hidradenitis suppurativa (HS) is a complex chronic relapsing inflammatory condition anchored in the hair follicle, wherein painful abscesses, nodules, and tunnels form under the skin with the potential for intermittent pus drainage and tissue scarring. Current estimates of prevalence are 1-4% globally, with the disease three times more prevalent in women and occurring at higher rates among Black populations. Patients with HS are also more likely to suffer from depression, anxiety, and loneliness, underscoring the need for carefully designed strategies for disease awareness and intervention. Delays in formal diagnosis, estimated at 7-10 years on average, impede timely provision of optimal care. Despite best intent, when patients present at a physician's office, stigmas relating to physical appearance can be exacerbated by negative interactions experienced by patients. In addition to long wait times and the dearth of available HS-expert dermatology professionals, patients perceive a heightened physician focus on two HS flare risk factors (smoking and body mass index [BMI]) as negatively impacting their care. Given the need for continual, personal, and sensitive patient support, herein we advocate for a re-examination of the approach to care and the leveraging of highly personalized digital support solutions. New medications that can directly or indirectly control elements of the disease and its comorbidities are also entering the marketplace. Collectively, we posit that these developments provide an opportunity for a holistic approach to the care of patients with HS, leading to long-term engagement and improved outcomes.
Affiliation(s)
- Annie Boucher
- Novartis Pharma AG, Lichtstrasse 35, 4056 Basel, Switzerland
- Martin Peters
- Novartis Pharma AG, Lichtstrasse 35, 4056 Basel, Switzerland
- Graham B Jones
- Novartis Pharmaceuticals, 250 Massachusetts Avenue, Cambridge, MA 02139, USA
- Clinical and Translational Science Institute, Tufts University Medical Center, 800 Washington Street, Boston, MA 02111, USA
2. Durtette A, Schmid F, Barrière S, Obert A, Lang J, Raucher-Chéné D, Gierski F, Kaladjian A, Henry A. Facial emotion recognition processes according to schizotypal personality traits: An eye-tracking study. Int J Psychophysiol 2023; 190:60-68. [PMID: 37385101] [DOI: 10.1016/j.ijpsycho.2023.06.006]
Abstract
Facial emotion recognition has been shown to be impaired among patients with schizophrenia and, to a lesser extent, among individuals with high levels of schizotypal personality traits. However, aspects of gaze behavior during facial emotion recognition among the latter are still unclear. This study therefore investigated the relations between eye movements and facial emotion recognition among nonclinical individuals with schizotypal personality traits. A total of 83 nonclinical participants completed the Schizotypal Personality Questionnaire (SPQ) and performed a facial emotion recognition task. Their gaze behavior was recorded by an eye-tracker. Self-report questionnaires measuring anxiety, depressive symptoms, and alexithymia were administered. At the behavioral level, correlation analyses showed that higher SPQ scores were associated with lower surprise recognition accuracy scores. Eye-tracking data revealed that higher SPQ scores were associated with shorter dwell time on relevant facial features during sadness recognition. Regression analyses revealed that the total SPQ score was the only significant predictor of eye movements during sadness recognition, and depressive symptoms were the only significant predictor of surprise recognition accuracy. Furthermore, dwell time predicted response times for sadness recognition, in that shorter dwell time on relevant facial features was associated with longer response times. Schizotypal traits may thus be associated with decreased attentional engagement with relevant facial features during sadness recognition and with slower responses. Slower processing and altered gaze patterns during the processing of sad faces could lead to difficulties in everyday social situations in which information must be rapidly processed to enable the successful interpretation of other people's behavior.
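The dwell-time measure central to this study can be illustrated with a short sketch: total time the gaze spends inside a region of interest (ROI). The ROI coordinates, sampling rate, and data layout below are illustrative assumptions, not the study's actual setup.

```python
# Sketch: computing dwell time on a region of interest (ROI) from gaze samples.
# ROI coordinates, sample rate, and sample layout are illustrative assumptions.

def dwell_time_ms(gaze_samples, roi, sample_interval_ms):
    """Total time the gaze fell inside `roi` = (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = roi
    inside = sum(
        1 for x, y in gaze_samples
        if x_min <= x <= x_max and y_min <= y <= y_max
    )
    return inside * sample_interval_ms

# Example: a 120 Hz tracker (~8.33 ms per sample) and a hypothetical "eyes" ROI.
samples = [(410, 300), (415, 305), (900, 700), (412, 298)]
eyes_roi = (380, 260, 460, 340)
print(dwell_time_ms(samples, eyes_roi, 1000 / 120))  # 3 samples fall inside
```

Per-emotion dwell times computed this way could then be correlated with SPQ scores, as in the analyses described above.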
Affiliation(s)
- Apolline Durtette
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France
- Franca Schmid
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France
- Sarah Barrière
- Pôle Universitaire de Psychiatrie, EPSM et CHU de Reims, 8 Rue Roger Aubry, 51100 Reims, France
- Alexandre Obert
- Institut national universitaire Champollion, Université de Toulouse, Laboratoire Sciences de la cognition, Technologie, Ergonomie, Place de Verdun, 81000 Albi, France
- Julie Lang
- Pôle Universitaire de Psychiatrie, EPSM et CHU de Reims, 8 Rue Roger Aubry, 51100 Reims, France
- Delphine Raucher-Chéné
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France; Douglas Mental Health University Institute, McGill University, 6875 Boulevard LaSalle, Montreal, Canada
- Fabien Gierski
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France; Pôle Universitaire de Psychiatrie, EPSM et CHU de Reims, 8 Rue Roger Aubry, 51100 Reims, France
- Arthur Kaladjian
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France; Pôle Universitaire de Psychiatrie, EPSM et CHU de Reims, 8 Rue Roger Aubry, 51100 Reims, France; Université de Reims Champagne Ardenne, Faculté de Médecine, 51 rue Cognacq-Jay, 51100 Reims, France
- Audrey Henry
- Université de Reims Champagne Ardenne, Laboratoire Cognition, Santé et Société, B.P. 30, 57 Rue Pierre Taittinger, 51571 Reims Cedex, France; Pôle Universitaire de Psychiatrie, EPSM et CHU de Reims, 8 Rue Roger Aubry, 51100 Reims, France
3. Xie X, Cai J, Fang H, Wang B, He H, Zhou Y, Xiao Y, Yamanaka T, Li X. Affective Impressions Recognition under Different Colored Lights Based on Physiological Signals and Subjective Evaluation Method. Sensors (Basel) 2023; 23:5322. [PMID: 37300049] [DOI: 10.3390/s23115322]
Abstract
The design of the light environment plays a critical role in the interaction between people and visual objects in a space, and adjusting it offers a practical way to regulate observers' emotional experience. Although lighting plays a vital role in spatial design, the effects of colored lights on individuals' emotional experiences remain unclear. This study combined physiological signal measurements (galvanic skin response (GSR) and electrocardiography (ECG)) with subjective assessments to detect changes in the mood states of observers under four lighting conditions (green, blue, red, and yellow). In addition, two sets of images, one abstract and one realistic, were designed to examine the relationship between light and visual objects and their influence on individuals' impressions. The results showed that different light colors significantly affected mood, with red light producing the strongest emotional arousal, followed by blue and green. Moreover, GSR and ECG measurements correlated significantly with the subjective impression evaluations of interest, comprehension, imagination, and feeling. This study thus demonstrates the feasibility of combining GSR and ECG signal measurements with subjective evaluations as an experimental method for studying light, mood, and impressions, providing empirical evidence for regulating individuals' emotional experiences.
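The correlation analysis between physiological features and subjective ratings reported above can be sketched with a plain Pearson coefficient; the toy per-participant values below are illustrative, not the study's data.

```python
# Sketch: Pearson correlation between a physiological feature (e.g., mean GSR)
# and a subjective impression rating. Toy data, not the study's measurements.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gsr_mean = [0.2, 0.5, 0.9, 1.4, 1.8]   # hypothetical per-participant GSR feature
interest = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical "interest" ratings
print(round(pearson_r(gsr_mean, interest), 3))
```

In practice such coefficients would be computed per lighting condition and per impression dimension (interest, comprehension, imagination, feeling) and tested for significance.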
Affiliation(s)
- Xing Xie
- School of Art and Design, Guangdong University of Technology, Guangzhou 510000, China
- Jun Cai
- School of Art and Design, Guangdong University of Technology, Guangzhou 510000, China
- Academy of Arts and Design, Tsinghua University, Beijing 100086, China
- Hai Fang
- School of Art and Design, Guangdong University of Technology, Guangzhou 510000, China
- Beibei Wang
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510006, China
- Huan He
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510006, China
- Yuanzhi Zhou
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510006, China
- Yang Xiao
- School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou 510006, China
- Xinming Li
- Guangdong Provincial Key Laboratory of Nanophotonic Functional Materials and Devices, School of Information and Optoelectronic Science and Engineering, South China Normal University, Guangzhou 510006, China
4. Braund TA, O'Dea B, Bal D, Maston K, Larsen M, Werner-Seidler A, Tillman G, Christensen H. Associations Between Smartphone Keystroke Metadata and Mental Health Symptoms in Adolescents: Findings From the Future Proofing Study. JMIR Ment Health 2023; 10:e44986. [PMID: 37184904] [DOI: 10.2196/44986]
Abstract
BACKGROUND Mental disorders are prevalent during adolescence. Among the digital phenotypes currently being developed to monitor mental health symptoms, typing behavior is one promising candidate. However, few studies have directly assessed associations between typing behavior and mental health symptom severity, or whether these relationships differ between genders. OBJECTIVE In a cross-sectional analysis of a large cohort, we tested whether various features of typing behavior derived from keystroke metadata were associated with mental health symptoms and whether these relationships differed between genders. METHODS A total of 934 adolescents from the Future Proofing study undertook 2 typing tasks on their smartphones through the Future Proofing app. Common keystroke timing and frequency features were extracted across tasks. Mental health symptoms were assessed using the Patient Health Questionnaire-Adolescent version, the Children's Anxiety Scale-Short Form, the Distress Questionnaire 5, and the Insomnia Severity Index. Bivariate correlations were used to test whether keystroke features were associated with mental health symptoms. The false discovery rates of P values were adjusted to q values. Machine learning models were trained and tested using independent samples (ie, 80% train, 20% test) to identify whether keystroke features could be combined to predict mental health symptoms. RESULTS Keystroke timing features showed a weak negative association with mental health symptoms across participants. When split by gender, females showed weak negative relationships between keystroke timing features and mental health symptoms, and weak positive relationships between keystroke frequency features and mental health symptoms. The opposite relationships were found for males (except for dwell time). Machine learning models using keystroke features alone did not predict mental health symptoms.
CONCLUSIONS Increased mental health symptoms are weakly associated with faster typing, with important gender differences. Keystroke metadata should be collected longitudinally and combined with other digital phenotypes to enhance their clinical relevance. TRIAL REGISTRATION Australian and New Zealand Clinical Trial Registry, ACTRN12619000855123; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377664&isReview=true
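The common keystroke timing features mentioned above (dwell time: key press to release; flight time: release to the next press) can be sketched as follows. The event format `(press_ms, release_ms)` is an illustrative assumption about how the metadata might be laid out.

```python
# Sketch: extracting common keystroke timing and frequency features from
# key-event metadata. Event format (press_ms, release_ms) is an assumption.

def keystroke_features(events):
    """events: ordered list of (press_ms, release_ms) per keystroke."""
    # Dwell time: how long each key was held down.
    dwell = [rel - press for press, rel in events]
    # Flight time: gap between releasing one key and pressing the next.
    flight = [events[i + 1][0] - events[i][1] for i in range(len(events) - 1)]
    return {
        "mean_dwell_ms": sum(dwell) / len(dwell),
        "mean_flight_ms": sum(flight) / len(flight) if flight else 0.0,
        "keys_per_second": 1000 * len(events) / (events[-1][1] - events[0][0]),
    }

events = [(0, 90), (150, 230), (310, 400), (470, 540)]
print(keystroke_features(events))
```

Feature vectors of this kind, one per participant, would then feed the correlation and machine-learning analyses the abstract describes.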
Affiliation(s)
- Taylor A Braund
- Faculty of Medicine and Health, University of New South Wales, Kensington, Australia
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Bridianne O'Dea
- Faculty of Medicine and Health, University of New South Wales, Kensington, Australia
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Debopriyo Bal
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Kate Maston
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Mark Larsen
- Faculty of Medicine and Health, University of New South Wales, Kensington, Australia
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Aliza Werner-Seidler
- Faculty of Medicine and Health, University of New South Wales, Kensington, Australia
- Black Dog Institute, University of New South Wales, Randwick, Australia
- Gabriel Tillman
- Institute of Health and Wellbeing, Federation University, Ballarat, Australia
- Helen Christensen
- Faculty of Medicine and Health, University of New South Wales, Kensington, Australia
- Black Dog Institute, University of New South Wales, Randwick, Australia
5. Máté Á, Rakovics Z, Rudas S, Wallis L, Ságvári B, Huszár Á, Koltai J. Willingness of Participation in an Application-Based Digital Data Collection among Different Social Groups and Smartphone User Clusters. Sensors (Basel) 2023; 23:4571. [PMID: 37177775] [PMCID: PMC10181725] [DOI: 10.3390/s23094571]
Abstract
The main question of this paper is what factors influence willingness to participate in a smartphone-application-based data collection where participants both fill out a questionnaire and let the app collect data on their smartphone usage. Passive digital data collection is becoming more common, but it is still a new form of data collection. Due to the novelty factor, it is important to investigate how willingness to participate in such studies is influenced by both socio-economic variables and smartphone usage behaviour. We estimate multilevel models based on a survey experiment with vignettes for different characteristics of data collection (e.g., different incentives, duration of the study). Our results show that of the socio-demographic variables, age has the largest influence, with younger age groups having a higher willingness to participate than older ones. Smartphone use also has an impact on participation. Advanced users are more likely to participate, while users who only use the basic functions of their device are less likely to participate than those who use it mainly for social media. Finally, the explorative analysis with interaction terms between levels has shown that the circumstances of data collection matter differently for different social groups. These findings provide important clues on how to fine-tune circumstances to improve participation rates in this novel passive digital data collection.
Affiliation(s)
- Ákos Máté
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Institute for Political Science, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Zsófia Rakovics
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Department of Social Research Methodology, Institute of Empirical Studies, Faculty of Social Sciences, Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
- Szilvia Rudas
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Levente Wallis
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Bence Ságvári
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Computational Social Science-Research Center for Educational and Network Studies (CSS-RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Department of Sociology, Institute of Social and Political Sciences, Corvinus University of Budapest, Fővám tér 8, 1093 Budapest, Hungary
- Ákos Huszár
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Institute for Sociology, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Júlia Koltai
- MTA-TK Lendület "Momentum" Digital Social Science Research Group for Social Stratification, Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Department of Social Research Methodology, Institute of Empirical Studies, Faculty of Social Sciences, Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
6. Toyoshima I, Okada Y, Ishimaru M, Uchiyama R, Tada M. Multi-Input Speech Emotion Recognition Model Using Mel Spectrogram and GeMAPS. Sensors (Basel) 2023; 23:1743. [PMID: 36772782] [PMCID: PMC9920472] [DOI: 10.3390/s23031743]
Abstract
The existing research on emotion recognition commonly uses mel spectrogram (MelSpec) and Geneva minimalistic acoustic parameter set (GeMAPS) as acoustic parameters to learn the audio features. MelSpec can represent the time-series variations of each frequency but cannot manage multiple types of audio features. On the other hand, GeMAPS can handle multiple audio features but fails to provide information on their time-series variations. Thus, this study proposes a speech emotion recognition model based on a multi-input deep neural network that simultaneously learns these two audio features. The proposed model comprises three parts, specifically, for learning MelSpec in image format, learning GeMAPS in vector format, and integrating them to predict the emotion. Additionally, a focal loss function is introduced to address the imbalanced data problem among the emotion classes. The results of the recognition experiments demonstrate weighted and unweighted accuracies of 0.6657 and 0.6149, respectively, which are higher than or comparable to those of the existing state-of-the-art methods. Overall, the proposed model significantly improves the recognition accuracy of the emotion "happiness", which has been difficult to identify in previous studies owing to limited data. Therefore, the proposed model can effectively recognize emotions from speech and can be applied for practical purposes with future development.
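The focal loss introduced above to handle class imbalance can be sketched for a single multi-class prediction. The gamma value and toy probabilities are illustrative; the paper's network architecture and hyperparameters are not reproduced here.

```python
# Sketch: focal loss for one multi-class prediction, the imbalance-handling
# idea named above. gamma and the toy probabilities are illustrative.
from math import log

def focal_loss(probs, true_idx, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma * log(p_t), with p_t the true-class probability."""
    p_t = probs[true_idx]
    return -((1 - p_t) ** gamma) * log(p_t)

# Confident correct predictions are down-weighted by (1 - p_t)^gamma, which
# focuses training on hard examples, often those from minority emotion classes
# such as "happiness".
print(focal_loss([0.7, 0.2, 0.1], 0))  # easy example: small loss
print(focal_loss([0.1, 0.2, 0.7], 0))  # hard example: much larger loss
```

With gamma = 0 the expression reduces to ordinary cross-entropy, so gamma directly controls how aggressively easy examples are discounted.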
Affiliation(s)
- Itsuki Toyoshima
- Division of Information and Electronic Engineering, Muroran Institute of Technology, 27-1, Mizumoto-cho, Muroran 050-8585, Hokkaido, Japan
- Yoshifumi Okada
- College of Information and Systems, Muroran Institute of Technology, 27-1, Mizumoto-cho, Muroran 050-8585, Hokkaido, Japan
- Momoko Ishimaru
- Division of Information and Electronic Engineering, Muroran Institute of Technology, 27-1, Mizumoto-cho, Muroran 050-8585, Hokkaido, Japan
- Ryunosuke Uchiyama
- Division of Information and Electronic Engineering, Muroran Institute of Technology, 27-1, Mizumoto-cho, Muroran 050-8585, Hokkaido, Japan
- Mayu Tada
- Division of Information and Electronic Engineering, Muroran Institute of Technology, 27-1, Mizumoto-cho, Muroran 050-8585, Hokkaido, Japan
7. Emotion Detection Based on Pupil Variation. Healthcare (Basel) 2023; 11:322. [PMID: 36766898] [PMCID: PMC9914860] [DOI: 10.3390/healthcare11030322]
Abstract
Emotion detection is a fundamental component in the field of affective computing. Proper recognition of emotions can be useful in improving the interaction between humans and machines, for instance, with regard to designing effective user interfaces. This study aims to understand the relationship between emotion and pupil dilation. The Tobii Pro X3-120 eye tracker was used to collect pupillary responses from 30 participants exposed to content designed to evoke specific emotions. Six different video scenarios were selected and presented to participants, whose pupillary responses were measured while watching the material. In total, 16 data features (8 features per eye) were extracted from the pupillary response distribution during content exposure. Using logistic regression, a maximum classification accuracy of 76% was obtained from pupillary responses when predicting emotions classified as fear, anger, or surprise. Further research is required to precisely calculate pupil size variations in relation to emotionally evocative input in affective computing applications.
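Deriving a per-eye feature vector from a pupil-diameter time series, as the study above does (8 features per eye), can be sketched as follows. The specific feature set here is an illustrative assumption, not the paper's exact definition.

```python
# Sketch: summary features from one eye's pupil-diameter time series.
# This particular 8-feature set is an illustrative assumption.
from statistics import mean, pstdev

def pupil_features(diameters_mm):
    baseline = diameters_mm[0]
    changes = [abs(b - a) for a, b in zip(diameters_mm, diameters_mm[1:])]
    return {
        "mean": mean(diameters_mm),
        "std": pstdev(diameters_mm),
        "min": min(diameters_mm),
        "max": max(diameters_mm),
        "range": max(diameters_mm) - min(diameters_mm),
        "peak_dilation": max(diameters_mm) - baseline,
        "final_minus_baseline": diameters_mm[-1] - baseline,
        "mean_abs_change": mean(changes),
    }

left_eye = [3.1, 3.2, 3.6, 3.9, 3.8, 3.5]  # diameters in mm over time
print(pupil_features(left_eye))  # one 8-feature vector per eye
```

Concatenating the left- and right-eye vectors yields the 16-dimensional input that a classifier such as logistic regression would then consume.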
8. Liu P, Wang Y, Hu J, Qing L, Zhao K. Development and validation of a highly dynamic and reusable picture-based scale: A new affective measurement tool. Front Psychol 2023; 13:1078691. [PMID: 36733871] [PMCID: PMC9888759] [DOI: 10.3389/fpsyg.2022.1078691]
Abstract
Emotion measurement is crucial to conducting emotion research. Numerous studies have extensively employed textual scales for psychological and organizational behavior research. However, because emotions are transient states of relatively short duration, textual scales have some insurmountable limitations, including low reliability for a single measurement and susceptibility to learning effects with repeated use. In the present article, we introduce the Highly Dynamic and Reusable Picture-based Scale (HDRPS), which was randomly generated from 3,386 realistic, high-quality photographs divided into five categories (people, animals, plants, objects, and scenes). Affective ratings of the photographs were gathered from 14 experts and 209 professional judges. The HDRPS was validated against the Self-Assessment Manikin and the PANAS by 751 participants. With an accuracy of 89.73%, this new tool allows researchers to measure individual emotions continuously. The HDRPS is freely accessible for non-commercial academic research by request at http://syy.imagesoft.cc:8989/Pictures.7z. As some of the images were collected from the open web, their sources are difficult to trace; please contact the authors regarding any copyright issues.
Affiliation(s)
- Ping Liu
- Business School, Sichuan University, Chengdu, China
- Ya’nan Wang
- Business School, Sichuan University, Chengdu, China
- Lin’bo Qing
- College of Electronics and Information Engineering, Sichuan University, Chengdu, China
- Ke Zhao
- College of Electronics and Information Engineering, Sichuan University, Chengdu, China
9. Kowalczuk Z, Czubenko M, Żmuda-Trzebiatowska W. Categorization of emotions in dog behavior based on the deep neural network. Comput Intell 2022. [DOI: 10.1111/coin.12559]
Affiliation(s)
- Zdzisław Kowalczuk
- Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Pomorskie, Poland
- Michał Czubenko
- Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Pomorskie, Poland
10. Thurzo A, Strunga M, Havlínová R, Reháková K, Urban R, Surovková J, Kurilová V. Smartphone-Based Facial Scanning as a Viable Tool for Facially Driven Orthodontics? Sensors (Basel) 2022; 22:7752. [PMID: 36298103] [PMCID: PMC9607180] [DOI: 10.3390/s22207752]
Abstract
The current paradigm shift in orthodontic treatment planning is based on facially driven diagnostics. This requires an affordable, convenient, and non-invasive solution for face scanning, which makes utilization of smartphones' TrueDepth sensors very tempting. TrueDepth refers to the front-facing cameras with a dot projector in Apple devices that provide real-time depth data in addition to visual information. Several applications tout themselves as accurate solutions for 3D scanning of the face in dentistry, but their clinical accuracy has been uncertain. This study evaluates the accuracy of the Bellus3D Dental Pro app, which uses Apple's TrueDepth sensor. The app reconstructs a virtual, high-resolution version of the face, which is available for download as a 3D object. In this paper, sixty TrueDepth scans of the face were compared to sixty corresponding facial surfaces segmented from CBCT. Difference maps were created for each pair and evaluated in specific facial regions. The results confirmed statistically significant differences in some facial regions, with amplitudes greater than 3 mm, suggesting that current technology has limited applicability for clinical use. Clinical utilization of facial scanning for orthodontic evaluation, which does not require accuracy below 3 mm in the lip region, can nevertheless be considered.
Affiliation(s)
- Andrej Thurzo
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Martin Strunga
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Romana Havlínová
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Katarína Reháková
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Renata Urban
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Jana Surovková
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Veronika Kurilová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 81219 Bratislava, Slovakia
11. Li R, Yuizono T, Li X. Affective computing of multi-type urban public spaces to analyze emotional quality using ensemble learning-based classification of multi-sensor data. PLoS One 2022; 17:e0269176. [PMID: 35657805] [PMCID: PMC9165821] [DOI: 10.1371/journal.pone.0269176]
Abstract
The quality of urban public spaces affects the emotional response of users; therefore, the emotional data of users can be used as indices to evaluate the quality of a space. Emotional response can be evaluated to effectively measure public space quality through affective computing and obtain evidence-based support for urban space renewal. We proposed a feasible evaluation method for multi-type urban public spaces based on multiple physiological signals and ensemble learning. We built binary, ternary, and quinary classification models based on participants’ physiological signals and self-reported emotional responses through experiments in eight public spaces of five types. Furthermore, we verified the effectiveness of the model by inputting data collected from two other public spaces. Three observations were made based on the results. First, the highest accuracies of the binary and ternary classification models were 92.59% and 91.07%, respectively. After external validation, the highest accuracies were 80.90% and 65.30%, respectively, which satisfied the preliminary requirements for evaluating the quality of actual urban spaces. However, the quinary classification model could not satisfy the preliminary requirements. Second, the average accuracy of ensemble learning was 7.59% higher than that of single classifiers. Third, reducing the number of physiological signal features and applying the synthetic minority oversampling technique to solve unbalanced data improved the evaluation ability.
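The ensemble-learning gain reported above rests on combining several base classifiers; a hard-voting scheme is one common form. The stand-in "classifiers" below are toy functions over a feature vector; the study's actual models and physiological features are not reproduced.

```python
# Sketch: hard-voting ensemble over several base classifiers. The base
# classifiers here are illustrative stand-ins, not the study's models.
from collections import Counter

def majority_vote(classifiers, x):
    """Return the label predicted by the most base classifiers for input x."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy binary classifiers over a (hypothetical) physiological feature vector.
def clf_a(x): return "positive" if x[0] > 0.5 else "negative"
def clf_b(x): return "positive" if x[1] > 0.5 else "negative"
def clf_c(x): return "positive" if sum(x) > 1.0 else "negative"

print(majority_vote([clf_a, clf_b, clf_c], [0.9, 0.2, 0.4]))  # → positive
```

Averaging out the errors of individual models in this way is what typically produces the accuracy improvement over single classifiers that the abstract reports.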
Affiliation(s)
- Ruixuan Li
- School of Art and Design, Dalian Polytechnic University, Dalian City, Liaoning Province, China
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
- Takaya Yuizono
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
- Xianghui Li
- School of Art and Design, Dalian Polytechnic University, Dalian City, Liaoning Province, China
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Ishikawa, Japan
Collapse
|
12
|
Datasets for Automated Affect and Emotion Recognition from Cardiovascular Signals Using Artificial Intelligence- A Systematic Review. SENSORS 2022; 22:s22072538. [PMID: 35408149 PMCID: PMC9002643 DOI: 10.3390/s22072538] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 03/21/2022] [Accepted: 03/22/2022] [Indexed: 02/04/2023]
Abstract
Simple Summary: We reviewed the literature on publicly available datasets used to automatically recognise emotion and affect with artificial intelligence (AI) techniques, with particular interest in databases containing cardiovascular (CV) data, and we assessed the quality of the included papers. We searched the sources until 31 August 2020. Each step of identification was carried out independently by two reviewers to maintain the credibility of our review, and disagreements were resolved by discussion. Each action was first planned and described in a protocol posted on the Open Science Framework (OSF) platform. We selected 18 works focused on providing datasets of CV signals for automated affect and emotion recognition. In total, data for 812 participants aged 17 to 47 were analysed. The most frequently recorded signal was electrocardiography, and the authors most often used video stimulation. Noticeably, much necessary information was missing from many of the works, resulting in mainly low quality among the included papers. Researchers in this field should focus more on how they carry out experiments.
Abstract: Our review aimed to assess the current state and quality of publicly available datasets used for automated affect and emotion recognition (AAER) with artificial intelligence (AI), emphasising cardiovascular (CV) signals. The quality of such datasets is essential for creating replicable systems on which future work can build. We investigated nine sources up to 31 August 2020, using a developed search strategy, including studies considering the use of AI in AAER based on CV signals. Two independent reviewers performed the screening of identified records, full-text assessment, data extraction, and credibility assessment; all discrepancies were resolved by discussion. We synthesised the results descriptively and assessed their credibility. The protocol was registered on the Open Science Framework (OSF) platform. Eighteen of the 195 full-text records (from 4649 identified) were selected, focusing on datasets containing CV signals for AAER. The included papers analysed and shared data of 812 participants aged 17 to 47. Electrocardiography was the most explored signal (83.33% of datasets), and video stimulation was used most frequently (52.38% of experiments). Despite these results, much information went unreported, and the quality of the analysed papers was mainly low. Researchers in the field should concentrate more on methodology.
13
|
Šumak B, Brdnik S, Pušnik M. Sensors and Artificial Intelligence Methods and Algorithms for Human-Computer Intelligent Interaction: A Systematic Mapping Study. SENSORS 2021; 22:s22010020. [PMID: 35009562 PMCID: PMC8747169 DOI: 10.3390/s22010020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/09/2021] [Accepted: 12/18/2021] [Indexed: 11/16/2022]
Abstract
To equip computers with human communication skills and to enable natural interaction between computers and humans, intelligent solutions are required based on artificial intelligence (AI) methods, algorithms, and sensor technology. This study aimed to identify and analyze state-of-the-art AI methods, algorithms, and sensor technology in existing human-computer intelligent interaction (HCII) research, in order to explore trends in HCII research, categorize existing evidence, and identify potential directions for future research. We conducted a systematic mapping study of the HCII body of research. Four hundred fifty-four studies published in various journals and conferences between 2010 and 2021 were identified and analyzed. Studies in the HCII and intelligent user interface (IUI) fields have primarily focused on intelligent recognition of emotion, gestures, and facial expressions using sensor technology such as cameras, EEG, Kinect, wearable sensors, eye trackers, and gyroscopes. Researchers most often apply deep-learning and instance-based AI methods and algorithms. The support vector machine (SVM) is the most widely used algorithm for various kinds of recognition, primarily emotion, facial expression, and gesture recognition. The convolutional neural network (CNN) is the most often used deep-learning algorithm for emotion, facial, and gesture recognition solutions.
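As a minimal illustration of the SVM-based recognition this mapping study found most common, the sketch below trains an RBF-kernel SVM on synthetic "sensor feature" vectors; the three emotion classes, feature layout, and class means are invented for illustration, not taken from any surveyed study.

```python
# Minimal sketch of SVM-based recognition from sensor-derived features;
# the three "emotion classes" and feature vectors are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Pretend feature vectors (e.g. EEG band powers or facial landmarks):
# three classes, 60 samples each, with different feature means.
X = np.vstack([rng.normal(mu, 1.0, size=(60, 10)) for mu in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 60)

# RBF-kernel SVM with feature standardisation, scored by 5-fold CV.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Standardising features before the SVM matters in practice because heterogeneous sensor channels (e.g. EEG band powers vs. landmark coordinates) live on very different scales.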
14
|
Li J, Ma W, Zhang M, Wang P, Liu Y, Ma S. Know Yourself: Physical and Psychological Self-Awareness With Lifelog. Front Digit Health 2021; 3:676824. [PMID: 34713147 PMCID: PMC8521907 DOI: 10.3389/fdgth.2021.676824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Accepted: 07/09/2021] [Indexed: 11/13/2022] Open
Abstract
Self-awareness is an essential concept in physiology and psychology. Accurate overall self-awareness benefits the development and well-being of an individual. Previous research on self-awareness has mainly collected and analyzed data in laboratory environments through questionnaires, user studies, or field research. However, these methods are usually not real-time and are unavailable for daily-life applications. Therefore, we propose a new direction: utilizing lifelogs for self-awareness. Lifelog records of daily activities are used for analysis, prediction, and intervention on individual physical and psychological status, and can be processed automatically in real time. With the help of lifelogs, ordinary people are able to understand their condition more precisely, get effective personal advice about health, and even discover physical and mental abnormalities at an early stage. As a first step in using lifelogs for self-awareness, we draw on traditional machine learning problems and summarize a schema covering data collection, feature extraction, label tagging, and model learning in the lifelog scenario. The schema provides a flexible and privacy-protected method for lifelog applications. Following the schema, four topics were studied: sleep quality prediction, personality detection, mood detection and prediction, and depression detection. Experiments on real datasets show encouraging results on these topics, revealing the significant relation between daily activity records and physical and psychological self-awareness. Finally, we discuss the experimental results and limitations in detail and propose an application, Lifelog Recorder, for multi-dimensional self-awareness lifelog data collection.
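The four-stage schema the abstract names (data collection, feature extraction, label tagging, model learning) can be sketched as a generic pipeline. Everything concrete below, the record fields, the labelling rule, and the model, is an invented stand-in for illustration, not the paper's implementation.

```python
# Generic sketch of the four-stage lifelog schema: (1) data collection,
# (2) feature extraction, (3) label tagging, (4) model learning.
# Record fields, the labelling rule, and the model are invented here.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# 1) Data collection: one synthetic lifelog record per day.
days = [{"steps": int(rng.integers(1000, 15000)),
         "screen_min": int(rng.integers(60, 600)),
         "bed_hour": float(rng.uniform(21, 26))} for _ in range(300)]

# 2) Feature extraction: turn each record into a numeric vector.
X = np.array([[d["steps"] / 1e4, d["screen_min"] / 600, d["bed_hour"] - 21]
              for d in days])

# 3) Label tagging: a deterministic rule standing in for self-reported
# sleep quality (enough steps and an early bedtime -> good sleep).
y = ((X[:, 0] > 0.5) & (X[:, 2] < 3.0)).astype(int)

# 4) Model learning: predict the label from the features.
model = DecisionTreeClassifier(random_state=0).fit(X[:200], y[:200])
print(f"held-out accuracy: {model.score(X[200:], y[200:]):.2f}")
```

In a real lifelog setting stage 3 would come from user self-reports rather than a rule, which is exactly why the paper emphasises label tagging as its own step.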
Affiliation(s)
- Jiayu Li, Weizhi Ma, Min Zhang, Pengyu Wang, Yiqun Liu, Shaoping Ma
- Department of Computer Science and Technology, Institute for Artificial Intelligence, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
15
|
Ságvári B, Gulyás A, Koltai J. Attitudes towards Participation in a Passive Data Collection Experiment. SENSORS 2021; 21:s21186085. [PMID: 34577291 PMCID: PMC8473380 DOI: 10.3390/s21186085] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 09/03/2021] [Accepted: 09/04/2021] [Indexed: 11/23/2022]
Abstract
In this paper, we present the results of an exploratory study conducted in Hungary using a factorial-design-based online survey to explore the willingness to participate in a future research project based on active and passive data collection via smartphones. Recently, the improvement of smart devices has enabled the collection of behavioural data on a previously unimaginable scale. However, the willingness to share these data is a key issue for the social sciences and often proves to be the biggest obstacle to conducting research. We use vignettes to test different (hypothetical) study settings that involve sensor data collection but differ in the organizer of the research, the purpose of the study, the type of data collected, the duration of data sharing, the incentives offered, and the ability to suspend and review the collection of data. Besides the demographic profile of respondents, we also include behavioural and attitudinal variables in the models. Our results show that the content and context of the data collection significantly change people's willingness to participate; however, basic demographic characteristics (apart from age) and general level of trust seem to have no significant effect. This study is a first step in a larger project that involves the development of a complex smartphone-based research tool for hybrid (active and passive) data collection. The results presented here help improve our experimental design to encourage participation by minimizing data-sharing concerns and maximizing user participation and motivation.
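A factorial vignette design of the kind described, crossing the organizer, purpose, data type, duration, incentive, and control factors, amounts to a Cartesian product of factor levels. The factor names and levels below are invented placeholders, not the study's actual wording.

```python
# Sketch of a factorial vignette design: every study setting is one
# combination of the experimental factors (levels below are invented).
from itertools import product

factors = {
    "organizer": ["university", "company", "government"],
    "purpose": ["health research", "transport planning"],
    "data": ["GPS location", "app usage"],
    "duration": ["1 month", "6 months"],
    "incentive": ["none", "voucher"],
    "control": ["can suspend/review", "no control"],
}

# Cartesian product of all factor levels -> 3 * 2**5 = 96 vignettes.
vignettes = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(vignettes))  # prints 96
```

In practice each respondent rates only a random subset of such vignettes, which is what makes factorial surveys feasible despite the combinatorial number of conditions.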
Affiliation(s)
- Bence Ságvári
- Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Institute of Communication and Sociology, Corvinus University, Fővám tér 8, 1093 Budapest, Hungary
- Attila Gulyás
- Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Júlia Koltai
- Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
- Department of Network and Data Science, Central European University, Quellenstraße 51, 1100 Vienna, Austria
- Faculty of Social Sciences, Eötvös Loránd University of Sciences, Pázmány Péter Sétány 1/A, 1117 Budapest, Hungary
16
|
Petrescu L, Petrescu C, Oprea A, Mitruț O, Moise G, Moldoveanu A, Moldoveanu F. Machine Learning Methods for Fear Classification Based on Physiological Features. SENSORS 2021; 21:s21134519. [PMID: 34282759 PMCID: PMC8271969 DOI: 10.3390/s21134519] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 06/29/2021] [Accepted: 06/29/2021] [Indexed: 12/22/2022]
Abstract
This paper focuses on binary classification of the emotion of fear, based on the physiological data and subjective responses stored in the DEAP dataset. We performed a mapping between discrete and dimensional emotional information considering the participants' ratings and extracted a substantial set of 40 types of features from the physiological data, which served as input to various machine learning algorithms (Decision Trees, k-Nearest Neighbors, Support Vector Machines, and artificial neural networks), accompanied by dimensionality reduction, feature selection, and tuning of the most relevant hyperparameters to boost classification accuracy. Our methodology addressed several practical issues: resolving the imbalanced dataset through data augmentation, reducing overfitting, computing various metrics to obtain the most reliable classification scores, and applying the Local Interpretable Model-Agnostic Explanations (LIME) method to explain predictions in a human-understandable manner. The results show that fear can be predicted very well (accuracies ranging from 91.7% using Gradient Boosting Trees to 93.5% using dimensionality reduction and a Support Vector Machine) by extracting the most relevant features from the physiological data and by searching for the parameters that maximize the machine learning algorithms' classification scores.
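The best-performing combination the abstract reports, dimensionality reduction feeding a Support Vector Machine with tuned hyperparameters, can be sketched roughly as below. The synthetic data stands in for the 40 DEAP-derived physiological feature types; PCA, the grid values, and the class means are assumptions for illustration, not the authors' exact configuration.

```python
# Rough sketch: dimensionality reduction (PCA) feeding an SVM, with a
# small hyperparameter grid search. The data are synthetic stand-ins
# for the 40 physiological feature types extracted from DEAP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)

X = np.vstack([rng.normal(0.8, 1.0, size=(80, 40)),    # "fear" trials
               rng.normal(-0.8, 1.0, size=(80, 40))])  # "no fear" trials
y = np.array([1] * 80 + [0] * 80)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),
                 ("svm", SVC())])

# Tune the SVM hyperparameters that most affect the decision boundary.
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10],
                           "svm__gamma": ["scale", 0.01]}, cv=3)
grid.fit(X_tr, y_tr)
print(f"test accuracy: {grid.score(X_te, y_te):.2f}")
```

Putting the scaler and PCA inside the cross-validated pipeline is what keeps the grid search honest: both are refit on each training fold, so no test information leaks into the tuning.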
Affiliation(s)
- Livia Petrescu
- Faculty of Biology, University of Bucharest, 050095 Bucharest, Romania
- Cătălin Petrescu, Ana Oprea, Oana Mitruț, Alin Moldoveanu, Florica Moldoveanu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Gabriela Moise
- Faculty of Letters and Sciences, Petroleum-Gas University of Ploiesti, 100680 Ploiesti, Romania
17
|
Wosiak A, Dura A. Hybrid Method of Automated EEG Signals' Selection Using Reversed Correlation Algorithm for Improved Classification of Emotions. SENSORS 2020; 20:s20247083. [PMID: 33321895 PMCID: PMC7764031 DOI: 10.3390/s20247083] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 12/07/2020] [Accepted: 12/08/2020] [Indexed: 11/16/2022]
Abstract
Given the growing interest in electroencephalography for enhancing human-computer interaction (HCI) and developing brain-computer interfaces (BCIs) for control and monitoring applications, efficient information retrieval from EEG sensors is of great importance. It is difficult due to noise from internal and external artifacts and physiological interference. EEG-based emotion recognition can be enhanced by selecting the features that should be taken into account in further analysis; therefore, automatic feature selection for EEG signals is an important research area. We propose a multistep hybrid approach incorporating the Reversed Correlation Algorithm for automated selection of frequency band-electrode combinations. Our method is simple to use and significantly reduces the number of sensors to only three channels. The proposed method was verified in experiments on the DEAP dataset, and the results were evaluated for the classification accuracy of two emotion dimensions: valence and arousal. In comparison with other studies, our method achieved classification results that were 4.20-8.44% higher. Moreover, it can be regarded as a universal EEG signal classification technique, as it belongs to unsupervised methods.
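The core idea, automatically reducing many band-electrode channels to a handful of informative ones, can be illustrated with a plain correlation ranking. This is a simplified stand-in, not the authors' Reversed Correlation Algorithm, and the synthetic data and "informative" channel indices are invented.

```python
# Simplified stand-in for correlation-driven channel selection (not the
# authors' Reversed Correlation Algorithm): rank synthetic EEG channel
# features by |Pearson correlation| with the label and keep the top three.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_channels = 200, 32

y = rng.integers(0, 2, size=n_trials)        # binary valence label
X = rng.normal(size=(n_trials, n_channels))  # band power per channel

# Make three channels genuinely informative about the label.
for ch in (4, 11, 27):
    X[:, ch] += 2.0 * y

# Correlation of each channel with the label, then pick the top three.
corr = np.array([np.corrcoef(X[:, c], y)[0, 1] for c in range(n_channels)])
top3 = sorted(np.argsort(np.abs(corr))[-3:].tolist())
print("selected channels:", top3)
```

Cutting 32 channels down to three, as the paper does, matters practically: fewer electrodes mean cheaper hardware and faster setup for real HCI/BCI deployments.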