1
Echeverry-Quiceno LM, Candelo E, Gómez E, Solís P, Ramírez D, Ortiz D, González A, Sevillano X, Cuéllar JC, Pachajoa H, Martínez-Abadías N. Population-specific facial traits and diagnosis accuracy of genetic and rare diseases in an admixed Colombian population. Sci Rep 2023; 13:6869. [PMID: 37106005; PMCID: PMC10140286; DOI: 10.1038/s41598-023-33374-x]
Abstract
Up to 40% of rare disorders (RD) present facial dysmorphologies, and visual assessment is commonly used for clinical diagnosis. Quantitative approaches are more objective, but mostly rely on populations of European descent, disregarding diverse population ancestry. Here, we assessed the facial phenotypes of Down (DS), Morquio (MS), Noonan (NS) and Neurofibromatosis type 1 (NF1) syndromes in a Latino-American population, recording the coordinates of 18 landmarks in 2D images from 79 controls and 51 patients. We quantified facial differences using Euclidean Distance Matrix Analysis and assessed the diagnostic accuracy of Face2Gene, an automatic deep-learning algorithm. Individuals diagnosed with DS and MS presented severe phenotypes, with 58.2% and 65.4% of facial traits, respectively, differing significantly from controls. The phenotype was milder in NS (47.7%) and non-significant in NF1 (11.4%). Each syndrome presented a characteristic dysmorphology pattern, supporting the diagnostic potential of facial biomarkers. However, population-specific traits were detected in the Colombian population. Diagnostic accuracy was 100% in DS, moderate in NS (66.7%) though lower than in a European population (100%), and below 10% in MS and NF1. Moreover, admixed individuals showed lower facial gestalt similarities. Our results underscore that incorporating populations with Amerindian, African and European ancestry is crucial to improving diagnostic methods for rare disorders.
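The Euclidean Distance Matrix Analysis used in this study compares shapes through ratios of all pairwise inter-landmark distances between groups. A minimal sketch of that idea, with invented toy 2D landmark data (the published study used 18 facial landmarks and significance testing not shown here):

```python
# Minimal sketch of Euclidean Distance Matrix Analysis (EDMA):
# compare two landmark configurations via ratios of mean pairwise
# inter-landmark distances. Landmark coordinates below are made up.
from itertools import combinations
from math import dist

def distance_matrix(landmarks):
    """All pairwise Euclidean distances, keyed by landmark index pair."""
    return {(i, j): dist(landmarks[i], landmarks[j])
            for i, j in combinations(range(len(landmarks)), 2)}

def form_difference(sample_a, sample_b):
    """Ratio of mean inter-landmark distances between two samples.

    Ratios far from 1.0 flag facial traits that differ between groups.
    """
    def mean_dm(sample):
        dms = [distance_matrix(ind) for ind in sample]
        return {pair: sum(dm[pair] for dm in dms) / len(dms)
                for pair in dms[0]}
    dm_a, dm_b = mean_dm(sample_a), mean_dm(sample_b)
    return {pair: dm_a[pair] / dm_b[pair] for pair in dm_a}

# Toy example: three 2D landmarks per face, two "individuals" per group.
controls = [[(0, 0), (4, 0), (2, 3)], [(0, 0), (4.2, 0), (2, 3.1)]]
patients = [[(0, 0), (5, 0), (2.5, 3)], [(0, 0), (5.2, 0), (2.5, 3.2)]]

fdm = form_difference(patients, controls)
for pair, ratio in sorted(fdm.items()):
    print(pair, round(ratio, 2))
```

In practice EDMA additionally assesses which ratios differ significantly (e.g. via bootstrap confidence intervals), which this sketch omits.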
Affiliation(s)
- Luis M Echeverry-Quiceno
- Departament de Biologia Evolutiva, Ecologia i Ciències Ambientals (BEECA), Facultat de Biologia, Universitat de Barcelona (UB), Av. Diagonal, 643. Planta 2, 08028, Barcelona, Spain
- Estephania Candelo
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Servicio de Genética Clínica, Fundación Valle del Lili, Cali, Colombia
- Eidith Gómez
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Paula Solís
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Diana Ramírez
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Diana Ortiz
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Alejandro González
- HER - Human-Environment Research Group, La Salle - Universitat Ramon Llull, Barcelona, Spain
- Xavier Sevillano
- HER - Human-Environment Research Group, La Salle - Universitat Ramon Llull, Barcelona, Spain
- Harry Pachajoa
- Centro de Investigaciones en Anomalías Congénitas y Enfermedades Raras (CIACER), Universidad ICESI, Cali, Colombia
- Servicio de Genética Clínica, Fundación Valle del Lili, Cali, Colombia
- Neus Martínez-Abadías
- Departament de Biologia Evolutiva, Ecologia i Ciències Ambientals (BEECA), Facultat de Biologia, Universitat de Barcelona (UB), Av. Diagonal, 643. Planta 2, 08028, Barcelona, Spain.
2
Fan Y, Wang H, Zhu X, Cao X, Yi C, Chen Y, Jia J, Lu X. FER-PCVT: Facial Expression Recognition with Patch-Convolutional Vision Transformer for Stroke Patients. Brain Sci 2022; 12:1626. [PMID: 36552086; PMCID: PMC9776282; DOI: 10.3390/brainsci12121626]
Abstract
Early rehabilitation at the right intensity contributes to the physical recovery of stroke survivors. In clinical practice, physicians determine whether the training intensity is suitable for rehabilitation based on patients' narratives, training scores, and evaluation scales, which puts tremendous pressure on medical resources. In this study, a lightweight facial expression recognition algorithm is proposed to assess stroke patients' training motivation automatically. First, the properties of convolution are introduced into the Vision Transformer's structure, allowing the model to extract both local and global features of facial expressions. Second, the pyramid-shaped feature output mode of Convolutional Neural Networks is introduced to significantly reduce the model's parameters and computational cost. Moreover, a classifier tailored to the facial expressions of stroke patients is designed to further improve performance. We verified the proposed algorithm on the Real-world Affective Faces Database (RAF-DB), the Face Expression Recognition Plus Dataset (FER+), and a private dataset of stroke patients. Experiments show that the backbone network of the proposed algorithm achieves better performance than Pyramid Vision Transformer (PVT) and Convolutional Vision Transformer (CvT) with fewer parameters and floating-point operations (FLOPs). In addition, the algorithm reaches 89.44% accuracy on the RAF-DB dataset, higher than other recent studies. In particular, it obtains an accuracy of 99.81% on the private dataset, with only 4.10M parameters.
Affiliation(s)
- Yiming Fan
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Hewei Wang
- Department of Rehabilitation, Huashan Hospital, Fudan University, Shanghai 200040, China
- Xiaoyu Zhu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Xiangming Cao
- Department of Oncology, Jiangyin People’s Hospital Affiliated to Nantong University, Wuxi 214400, China
- Chuanjian Yi
- Department of Rehabilitation, The Affiliated Hospital of Qingdao University, Qingdao 266000, China
- Yao Chen
- Department of Rehabilitation, Shanghai Third Rehabilitation Hospital, Shanghai 200436, China
- Jie Jia
- Department of Rehabilitation, Huashan Hospital, Fudan University, Shanghai 200040, China
- Xiaofeng Lu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Wenzhou Institute, Shanghai University, Wenzhou 325000, China
3
Emotion Recognition of Down Syndrome People Based on the Evaluation of Artificial Intelligence and Statistical Analysis Methods. Symmetry (Basel) 2022. [DOI: 10.3390/sym14122492]
Abstract
This article presents a study evaluating different techniques to automatically recognize the basic emotions of people with Down syndrome (anger, happiness, sadness, surprise, and neutrality), together with a statistical analysis based on the Facial Action Coding System to determine the symmetry of the Action Units present in each emotion and to identify the facial features that characterize this group. First, a dataset of facial images of people with Down syndrome, classified according to their emotions, was built. Then, the facial micro-expressions (Action Units) present in the emotions of the target group were evaluated through statistical analysis, using the intensity values of the most representative exclusive Action Units to classify emotions. Subsequently, the collected dataset was evaluated with machine learning and deep learning techniques for emotion recognition. Among the supervised learning techniques, the Support Vector Machine obtained the best precision, with a value of 66.20%. Among the deep learning methods, the mini-Xception convolutional neural network, used to recognize the emotions of people with typical development, obtained an accuracy of 74.8%.
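The statistical step described above classifies emotions from Action Unit (AU) intensity values. A minimal, hypothetical sketch of that idea via nearest-prototype matching; the AU prototype values below are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch (not the paper's code): classify an emotion from
# Action Unit (AU) intensities by nearest prototype. Prototype values
# are simplified, hypothetical numbers loosely inspired by FACS
# (e.g. AU12 "lip corner puller" for happiness, AU4 "brow lowerer" for anger).
from math import dist

# Hypothetical mean AU intensity profiles per emotion: (AU4, AU6, AU12, AU15)
PROTOTYPES = {
    "happiness": (0.1, 0.8, 0.9, 0.0),
    "anger":     (0.9, 0.1, 0.0, 0.3),
    "sadness":   (0.6, 0.0, 0.0, 0.8),
    "neutral":   (0.1, 0.1, 0.1, 0.1),
}

def classify(au_intensities):
    """Return the emotion whose AU prototype is closest in Euclidean distance."""
    return min(PROTOTYPES, key=lambda e: dist(PROTOTYPES[e], au_intensities))

print(classify((0.2, 0.7, 0.85, 0.05)))  # input resembling a strong smile
```

The paper's actual classifiers (SVM, mini-Xception) learn such decision boundaries from data rather than using fixed prototypes.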
4
Matthews H, de Jong G, Maal T, Claes P. Static and Motion Facial Analysis for Craniofacial Assessment and Diagnosing Diseases. Annu Rev Biomed Data Sci 2022; 5:19-42. [PMID: 35440145; DOI: 10.1146/annurev-biodatasci-122120-111413]
Abstract
Deviation from a normal facial shape and symmetry can arise from numerous sources, including physical injury and congenital birth defects. Such abnormalities can have important aesthetic and functional consequences. Furthermore, in clinical genetics distinctive facial appearances are often associated with clinical or genetic diagnoses; the recognition of a characteristic facial appearance can substantially narrow the search space of potential diagnoses for the clinician. Unusual patterns of facial movement and expression can indicate disturbances to normal mechanical functioning or emotional affect. Computational analyses of static and moving 2D and 3D images can serve clinicians and researchers by detecting and describing facial structural, mechanical, and affective abnormalities objectively. In this review we survey traditional and emerging methods of facial analysis, including statistical shape modeling, syndrome classification, modeling clinical face phenotype spaces, and analysis of facial motion and affect.
Affiliation(s)
- Harold Matthews
- Department of Human Genetics, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Parkville, Australia
- Guido de Jong
- 3D Lab, Radboud University Medical Center, Nijmegen, The Netherlands
- Thomas Maal
- 3D Lab, Radboud University Medical Center, Nijmegen, The Netherlands
- Peter Claes
- Department of Human Genetics, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Parkville, Australia; Processing Speech and Images (PSI), Department of Electrical Engineering (ESAT), KU Leuven, Leuven, Belgium
5
Wang CW, Peng CC. 3D Face Point Cloud Reconstruction and Recognition Using Depth Sensor. Sensors 2021; 21:2587. [PMID: 33917034; PMCID: PMC8067758; DOI: 10.3390/s21082587]
Abstract
Facial recognition has attracted increasing attention with the rapid growth of artificial intelligence (AI) techniques in recent years. However, most related work on facial reconstruction and recognition relies on big data collection and deep learning algorithms. These data-driven AI approaches inevitably increase computational complexity and usually depend heavily on GPU capacity. A typical limitation of RGB-based facial recognition is its poor applicability in low-light or dark environments. To address this problem, this paper presents an effective procedure for facial reconstruction and recognition using a depth sensor. For each test candidate, the depth camera acquires multiple views of the candidate's 3D point clouds. The point cloud sets are stitched into a reconstructed 3D model using the iterative closest point (ICP) algorithm. A segmentation procedure then separates the model into a body part and a head part. From the segmented 3D face point clouds, facial features are extracted for recognition scoring. A single shot from the depth sensor is registered against the stored 3D face models to determine the best-matching candidate. Using the proposed feature-based 3D facial similarity score, which combines normal, curvature, and registration similarities between point clouds, a person can be labeled correctly even in a dark environment. The proposed method suits smart devices such as smartphones and tablets equipped with a small depth camera. Experiments with real-world data show that the proposed method reconstructs denser models and achieves point cloud-based 3D face recognition.
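The reconstruction step described above hinges on iterative closest point (ICP) registration. A minimal 2D sketch of the ICP idea under simplifying assumptions (toy point set, point-to-point matching, closed-form rigid alignment; the paper works with 3D multi-view clouds):

```python
# Minimal 2D sketch of iterative closest point (ICP): pair each source
# point with its nearest neighbour in the reference cloud, then solve
# the best rigid (rotation + translation) alignment in closed form.
# Toy data, not the paper's 3D implementation.
from math import atan2, cos, sin, dist

def best_rigid_2d(src, dst):
    """Closed-form rotation angle and translation aligning src onto dst."""
    cx_s = sum(x for x, _ in src) / len(src)
    cy_s = sum(y for _, y in src) / len(src)
    cx_d = sum(x for x, _ in dst) / len(dst)
    cy_d = sum(y for _, y in dst) / len(dst)
    # Accumulate dot and cross terms of the centred correspondences.
    sxx = sum((x - cx_s) * (u - cx_d) + (y - cy_s) * (v - cy_d)
              for (x, y), (u, v) in zip(src, dst))
    sxy = sum((x - cx_s) * (v - cy_d) - (y - cy_s) * (u - cx_d)
              for (x, y), (u, v) in zip(src, dst))
    theta = atan2(sxy, sxx)
    tx = cx_d - (cx_s * cos(theta) - cy_s * sin(theta))
    ty = cy_d - (cx_s * sin(theta) + cy_s * cos(theta))
    return theta, tx, ty

def icp(src, ref, iters=20):
    """Iteratively align src to ref via nearest-neighbour correspondences."""
    pts = list(src)
    for _ in range(iters):
        matches = [min(ref, key=lambda r: dist(p, r)) for p in pts]
        theta, tx, ty = best_rigid_2d(pts, matches)
        pts = [(x * cos(theta) - y * sin(theta) + tx,
                x * sin(theta) + y * cos(theta) + ty) for x, y in pts]
    return pts

# Reference cloud and a rotated, shifted copy of it.
ref = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 2)]
moved = [(x * cos(0.3) - y * sin(0.3) + 0.5,
          x * sin(0.3) + y * cos(0.3) - 0.2) for x, y in ref]
aligned = icp(moved, ref)
print(max(dist(a, r) for a, r in zip(aligned, ref)))  # residual alignment error
```

Real 3D pipelines use spatial indices for the nearest-neighbour search and an SVD-based rigid fit, but the correspondence-then-align loop is the same.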