1
Hayajneh A, Serpedin E, Shaqfeh M, Glass G, Stotland MA. Adapting a style-based generative adversarial network to create images depicting cleft lip deformity. Sci Rep 2025; 15:3614. [PMID: 39875471] [PMCID: PMC11775284] [DOI: 10.1038/s41598-025-86588-6]
Abstract
Training a machine learning system to evaluate any type of facial deformity is impeded by the scarcity of large datasets of high-quality, ethics board-approved patient images. We have built a deep learning-based cleft lip generator called CleftGAN, designed to produce an almost unlimited number of high-fidelity facsimiles of cleft lip facial images with wide variation. A transfer learning protocol testing different versions of StyleGAN as the base model was undertaken. Data augmentation maneuvers permitted input of merely 514 frontal photographs of cleft-affected faces adapted to a base model of 70,000 normal faces. The Fréchet Inception Distance was used to measure the similarity of the newly generated facial images to the cleft training dataset. Perceptual Path Length and a novel Divergence Index of Normality measure also assessed the performance of the image generator. CleftGAN generates vast numbers of unique faces depicting a wide range of cleft lip deformity and variation of ethnic background. Performance metrics demonstrated a high similarity of the generated images to our training dataset and a smooth, semantically valid interpolation of images through the transfer learning process. The distributions of normality for the training and generated images were highly comparable. CleftGAN is a novel instrument that generates an almost boundless number of realistic facial images depicting cleft lip. This tool promises to become a valuable resource for the development of machine learning models to objectively evaluate facial form and the outcomes of surgical reconstruction.
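The Fréchet Inception Distance cited in this abstract compares the Gaussian statistics of real and generated image features. A minimal sketch of the computation on precomputed feature vectors, assuming the Inception-v3 embedding step has already been applied (the random features below are purely illustrative):

```python
import numpy as np

def frechet_distance(x, y):
    """Frechet distance between Gaussians fitted to two feature sets (rows = samples)."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cx = np.cov(x, rowvar=False)
    cy = np.cov(y, rowvar=False)
    # Tr((Cx Cy)^(1/2)) via eigenvalues; real and nonnegative for PSD covariances,
    # so we drop tiny numerical imaginary parts and clip negatives.
    eig = np.linalg.eigvals(cx @ cy).real
    tr_sqrt = np.sqrt(np.clip(eig, 0.0, None)).sum()
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cx) + np.trace(cy) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 8))            # stand-in for training-set features
gen_feats = rng.normal(loc=0.5, size=(500, 8))    # stand-in for generated-set features
print(frechet_distance(real_feats, real_feats))   # identical sets -> ~0
print(frechet_distance(real_feats, gen_feats))    # shifted mean -> clearly positive
```

Identical feature sets give a distance of approximately zero; a mean shift between the sets yields a clearly positive distance, which is why lower FID indicates generated images statistically closer to the training data.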
Affiliation(s)
- Abdullah Hayajneh
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
- Erchin Serpedin
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
- Mohammad Shaqfeh
- Electrical and Computer Engineering Program, Texas A&M University, Doha, Qatar
- Graeme Glass
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, C1-121, Al Gharrafa St, Ar Rayyan, Doha, Qatar
- Mitchell A Stotland
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, C1-121, Al Gharrafa St, Ar Rayyan, Doha, Qatar
2
Atiyeh B, Emsieh S, Hakim C, Chalhoub R. A Narrative Review of Artificial Intelligence (AI) for Objective Assessment of Aesthetic Endpoints in Plastic Surgery. Aesthetic Plast Surg 2023; 47:2862-2873. [PMID: 37000298] [DOI: 10.1007/s00266-023-03328-9]
Abstract
Notoriously characterized by subjectivity and a lack of solid scientific validation, reporting of aesthetic outcome in plastic surgery is usually based on ill-defined endpoints and subjective measures, very often from the patients' and/or providers' perspective. With the tremendous increase in demand for all types of aesthetic procedures, there is an urgent need for a better understanding of aesthetics and beauty, in addition to reliable and objective outcome measures to quantitate what is perceived as beautiful and attractive. In an era of evidence-based medicine, recognition of the importance of science with an evidence-based approach to aesthetic surgery is long overdue. In view of the many limitations of conventional outcome evaluation tools for aesthetic interventions, objective outcome analysis using tools described as reliable, such as advanced artificial intelligence (AI), is being investigated. The current review is intended to analyze available evidence regarding the advantages as well as the limitations of this technology in objectively documenting the outcome of aesthetic interventions. The review shows that some AI applications, such as facial emotion recognition systems, are capable of objectively measuring and quantitating patient-reported outcomes and defining the success of aesthetic interventions from the patients' perspective. Though not yet reported, observers' satisfaction with the results and their appreciation of aesthetic attributes may also be measured in the same manner. Level of Evidence III: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
Affiliation(s)
- Bishara Atiyeh
- American University of Beirut Medical Center, Beirut, Lebanon
- Saif Emsieh
- American University of Beirut Medical Center, Beirut, Lebanon
- Rawad Chalhoub
- American University of Beirut Medical Center, Beirut, Lebanon
3
Atiyeh B, Emsieh S, Hakim C, Chalhoub R, Habal M. A Narrative Review of Eye-Tracking Assessment of Esthetic Endpoints in Plastic, Reconstructive, and Craniofacial Surgery. J Craniofac Surg 2023; 34:2137-2141. [PMID: 37590000] [DOI: 10.1097/scs.0000000000009578]
Abstract
Reporting of esthetic outcomes in plastic surgery classically relies on ill-defined endpoints and subjective measures, very often from the patients' and/or providers' perspectives, that are notoriously characterized by subjectivity and questionable scientific validation. With the recent trend of increasing demand for all types of esthetic medical and surgical interventions, there is an urgent need for reliable and objective outcome measures to quantitate esthetic outcomes and determine the efficacy of these interventions. The current review is intended to analyze available evidence regarding the advantages as well as the limitations of eye-tracking (ET) technology in objectively documenting esthetic outcomes of plastic, reconstructive, and craniofacial interventions. Although gaze pattern analysis is gaining more attention, ET data should be interpreted with caution; how a specific visual stimulus directly influences one's sense of esthetics is still not clear. Furthermore, despite its great potential, it is still too early to confirm or deny the usefulness of ET. Nevertheless, because patient-reported outcomes are most indicative of an esthetic intervention's success, measurement of patient satisfaction by ET technology could offer a major breakthrough in the objective assessment of esthetic outcomes, and this warrants further in-depth investigation. Level of Evidence: III.
Affiliation(s)
- Bishara Atiyeh
- American University of Beirut Medical Center, Beirut, Lebanon
4
Hayajneh A, Shaqfeh M, Serpedin E, Stotland MA. Unsupervised anomaly appraisal of cleft faces using a StyleGAN2-based model adaptation technique. PLoS One 2023; 18:e0288228. [PMID: 37535557] [PMCID: PMC10399833] [DOI: 10.1371/journal.pone.0288228]
Abstract
A novel machine learning framework that is able to consistently detect, localize, and measure the severity of human congenital cleft lip anomalies is introduced. The ultimate goal is to fill an important clinical void: to provide an objective and clinically feasible method of gauging baseline facial deformity and the change obtained through reconstructive surgical intervention. The proposed method first employs the StyleGAN2 generative adversarial network with model adaptation to produce a normalized transformation of 125 faces, and then uses a pixel-wise subtraction approach to assess the difference between all baseline images and their normalized counterparts (a proxy for severity of deformity). The pipeline of the proposed framework consists of the following steps: image preprocessing, face normalization, color transformation, heat-map generation, morphological erosion, and abnormality scoring. Heatmaps that finely discern anatomic anomalies visually corroborate the generated scores. The proposed framework is validated through computer simulations as well as by comparison of machine-generated versus human ratings of facial images. The anomaly scores yielded by the proposed computer model correlate closely with human ratings, with a calculated Pearson's r score of 0.89. The proposed pixel-wise measurement technique is shown to more closely mirror human ratings of cleft faces than two other existing, state-of-the-art image quality metrics (Learned Perceptual Image Patch Similarity and Structural Similarity Index). The proposed model may represent a new standard for objective, automated, and real-time clinical measurement of faces affected by congenital cleft deformity.
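The pixel-wise subtraction, erosion, and scoring steps of the pipeline described above can be sketched roughly as follows. This is an illustrative reimplementation, not the paper's code: the StyleGAN2 face normalization is replaced by a precomputed "normalized" image, and the function names and threshold are assumptions.

```python
import numpy as np

def binary_erosion(mask, iterations=1):
    """Minimal 4-neighbour binary erosion (stand-in for a morphology library).
    Uses np.roll, so edges wrap around; acceptable for this sketch."""
    m = mask.astype(bool)
    for _ in range(iterations):
        shifted = [np.roll(m, s, axis=ax) for ax in (0, 1) for s in (1, -1)]
        m = m & shifted[0] & shifted[1] & shifted[2] & shifted[3]
    return m

def anomaly_score(baseline, normalized, thresh=0.2):
    """Pixel-wise difference -> heat map -> erosion -> scalar score."""
    heat = np.abs(baseline - normalized).mean(axis=-1)  # per-pixel channel-mean difference
    mask = binary_erosion(heat > thresh, iterations=1)  # suppress isolated speckle
    return float((heat * mask).sum() / heat.size), heat

rng = np.random.default_rng(1)
normal_face = rng.random((64, 64, 3))
cleft_face = normal_face.copy()
cleft_face[30:40, 25:35] += 0.5                         # simulated focal anomaly
score_anom, _ = anomaly_score(cleft_face, normal_face)
score_same, _ = anomaly_score(normal_face, normal_face)
```

An image identical to its normalized counterpart scores zero, while the simulated focal difference produces a positive score localized by the heat map, mirroring the paper's use of heatmaps to visually corroborate the scores.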
Affiliation(s)
- Abdullah Hayajneh
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, United States of America
- Mohammad Shaqfeh
- Electrical and Computer Engineering Program, Texas A&M University, Doha, Qatar
- Erchin Serpedin
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, United States of America
- Mitchell A Stotland
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, Doha, Qatar
5
Takiddin A, Shaqfeh M, Boyaci O, Serpedin E, Stotland M. Gauging Facial Abnormality Using Haar-Cascade Object Detector. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1448-1451. [PMID: 36086585] [DOI: 10.1109/embc48229.2022.9871337]
Abstract
The overriding clinical and academic challenge that inspires this work is the lack of a universally accepted, objective, and feasible method of measuring facial deformity and, by extension, the lack of a reliable means of assessing the benefits and shortcomings of craniofacial surgical interventions. We propose a machine learning-based method to create a scale of facial deformity by producing numerical scores that reflect the level of deformity. An object detector constructed using a cascade function of Haar features was trained with a rich dataset of normal faces in addition to a collection of images containing no faces. The confidence score of the face detector was then used as a gauge of facial abnormality. The scores were compared with a benchmark based on human appraisals obtained using a survey covering a range of facial deformities. Interestingly, the overall Pearson's correlation coefficient of the machine scores with respect to the average human score exceeded 0.96.
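The Haar features cascaded by such a detector are differences of rectangle sums, each evaluated in constant time from an integral image. A self-contained sketch of one two-rectangle (vertical-edge) feature follows; the cascade training and the detector confidence score used in the paper are beyond this snippet:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using four integral-image lookups."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_edge_feature(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half (vertical edge)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.zeros((8, 8))
img[:, :4] = 1.0                           # bright left half, dark right half
ii = integral_image(img)
print(haar_edge_feature(ii, 0, 0, 8, 8))   # strong response on the edge -> 32.0
```

A cascade classifier thresholds thousands of such features in stages; the detector's stage-level confidence on a face image is what the authors repurpose as an abnormality gauge.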
6
Takiddin A, Shaqfeh M, Boyaci O, Serpedin E, Stotland MA. Toward a Universal Measure of Facial Difference Using Two Novel Machine Learning Models. Plast Reconstr Surg Glob Open 2022; 10:e4034. [PMID: 35070595] [PMCID: PMC8769118] [DOI: 10.1097/gox.0000000000004034]
Abstract
A sensitive, objective, and universally accepted method of measuring facial deformity does not currently exist. Two distinct machine learning methods are described here that produce numerical scores reflecting the level of deformity of a wide variety of facial conditions.
Methods: The first proposed technique utilizes an object detector based on a cascade function of Haar features. The model was trained using a dataset of 200,000 normal faces, as well as a collection of images devoid of faces. With the model trained to detect normal faces, the face detector confidence score was shown to function as a reliable gauge of facial abnormality. The second technique developed is based on a deep learning architecture of a convolutional autoencoder trained with the same rich dataset of normal faces. Because the convolutional autoencoder regenerates images disposed toward their training dataset (ie, normal faces), we utilized its reconstruction error as an indicator of facial abnormality. Scores generated by both methods were compared with human ratings obtained using a survey of 80 subjects evaluating 60 images depicting a range of facial deformities [rating from 1 (abnormal) to 7 (normal)].
Results: The machine scores were highly correlated to the average human score, with an overall Pearson's correlation coefficient exceeding 0.96 (P < 0.00001). Both methods were computationally efficient, reporting results within 3 seconds.
Conclusions: These models show promise for adaptation into a clinically accessible handheld tool. It is anticipated that ongoing development of this technology will facilitate multicenter collaboration and comparison of outcomes between conditions, techniques, operators, and institutions.
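The Pearson correlation used to validate both models against the averaged human ratings is simple to reproduce. A toy sketch with made-up score vectors (illustrative numbers only, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical machine anomaly scores vs. average human ratings
# on the study's 1 (abnormal) to 7 (normal) scale.
machine = [0.12, 0.35, 0.40, 0.58, 0.71, 0.90]
human = [1.5, 2.8, 3.1, 4.6, 5.9, 6.7]
print(round(pearson_r(machine, human), 3))
```

A coefficient near 1 indicates the machine ranking of faces closely tracks the human consensus, which is the sense in which the paper reports r > 0.96.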
Affiliation(s)
- Abdulrahman Takiddin
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
- Mohammad Shaqfeh
- Electrical and Computer Engineering Department, Texas A&M University, Doha, Qatar
- Osman Boyaci
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
- Erchin Serpedin
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, USA
- Mitchell A. Stotland
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, Doha, Qatar
- Weill Cornell Medical College, Doha, Qatar
7
Dagli MM, Rajesh A, Asaad M, Butler CE. The Use of Artificial Intelligence and Machine Learning in Surgery: A Comprehensive Literature Review. Am Surg 2021:31348211065101. [PMID: 34958252] [DOI: 10.1177/00031348211065101]
Abstract
Interest in the use of artificial intelligence (AI) and machine learning (ML) in medicine has grown exponentially over the last few years. With its ability to enhance speed, precision, and efficiency, AI has immense potential, especially in the field of surgery. This article aims to provide a comprehensive literature review of artificial intelligence as it applies to surgery and discuss practical examples, current applications, and challenges to the adoption of this technology. Furthermore, we elaborate on the utility of natural language processing and computer vision in improving surgical outcomes, research, and patient care.
Affiliation(s)
- Aashish Rajesh
- Department of Surgery, University of Texas Health Science Center, San Antonio, TX, USA
- Malke Asaad
- Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Charles E Butler
- Department of Plastic & Reconstructive Surgery, The University of Texas MD Anderson Cancer Center, Houston, TX, USA