1. Singerman K, Kallenberger E, Humphrey C, Kriet JD, Flynn J. Cross-Facial Nerve Grafting Used Independently in Facial Reanimation: A Narrative Review. Facial Plast Surg Aesthet Med 2024. PMID: 38946615. DOI: 10.1089/fpsam.2023.0288.
Abstract
Importance: Cross-facial nerve grafting (CFNG) for facial palsy (FP) offers the potential to restore spontaneous facial expression, but data on its specific indications and associated outcomes are limited. Updates to this technique have aided its successful employment in select cases. This review aims to explore the contexts in which CFNG has been successfully utilized as a primary modality. Observations: A literature review was performed auditing all studies that investigated CFNG as a primary modality and reported outcomes. A total of 326 cases reporting outcomes for primary CFNG were included. Eye closure outcomes were successful in 83.3% of patients aged 0-18, 77.3% aged 19-40, and 57.1% aged 41+. Smile outcomes were successful in 73.7% aged 0-18, 81.5% aged 19-40, and 52.8% aged 41+. For synkinesis, 89% of cases were considered successful: 100% at ages 0-18 and 78.4% in adults. Conclusions and Relevance: CFNG may offer return of spontaneous facial function in select cases. Higher percentages of successful outcomes are observed in younger patients, when the procedure is performed in two stages, and, for eye closure restoration, when it is performed earlier after the onset of FP. In the modern era, CFNG has more commonly been employed as an adjunct to other reanimation techniques.
Affiliation(s)
- Kyle Singerman: Department of Otolaryngology, Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA
- Ethan Kallenberger: Department of Otolaryngology, Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA; University of Kansas School of Medicine, Kansas City, Kansas, USA
- Clinton Humphrey: Department of Otolaryngology, Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA
- J David Kriet: Department of Otolaryngology, Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA
- John Flynn: Department of Otolaryngology, Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA
2. Kollar B, Weiss JBW, Kiefer J, Eisenhardt SU. Functional Outcome of Dual Reinnervation with Cross-Facial Nerve Graft and Masseteric Nerve Transfer for Facial Paralysis. Plast Reconstr Surg 2024; 153:1178e-1190e. PMID: 37384874. DOI: 10.1097/prs.0000000000010888.
Abstract
BACKGROUND The combination of cross-facial nerve graft (CFNG) and masseteric nerve transfer (MNT) for reinnervation of facial paralysis may provide the advantages of both neural sources. However, quantitative functional outcome reports with a larger number of patients are lacking in the literature. Here, the authors describe their 8-year experience with this surgical technique. METHODS Twenty patients who presented with complete facial paralysis (duration <12 months) received dual reinnervation with CFNG and MNT. The functional outcome of the procedure was evaluated with the physician-graded eFACE scale. The objective artificial intelligence-driven software packages Emotrics and FaceReader were used for oral commissure measurements and emotional expression assessment, respectively. RESULTS The mean follow-up was 31.75 ± 23.32 months. On the eFACE score, nasolabial fold depth and oral commissure position at rest improved significantly (P < 0.05) toward a more balanced state after surgery. Postoperatively, there was a significant decrease in oral commissure asymmetry while smiling (from 19.22 ± 6.1 mm to 12.19 ± 7.52 mm). For emotional expression, the median intensity score of happiness, as measured by the FaceReader software, increased significantly while smiling (0.28; interquartile range, 0.13 to 0.64). In five patients (25%), a secondary static midface suspension with a fascia lata strip had to be performed because of unsatisfactory resting symmetry. Older patients and patients with greater preoperative resting asymmetry were more likely to receive static midface suspension. CONCLUSION The authors' results suggest that the combination of MNT and CFNG for reinnervation of facial paralysis provides good voluntary motion and may reduce the need for static midface suspension in the majority of patients. CLINICAL QUESTION/LEVEL OF EVIDENCE Therapeutic, IV.
Affiliation(s)
- Branislav Kollar: Department of Plastic and Hand Surgery, University of Freiburg Medical Center, Medical Faculty of the University of Freiburg
- Jakob B W Weiss: Department of Plastic and Hand Surgery, University of Freiburg Medical Center, Medical Faculty of the University of Freiburg
- Jurij Kiefer: Department of Plastic and Hand Surgery, University of Freiburg Medical Center, Medical Faculty of the University of Freiburg
- Steffen U Eisenhardt: Department of Plastic and Hand Surgery, University of Freiburg Medical Center, Medical Faculty of the University of Freiburg
3. Zhu A, Boonipat T, Cherukuri S, Bite U. Defining Standard Values for FaceReader Facial Expression Software Output. Aesthetic Plast Surg 2024; 48:785-792. PMID: 37460734. DOI: 10.1007/s00266-023-03468-y.
Abstract
BACKGROUND FaceReader is a validated software package that uses computer vision technology for facial expression recognition; it has become increasingly popular in academic research to expedite, scale, and decrease the cost of facial emotion analysis. In this study, we compare FaceReader analysis to human evaluator interpretation in order to define standard values for the software output. METHODS Randomly generated facial images produced by generative adversarial networks were analyzed using FaceReader and by survey participants (n = 496). The age, facial emotion, and intensity of emotion as determined by the software and by the survey participants were recorded, analyzed, and compared. RESULTS 80 randomly generated images (20 children, 20 young adults, 20 middle-aged adults, and 20 elderly adults; 38 male and 42 female) were included. Agreement between the most common expression identified by FaceReader and the primary emotion detected by survey participants was strong (κ = 0.77, 95% CI 0.64-0.91). By age group, agreement was fair in children (κ = 0.40, 95% CI 0.078-0.72), perfect in young adults (κ = 1.0, 95% CI 1.0-1.0), strong in middle-aged adults (κ = 0.79, 95% CI 0.53-1.0), and near perfect in elderly adults (κ = 0.9, 95% CI 0.7-1.0). CONCLUSIONS We provide the first study defining the expected average values generated by FaceReader in generally smiling images, which can serve as a standard in future studies. LEVEL OF EVIDENCE IV This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
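The κ values quoted above are Cohen's kappa, a chance-corrected measure of agreement between two raters (here, the software's label versus the surveyors' primary emotion). A minimal sketch of how such a value is computed (illustrative only; the study's actual pairing of FaceReader output with surveyor votes is not reproduced here):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    Returns 1.0 for perfect agreement and 0.0 for agreement no better
    than chance given each rater's label frequencies.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For example, two raters who agree on every image score κ = 1.0, while raters whose matches are exactly what their label frequencies predict score κ = 0.0.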
Affiliation(s)
- Agnes Zhu: Mayo Clinic Alix School of Medicine, 200 1st St. SW, Rochester, MN, 55905, USA
- Sai Cherukuri: Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
- Uldis Bite: Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
4. Espinosa Reyes JA, Puerta Romero M, Cobo R, Heredia N, Solís Ruiz LA, Corredor Zuluaga DA. Artificial Intelligence in Facial Plastic and Reconstructive Surgery: A Systematic Review. Facial Plast Surg 2024. PMID: 37992752. DOI: 10.1055/a-2216-5099.
Abstract
Artificial intelligence (AI) is a technology that is evolving rapidly and is changing the world and medicine as we know it. A review of the PROSPERO database of systematic reviews found no article on this topic in facial plastic and reconstructive surgery. The objective of this article was to review the literature regarding AI applications in facial plastic and reconstructive surgery. A systematic review of the literature on AI in facial plastic and reconstructive surgery was performed using the keywords Artificial Intelligence, robotics, plastic surgery procedures, and surgery plastic, and the databases PubMed, SCOPUS, Embase, BVS, and LILACS. The inclusion criteria were articles about AI in facial plastic and reconstructive surgery; articles written in a language other than English or Spanish were excluded. In total, 17 articles about AI in facial plastic surgery met the inclusion criteria; after eliminating duplicated papers and applying the exclusion criteria, these articles were reviewed thoroughly. The leading type of AI used in these articles was computer vision, specifically convolutional neural network models used to objectively compare the preoperative and postoperative states in multiple interventions such as facial lifting and facial transgender surgery. In conclusion, AI is a rapidly evolving technology that could significantly impact the treatment of patients in facial plastic and reconstructive surgery. Legislation and regulation are developing more slowly than the technology itself. It is imperative to learn about this topic as soon as possible and for all stakeholders to proactively promote discussion of its ethical and regulatory dilemmas.
Affiliation(s)
- Jorge Alberto Espinosa Reyes: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, The Face & Nose Institute, Private Practice Clínica INO, Bogotá, DC, Colombia
- Mauricio Puerta Romero: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, Private Practice Clínica Sebastían de Belalcázar, Cali, Valle del Cauca, Colombia
- Roxana Cobo: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, The Face & Nose Institute, Private Practice at Clínica Imbanaco, Cali, Valle del Cauca, Colombia
- Nicolas Heredia: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, The Face & Nose Institute, Bogotá, D.C., Colombia
- Luis Alberto Solís Ruiz: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, Private Practice, Chihuahua, Chihuahua, México
- Diego Andres Corredor Zuluaga: Department of Otolaryngology and Facial Plastic & Reconstructive Surgery, Private Practice, Pereira, Risaralda, Colombia
5. TerKonda SP, TerKonda AA, Sacks JM, Kinney BM, Gurtner GC, Nachbar JM, Reddy SK, Jeffers LL. Artificial Intelligence: Singularity Approaches. Plast Reconstr Surg 2024; 153:204e-217e. PMID: 37075274. DOI: 10.1097/prs.0000000000010572.
Abstract
SUMMARY Artificial intelligence (AI) has been a disruptive technology within health care, from the development of simple care algorithms to complex deep-learning models. AI has the potential to reduce the burden of administrative tasks, advance clinical decision-making, and improve patient outcomes. Unlocking the full potential of AI requires the analysis of vast quantities of clinical information. Although AI holds tremendous promise, widespread adoption within plastic surgery remains limited. Understanding the basics is essential for plastic surgeons to evaluate the potential uses of AI. This review provides an introduction to AI, including its history, key concepts, applications in plastic surgery, and future implications.
Affiliation(s)
- Sarvam P TerKonda: Division of Plastic and Reconstructive Surgery, Mayo Clinic Florida
- Anurag A TerKonda: Division of Plastic and Reconstructive Surgery, Washington University School of Medicine in St. Louis
- Justin M Sacks: Division of Plastic and Reconstructive Surgery, Washington University School of Medicine in St. Louis
- Brian M Kinney: Division of Plastic Surgery, University of Southern California
- Geoff C Gurtner: Division of Plastic and Reconstructive Surgery, Stanford University
6. Zhu A, Boonipat T, Cherukuri S, Lin J, Bite U. How Brow Rotation Affects Emotional Expression Utilizing Artificial Intelligence. Aesthetic Plast Surg 2023; 47:2552-2560. PMID: 37626138. DOI: 10.1007/s00266-023-03615-5.
Abstract
BACKGROUND It is well known that brow position affects emotional expression. However, there is little literature on how, and to what degree, this change in emotional expression happens. Previous studies on this topic have relied on manual rating, which keeps them small and labor intensive. Our objective was to correlate manual brow rotations with emotional outcomes using artificial intelligence, to objectively determine how specific brow manipulations affect human expression. METHODS We included 53 brow-lift patients in this study. Preoperative patients' brows were rotated to -20, -10, +10, and +20 degrees with respect to the central axis of the existing brow using PIXLR, a cloud-based set of image editing tools and utilities. These images were analyzed using FaceReader, a validated software package that uses computer vision technology for facial expression recognition. The primary facial emotion and the intensity of facial action units (0 = no action unit detected to 4 = most intense action unit detected) generated by the software were recorded. RESULTS 265 total images [5 images (preoperative, -20, -10, +10, and +20 degrees of brow rotation) per patient] were analyzed using FaceReader. The primary emotion detected in the majority of images was neutral. The percentage of disgust in patients' expressions, as detected by FaceReader, increased with increasing positive brow rotation (1.76% disgust detected at -20 degrees, 2.09% at -10 degrees, 2.65% at neutral, 2.61% at +10 degrees, and 2.95% at +20 degrees). In contrast, the percentage of sadness decreased with increasing positive brow rotation (29.92% sadness detected at -20 degrees, 21.5% at -10 degrees, 11.42% at neutral, 15.75% at +10 degrees, and 12.86% at +20 degrees). Our facial action unit analysis corresponded with the primary emotion analysis. The intensity of the inner brow raiser decreased with increasing positive brow rotation (8.54% at -20 degrees, 4.21% at -10 degrees, 1.48% at neutral, 0.84% at +10 degrees, and 0.76% at +20 degrees). The intensity of the outer brow raiser increased with increasing positive brow rotation (0.97% at -20 degrees, 0.45% at -10 degrees, 1.12% at neutral, 5.45% at +10 degrees, and 11.19% at +20 degrees). CONCLUSION Increasing the degree of brow rotation correlated positively with the percentage of disgust and inversely with the percentage of sadness detected by FaceReader. This study demonstrates how different brow positions affect emotional outcomes as assessed by artificial intelligence; physicians can use these findings to better understand how brow-lifts may affect the perceived emotion of their patients. LEVEL OF EVIDENCE III
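The monotone trends reported above can be summarized with a correlation coefficient between rotation angle and detected emotion percentage. A sketch using the percentages quoted in the abstract (a pure-Python Pearson r; illustrative only, not the authors' analysis):

```python
import math

# FaceReader percentages reported at brow rotations of -20, -10, 0, +10, +20 degrees.
rotations = [-20, -10, 0, 10, 20]
disgust = [1.76, 2.09, 2.65, 2.61, 2.95]
sadness = [29.92, 21.5, 11.42, 15.75, 12.86]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Disgust rises with positive rotation (strong positive r);
# sadness falls with positive rotation (negative r).
r_disgust = pearson_r(rotations, disgust)
r_sadness = pearson_r(rotations, sadness)
```

On these five reported points, r_disgust is strongly positive and r_sadness clearly negative, matching the abstract's qualitative description of the trends.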
Affiliation(s)
- Agnes Zhu: Mayo Clinic Alix School of Medicine, 200 First St. SW, Rochester, MN, 55905, USA
- Sai Cherukuri: Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
- Jason Lin: Division of Plastic and Reconstructive Surgery, Saint Louis University, St. Louis, MO, USA
- Uldis Bite: Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
7. Atiyeh B, Emsieh S, Hakim C, Chalhoub R. A Narrative Review of Artificial Intelligence (AI) for Objective Assessment of Aesthetic Endpoints in Plastic Surgery. Aesthetic Plast Surg 2023; 47:2862-2873. PMID: 37000298. DOI: 10.1007/s00266-023-03328-9.
Abstract
Notoriously characterized by subjectivity and a lack of solid scientific validation, reporting of aesthetic outcomes in plastic surgery is usually based on ill-defined endpoints and subjective measures, very often from the patients' and/or providers' perspective. With the tremendous increase in demand for all types of aesthetic procedures, there is an urgent need for better understanding of aesthetics and beauty, in addition to reliable and objective outcome measures that quantitate what is perceived as beautiful and attractive. In an era of evidence-based medicine, recognition of the importance of science and of an evidence-based approach to aesthetic surgery is long overdue. In view of the many limitations of conventional tools for evaluating the outcome of aesthetic interventions, objective outcome analysis provided by tools described as reliable, such as advanced artificial intelligence (AI), is being investigated. The current review analyzes the available evidence regarding the advantages as well as the limitations of this technology for objectively documenting the outcome of aesthetic interventions. It shows that some AI applications, such as facial emotion recognition systems, are capable of objectively measuring and quantitating patient-reported outcomes and of defining the success of aesthetic interventions from the patients' perspective. Though not yet reported, observers' satisfaction with the results and their appreciation of aesthetic attributes could be measured in the same manner. Level of Evidence III
Affiliation(s)
- Bishara Atiyeh: American University of Beirut Medical Center, Beirut, Lebanon
- Saif Emsieh: American University of Beirut Medical Center, Beirut, Lebanon
- Rawad Chalhoub: American University of Beirut Medical Center, Beirut, Lebanon
8. ElHawary H, Watt A, Chartier C, Gorgy A, Gilardino MS. Pocket Predictors: Are Smartphones the Future of Artificial Intelligence in Plastic Surgery. Plast Surg (Oakv) 2023; 31:415-416. PMID: 37915351. PMCID: PMC10617461. DOI: 10.1177/22925503221078687.
Affiliation(s)
- Hassan ElHawary (co-first author, contributed equally): Division of Plastic and Reconstructive Surgery, McGill University Health Centre, Montreal, Quebec, Canada
- Ayden Watt (co-first author, contributed equally): Department of Experimental Surgery, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Andrew Gorgy: Division of Plastic and Reconstructive Surgery, McGill University Health Centre, Montreal, Quebec, Canada
- Mirko S. Gilardino: Division of Plastic and Reconstructive Surgery, McGill University Health Centre, Montreal, Quebec, Canada
9. Yan Y, Lv C, Wang B, Wang X, Han W, Sun M, Kim BS, Zhang Y, Bao J, Lin L, Chai G. Applying artificial intelligence algorithm in the design of a guide plate for mandibular angle ostectomy. J Plast Reconstr Aesthet Surg 2023; 84:595-604. PMID: 37451235. DOI: 10.1016/j.bjps.2023.05.030.
Abstract
PURPOSE Surgical guide plates can improve the accuracy of surgery, but their design process is complex and time-consuming. This study aimed to use artificial intelligence (AI) to design standardized mandibular angle ostectomy guide plates and reduce clinician workload. METHODS An intelligent algorithm was designed and trained to design guide plates, with a safety-ensuring penalty factor added. A single-center retrospective cohort study was conducted to test the algorithm among patients who had visited our hospital between 2020 and 2021 for mandibular angle ostectomy. We included patients diagnosed with mandibular angle hypertrophy and excluded those with other facial malformations. The primary predictor was the guide plate design method: AI algorithm vs. experienced residents. The symmetry of plate-guided ostectomy was the primary outcome; the safety, shape, location, effectiveness, and design duration of the guide plate were also recorded. The independent samples t-test and Pearson's chi-squared test were used, and P-values < 0.05 were considered significant. RESULTS Fifty patients (7 men, 43 women; 27 ± 4 years) were included. The two groups differed significantly in safety score (7.02 vs. 5.25, P < 0.05) and design duration (24.98 vs. 1685.08, P < 0.05). The ostectomy symmetry and the shape, location, and effectiveness of the guide plates did not differ significantly between the two groups. CONCLUSIONS The intelligent algorithm can improve safety and save time in guide plate design while maintaining the other qualities of the guide plates. It has good potential applicability in accurate mandibular angle ostectomy.
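The group comparisons above use the independent samples t-test. A minimal sketch of the Welch (unequal variance) form of that statistic (illustrative only; the raw per-patient scores are not available in the abstract, so the sample data below are hypothetical):

```python
import math

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical safety scores for AI-designed vs. resident-designed plates.
ai = [7.0, 7.2, 6.8, 7.1, 6.9]
resident = [5.2, 5.5, 5.0, 5.3, 5.25]
t, df = welch_t(ai, resident)  # t > 0 since the AI group mean is higher
```

The p-value would then come from the t-distribution with `df` degrees of freedom (e.g. via `scipy.stats`), which is omitted here to keep the sketch dependency-free.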
Affiliation(s)
- Yingjie Yan: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Chaofan Lv: College of Mechanical Engineering, Donghua University, Shanghai 201620, China
- Bingshun Wang: Department of Biostatistics, Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Xiaojin Wang: Department of Biostatistics, Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Wenqing Han: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Mengzhe Sun: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Byeong Seop Kim: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Yan Zhang: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Jinsong Bao: College of Mechanical Engineering, Donghua University, Shanghai 201620, China
- Li Lin: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
- Gang Chai: Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai 200011, China
10. Ruiter AM, Wang Z, Yin Z, Naber WC, Simons J, Blom JT, van Gemert JC, Verschuuren JJGM, Tannemaat MR. Assessing facial weakness in myasthenia gravis with facial recognition software and deep learning. Ann Clin Transl Neurol 2023; 10:1314-1325. PMID: 37292032. PMCID: PMC10424649. DOI: 10.1002/acn3.51823.
Abstract
OBJECTIVE Myasthenia gravis (MG) is an autoimmune disease leading to fatigable muscle weakness; extraocular and bulbar muscles are most commonly affected. We aimed to investigate whether facial weakness can be quantified automatically and used for diagnosis and disease monitoring. METHODS In this cross-sectional study, we analyzed video recordings of 70 MG patients and 69 healthy controls (HC) with two different methods. Facial weakness was first quantified with facial expression recognition software. Subsequently, a deep learning (DL) model was trained for the classification of diagnosis and disease severity using multiple cross-validations on videos of 50 patients and 50 controls. Results were validated using unseen videos of 20 MG patients and 19 HC. RESULTS Expression of anger (p = 0.026), fear (p = 0.003), and happiness (p < 0.001) was significantly decreased in MG compared to HC, with specific patterns of decreased facial movement detectable in each emotion. For the DL model, diagnosis: area under the receiver operating characteristic curve (AUC) 0.75 (95% CI 0.65-0.85), sensitivity 0.76, specificity 0.76, accuracy 76%; disease severity: AUC 0.75 (95% CI 0.60-0.90), sensitivity 0.93, specificity 0.63, accuracy 80%. On validation, diagnosis: AUC 0.82 (95% CI 0.67-0.97), sensitivity 1.0, specificity 0.74, accuracy 87%; disease severity: AUC 0.88 (95% CI 0.67-1.0), sensitivity 1.0, specificity 0.86, accuracy 94%. INTERPRETATION Patterns of facial weakness can be detected with facial recognition software. Second, this study delivers a proof of concept for a DL model that can distinguish MG from HC and classify disease severity.
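The AUC, sensitivity, specificity, and accuracy figures above are standard binary-classification metrics. A compact sketch of how they are computed from true labels and model scores (pure Python; the toy labels and scores below are hypothetical, not the study's data):

```python
def binary_metrics(y_true, scores, threshold=0.5):
    """Sensitivity, specificity, and accuracy at a threshold,
    plus AUC computed by pairwise ranking of positives vs. negatives."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, y_true))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, y_true))
    sens = tp / (tp + fn)            # true positive rate
    spec = tn / (tn + fp)            # true negative rate
    acc = (tp + tn) / len(y_true)
    # AUC = probability that a random positive scores higher than a
    # random negative (ties count half).
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return sens, spec, acc, auc
```

For instance, with labels [1, 1, 1, 0, 0] and scores [0.9, 0.8, 0.4, 0.3, 0.2] at threshold 0.5, one positive is missed (sensitivity 2/3) while the ranking is perfect (AUC 1.0), illustrating why threshold metrics and AUC can diverge.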
Affiliation(s)
- Annabel M. Ruiter: Department of Neurology, Leiden University Medical Center, Leiden, the Netherlands
- Ziqi Wang: Vision Lab, Delft University of Technology, Delft, the Netherlands
- Zhao Yin: Vision Lab, Delft University of Technology, Delft, the Netherlands
- Willemijn C. Naber: Department of Neurology, Leiden University Medical Center, Leiden, the Netherlands
- Jerrel Simons: Department of Neurology, Leiden University Medical Center, Leiden, the Netherlands
- Jurre T. Blom: Medical Illustrator at www.jurreblom.nl, Apeldoorn, the Netherlands
11. Woo SH, Kim YC, Kim J, Kwon S, Oh TS. Artificial intelligence-based numerical analysis of the quality of facial reanimation: A comparative retrospective cohort study between one-stage dual innervation and single innervation. J Craniomaxillofac Surg 2023:S1010-5182(23)00095-1. PMID: 37353406. DOI: 10.1016/j.jcms.2023.05.012.
Abstract
This study aimed to investigate the difference in facial reanimation surgery using functional gracilis muscle transfer between use of the masseteric nerve alone and its combined use with a cross-face nerve graft (CFNG), which has not been explored before. A novel analysis method based on artificial intelligence (AI) was employed to compare the outcomes of the two approaches. Using AI, 3-dimensional facial landmarks were extracted from 2-dimensional photographs, and distance and angular symmetry scores were calculated. The patients were divided into two groups: Group 1 underwent one-stage dual innervation with CFNG and the masseteric nerve, and Group 2 received the masseteric nerve only. Symmetry scores were obtained before and 1 year after surgery to assess the degree of change. Of the 35 patients, Group 1 included 13 and Group 2 included 22. In the resting state, the change in the symmetry score of the mouth corner improved significantly more in Group 1 for both distance symmetry (2.55 ± 2.94 vs. 0.52 ± 2.75 for Groups 1 and 2, respectively; p = 0.048) and angle symmetry (1.21 ± 1.43 vs. 0.02 ± 0.22; p = 0.001), indicating a more symmetric pattern after surgery. In the smile state, only angle symmetry improved more in Group 1 (3.20 ± 2.38 vs. 1.49 ± 2.22; p = 0.041). Within the limitations of the study, it seems that this new analysis method enabled a more accurate numerical symmetry score to be obtained, and that while the degree of mouth corner excursion was sufficient with the masseteric nerve alone, an accompanying CFNG led to further improvement in symmetry in the resting state.
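A landmark-based distance symmetry score of the kind described can be sketched as follows: mirror each right-side landmark across the facial midline and measure how far it lands from its left-side counterpart. This is an assumed formulation for illustration only; the abstract does not specify the authors' exact scoring, and the landmark names and coordinates below are hypothetical.

```python
import math

def distance_symmetry_score(landmarks, midline_x=0.0):
    """Mean distance between each left landmark and its mirrored right
    counterpart; 0 means perfect left-right symmetry about the midline.

    `landmarks` maps a feature name to a (left_point, right_point) pair
    of (x, y) coordinates.
    """
    total = 0.0
    for left, right in landmarks.values():
        mirrored_right = (2 * midline_x - right[0], right[1])
        total += math.dist(left, mirrored_right)
    return total / len(landmarks)

# Hypothetical mouth-corner and eye-corner landmarks (midline at x = 0).
symmetric_face = {
    "mouth_corner": ((-2.0, -1.0), (2.0, -1.0)),
    "eye_corner": ((-1.5, 1.0), (1.5, 1.0)),
}
drooped_face = {
    "mouth_corner": ((-2.0, -1.4), (2.0, -1.0)),  # left mouth corner droops
    "eye_corner": ((-1.5, 1.0), (1.5, 1.0)),
}
```

A perfectly mirrored face scores 0, and the score grows as paired landmarks (such as the mouth corners) diverge, which is the sense in which a smaller post-minus-pre change reflects less improvement in symmetry.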
Affiliation(s)
- Soo Hyun Woo: Department of Plastic Surgery, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, South Korea
- Young Chul Kim: Department of Plastic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Junsik Kim: Department of Electronic Engineering, Kwangwoon University, Seoul, South Korea
- Soonchul Kwon: Graduate School of Smart Convergence, Kwangwoon University, Seoul, South Korea
- Tae Suk Oh: Department of Plastic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
12. Wawer Matos PA, Reimer RP, Rokohl AC, Caldeira L, Heindl LM, Große Hokamp N. Artificial Intelligence in Ophthalmology - Status Quo and Future Perspectives. Semin Ophthalmol 2023; 38:226-237. PMID: 36356300. DOI: 10.1080/08820538.2022.2139625.
Abstract
Artificial intelligence (AI) is an emerging technology in healthcare and holds the potential to disrupt many areas of medical care. In particular, disciplines using medical imaging modalities, including radiology and ophthalmology, are already confronted with a wide variety of AI implications. In ophthalmologic research, AI has demonstrated promising results, albeit limited to specific diseases and imaging tools. Yet implementation of AI in clinical routine is not widespread, owing to availability and to heterogeneity in imaging techniques and AI methods. To describe the status quo, this narrative review provides a brief introduction to AI ("what the ophthalmologist needs to know"), followed by an overview of different AI-based applications in ophthalmology and a discussion of future challenges. Abbreviations: AI, artificial intelligence; AMD, age-related macular degeneration; AS-OCT, anterior segment OCT; CACS, coronary artery calcium score; CNN, convolutional neural network; DCNN, deep convolutional neural network; DR, diabetic retinopathy; ML, machine learning; OCT, optical coherence tomography; ROP, retinopathy of prematurity; SVM, support vector machine; TAO, thyroid-associated ophthalmopathy.
Affiliation(s)
- Robert P Reimer
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Alexander C Rokohl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Liliana Caldeira
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
- Ludwig M Heindl
- Department of Ophthalmology, University Hospital of Cologne, Köln, Germany
- Nils Große Hokamp
- Department of Diagnostic and Interventional Radiology, University Hospital of Cologne, Köln, Germany
13
Hebel NSD, Boonipat T, Lin J, Shapiro D, Bite U. Artificial Intelligence in Surgical Evaluation: A Study of Facial Rejuvenation Techniques. Aesthet Surg J Open Forum 2023; 5:ojad032. [PMID: 37228317 PMCID: PMC10205049 DOI: 10.1093/asjof/ojad032] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023] Open
Abstract
Background Aesthetic facial surgeries have historically relied on subjective analysis to determine success, which limits objective comparison of surgical outcomes. Objectives This case study demonstrates the use of artificial intelligence software to objectively analyze facial rejuvenation techniques, with the aim of reducing subjective bias. Methods All patients who underwent facial rejuvenation surgery with concomitant procedures from 2015 to 2017 were retrospectively included (n = 32) and categorized into three groups: Group A, superficial musculoaponeurotic system (SMAS) plication facelift (n = 10); Group B, SMASectomy facelift (n = 7); and Group C, high SMAS facelift (n = 15). Preoperative and postoperative (average >3 months) images in neutral repose were analyzed with artificial intelligence for changes in emotion and facial action units. Results Postoperatively, Group A showed a 0.84% decrease in happiness and a 6.87% decrease in anger (P >> .1). Group B showed a 0.77% increase in happiness and a 1.91% increase in anger (P >> .1). Neither Group A nor Group B showed discernible action unit patterns. In Group C, the average intensity of the lip corner puller action unit increased from 0% to 18.7%. This correlated with an average increase in detected happiness from 1.03% to 13.17% (P = .008); conversely, average detected anger decreased from 14.66% to 0.63% (P = .032). Conclusions This study provides the first proof of concept for using machine learning software to objectively assess aesthetic surgical outcomes in facial rejuvenation. Because of patient heterogeneity, the study does not claim superiority of any one technique but serves as a conceptual foundation for future investigation. Level of Evidence 4
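The group-level comparison this abstract describes, averaging the change in a detected emotion probability across patients, can be sketched as follows. This is a minimal illustration, not the authors' software: the per-patient "happiness" values are hypothetical stand-ins for the output of an emotion-recognition model run on standardized photographs.

```python
# Illustrative sketch: mean pre-to-post change in a detected emotion
# probability across a patient group. Values below are hypothetical.

def mean_change(pre, post):
    """Mean (post - pre) difference, in percentage points, across patients."""
    assert len(pre) == len(post)
    diffs = [b - a for a, b in zip(pre, post)]
    return sum(diffs) / len(diffs)

# Hypothetical per-patient "happiness" probabilities (%) in neutral repose.
pre_happiness = [0.5, 1.2, 0.9, 1.5]
post_happiness = [12.0, 14.5, 11.8, 14.4]

delta = mean_change(pre_happiness, post_happiness)
print(f"Mean change in detected happiness: {delta:+.2f} percentage points")
```

A real analysis would pair this summary statistic with a significance test, as the study does for its reported P values.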
Affiliation(s)
- Uldis Bite
- Corresponding Author: Dr Uldis Bite, Division of Plastic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA. E-mail:
14
Facial Recognition Software Use on Surgically Altered Faces: A Systematic Review. J Craniofac Surg 2022; 33:2443-2446. [PMID: 35968973 DOI: 10.1097/scs.0000000000008817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 05/02/2022] [Indexed: 11/26/2022] Open
Abstract
OBJECTIVE Facial recognition software (FRS) is becoming pervasive in society for commercial use, security systems, and entertainment. Surgical alteration of facial appearance poses a challenge to these algorithms, and several methods are being studied to overcome this issue. This study systematically reviews methods used for facial recognition of surgically altered faces. MATERIALS AND METHODS A systematic review was performed by searching the PubMed and Institute of Electrical and Electronics Engineers (IEEE) databases for studies addressing FRS and surgery. Initial review identified 178 manuscripts relating to FRS and surgery, which were divided into multiple subgroups; the decision was made to focus on recognition of surgically altered faces. RESULTS Eligible studies were English-language reports on FRS of surgically altered faces; 39 papers were included. Surgical procedures ranged from those affecting the skin surface, such as skin peeling, to those altering facial features, such as rhinoplasty, mentoplasty, malar augmentation, brow lift, facelift, orthognathic surgery, facial reanimation, and facial feminization. Recognition methods were classified as appearance-based, feature-based, or texture-based. Descriptive and experimental protocols differed in their reported outcomes and controls. Accuracy ranged from 19.1% to 85.35% across analysis methods. CONCLUSIONS Knowledge of the limitations and advantages of FRS can aid in counseling patients regarding personal technology use and security, and can quell fears that surgery might be used to evade authorities. Surgical knowledge can, in turn, be used to improve FRS algorithms for postsurgical recognition.
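The feature-based matching idea underlying many of the reviewed methods can be sketched briefly: two images are declared the same identity when the cosine similarity of their feature embeddings exceeds a decision threshold, and surgery challenges recognition by shifting the post-operative embedding. This is a generic illustration, not any reviewed algorithm; the embeddings and the 0.8 threshold are hypothetical, and real systems derive embeddings from a trained network.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_identity(emb1, emb2, threshold=0.8):
    """Match decision: similarity above the threshold means same person."""
    return cosine_similarity(emb1, emb2) >= threshold

pre_op = [0.9, 0.1, 0.4]    # embedding before surgery (hypothetical)
post_op = [0.8, 0.3, 0.5]   # embedding after rhinoplasty (hypothetical)

print(same_identity(pre_op, post_op))  # prints True: still recognized
```

The accuracy range reported in the review (19.1% to 85.35%) reflects, in part, how far different procedures move an embedding relative to such a threshold.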
15
Review on Facial-Recognition-Based Applications in Disease Diagnosis. Bioengineering (Basel) 2022; 9:bioengineering9070273. [PMID: 35877324 PMCID: PMC9311612 DOI: 10.3390/bioengineering9070273] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 06/13/2022] [Accepted: 06/20/2022] [Indexed: 01/19/2023] Open
Abstract
Diseases manifest not only as internal structural and functional abnormalities but also as facial characteristics and appearance deformities. Specific facial phenotypes are potential diagnostic markers, especially for endocrine and metabolic syndromes, genetic disorders, and facial neuromuscular diseases. Facial recognition (FR) technology has been under development for more than half a century, but research on automated identification in clinical medicine has exploded only in the last decade. Artificial-intelligence-based FR has shown superior performance in the diagnosis of disease. This interdisciplinary field holds promise for optimizing the screening and diagnosis process and for assisting clinical evaluation and decision-making. However, only a few instances have been translated into practical use, and an integrative overview with future perspectives is needed. This review focuses on the leading edge of the technology and its applications across a variety of diseases, and discusses implications for further exploration.
16
Perceived Age and Attractiveness Using Facial Recognition Software in Rhinoplasty Patients. J Craniofac Surg 2022; 33:1540-1544. [DOI: 10.1097/scs.0000000000008625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Accepted: 02/19/2022] [Indexed: 11/25/2022] Open
17
Using Artificial Intelligence to Measure Facial Expression following Facial Reanimation Surgery. Plast Reconstr Surg 2022; 149:593e-594e. [PMID: 35089270 DOI: 10.1097/prs.0000000000008866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
18
Reply: Using Artificial Intelligence to Measure Facial Expression following Facial Reanimation Surgery. Plast Reconstr Surg 2022; 149:594e-595e. [PMID: 35089289 DOI: 10.1097/prs.0000000000008867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
19
Takiddin A, Shaqfeh M, Boyaci O, Serpedin E, Stotland MA. Toward a Universal Measure of Facial Difference Using Two Novel Machine Learning Models. Plast Reconstr Surg Glob Open 2022; 10:e4034. [PMID: 35070595 PMCID: PMC8769118 DOI: 10.1097/gox.0000000000004034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2021] [Accepted: 11/09/2021] [Indexed: 11/26/2022]
Abstract
A sensitive, objective, and universally accepted method of measuring facial deformity does not currently exist. Two distinct machine learning methods are described here that produce numerical scores reflecting the level of deformity of a wide variety of facial conditions. METHODS The first technique utilizes an object detector based on a cascade function of Haar features. The model was trained using a dataset of 200,000 normal faces, as well as a collection of images devoid of faces. With the model trained to detect normal faces, the face detector confidence score was shown to function as a reliable gauge of facial abnormality. The second technique is based on a deep learning architecture, a convolutional autoencoder trained with the same rich dataset of normal faces. Because the convolutional autoencoder regenerates images disposed toward their training dataset (i.e., normal faces), its reconstruction error was utilized as an indicator of facial abnormality. Scores generated by both methods were compared with human ratings obtained from a survey of 80 subjects evaluating 60 images depicting a range of facial deformities [rated from 1 (abnormal) to 7 (normal)]. RESULTS The machine scores were highly correlated with the average human score, with an overall Pearson's correlation coefficient exceeding 0.96 (P < 0.00001). Both methods were computationally efficient, reporting results within 3 seconds. CONCLUSIONS These models show promise for adaptation into a clinically accessible handheld tool. It is anticipated that ongoing development of this technology will facilitate multicenter collaboration and comparison of outcomes between conditions, techniques, operators, and institutions.
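The scoring step of the second method, reconstruction error as an abnormality indicator, can be sketched in miniature. An autoencoder trained only on normal faces reconstructs normal inputs well and abnormal inputs poorly, so the mean squared reconstruction error rises with abnormality. This is a toy illustration, not the authors' model: the 4-value "images" and their "reconstructions" are hypothetical stand-ins for face images passed through a trained convolutional autoencoder.

```python
# Illustrative sketch: mean squared error between an input and its
# autoencoder reconstruction, used as an abnormality score.

def reconstruction_error(original, reconstructed):
    """Mean squared error between an image and its reconstruction."""
    n = len(original)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n

# Hypothetical pixel vectors (real inputs would be full face images).
normal_face = [0.2, 0.5, 0.5, 0.8]
normal_recon = [0.21, 0.49, 0.52, 0.79]   # faithful reconstruction: low error
abnormal_face = [0.9, 0.1, 0.9, 0.1]
abnormal_recon = [0.4, 0.5, 0.5, 0.4]     # poor reconstruction: high error

# The model trained on normal faces scores the abnormal input higher.
assert reconstruction_error(normal_face, normal_recon) < \
       reconstruction_error(abnormal_face, abnormal_recon)
```

The first method inverts the same logic: a detector trained on normal faces reports lower confidence on abnormal ones, so low confidence, like high reconstruction error, signals deformity.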
Affiliation(s)
- Abdulrahman Takiddin
- From the Electrical and Computer Engineering Department, Texas A&M University, College Station, Tex
- Mohammad Shaqfeh
- Electrical and Computer Engineering Department, Texas A&M University, Doha, Qatar
- Osman Boyaci
- From the Electrical and Computer Engineering Department, Texas A&M University, College Station, Tex
- Erchin Serpedin
- From the Electrical and Computer Engineering Department, Texas A&M University, College Station, Tex
- Mitchell A. Stotland
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, Doha, Qatar
- Weill Cornell Medical College, Doha, Qatar
20
Using Artificial Intelligence to Measure Facial Expression following Facial Reanimation Surgery. Plast Reconstr Surg 2021; 149:343e-345e. [PMID: 34965218 DOI: 10.1097/prs.0000000000008756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
21
Seminal Studies in Facial Reanimation Surgery: Consensus and Controversies in the Top 50 Most Cited Articles. J Craniofac Surg 2021; 33:1507-1513. [PMID: 34930875 DOI: 10.1097/scs.0000000000008436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 11/25/2021] [Indexed: 11/25/2022] Open
Abstract
Facial paralysis can impair one's ability to form facial expressions congruent with internal emotion, hindering communication and the cognitive processing of emotional experience. Facial reanimation surgery, which aims to restore full facial expressivity, is a relatively recent undertaking that is still evolving. Due in large part to the techniques, refinements, and clinical outcomes published in the scientific literature, consensus on best practice is gradually emerging, although controversies remain. Taking stock of how the discipline reached its current state can help delineate areas of agreement and debate and more clearly reveal a path forward. To do this, the authors analyzed the 50 seminal publications pertaining to facial reanimation surgery. For longstanding cases, free gracilis transfer emerges as the clear muscle of choice, but nerve selection remains controversial: prevailing philosophies advocate either cross-facial nerve grafts (with or without the support of an ipsilateral motor donor) or an ipsilateral motor donor alone, of which the hypoglossal nerve and the nerve to masseter predominate. An alternative orthodoxy has refined the approach popularized by Gillies in 1934 and does not require the deployment of microsurgical principles. Although this citation analysis does not tell the whole story, surgeons with an interest in facial reanimation will find it a good place to start.
22
Facial Recognition Neural Networks Confirm Success of Facial Feminization Surgery. Plast Reconstr Surg 2021; 147:354e-355e. [PMID: 33177465 DOI: 10.1097/prs.0000000000007562] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]