1. Li Z, Zhou L, Bin X, Tan S, Tan Z, Tang A. Utility of deep learning for the diagnosis of cochlear malformation on temporal bone CT. Jpn J Radiol 2024;42:261-267. PMID: 37812304. DOI: 10.1007/s11604-023-01494-z.
Abstract
OBJECTIVE Diagnosis of cochlear malformation on temporal bone CT images is often difficult. Our aim was to assess the utility of deep learning analysis in diagnosing cochlear malformation on temporal bone CT images. METHODS A total of 654 images from 165 temporal bone CTs were divided into a training set (n = 534) and a testing set (n = 120). A target region that includes the area of the cochlea was extracted to create a diagnostic model. Four models were used: ResNet10, ResNet50, SE-ResNet50, and DenseNet121. The testing data set was subsequently analyzed using these models and by four doctors. RESULTS The areas under the curve were 0.91, 0.94, 0.93, and 0.73 for ResNet10, ResNet50, SE-ResNet50, and DenseNet121, respectively. The accuracy of ResNet10, ResNet50, and SE-ResNet50 was better than that of the chief physician. CONCLUSIONS Deep learning shows promise for the clinical application of artificial intelligence in the diagnosis of cochlear malformation based on CT images.
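The AUC values above measure how well each network's output probabilities rank malformed against normal cochleae. As a minimal, self-contained illustration (with toy labels and scores, not the study's data), the AUC can be computed directly from its probabilistic definition:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case is scored higher than a randomly chosen negative case
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = malformed cochlea, 0 = normal (illustrative scores only).
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
toy_auc = auc(y_true, y_score)  # 8 of 9 positive/negative pairs ranked correctly
```

An AUC of 0.5 corresponds to chance-level ranking, 1.0 to perfect separation of the two classes.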
2. Ren LJ, Luo F, Yang ZW, Chen LL, Wang XY, Li CL, Xie YZ, Wang JM, Zhang TY, Wang S, Fu YY. A publicly available newborn ear shape dataset for medical diagnosis of auricular deformities. Sci Data 2024;11:13. PMID: 38167545. PMCID: PMC10762036. DOI: 10.1038/s41597-023-02834-4.
Abstract
Early and accurate diagnosis of ear deformities in newborns is crucial for effective non-surgical correction treatment, since these commonly seen ear anomalies affect aesthetics and can cause psychological problems if left untreated. Because of the rich biometric features embedded in the ear shape, it is not easy even for experienced physicians to diagnose auricular deformities in newborns and classify their sub-types. Machine learning has already been introduced to analyze auricular shape; however, there are few publicly available datasets of ear images from newborns. We released a dataset that contains quality-controlled photos of 3,852 ears from 1,926 newborns. The dataset also contains a medical diagnosis of each ear shape and the health data of each newborn and its mother. Our aim is to provide a freely accessible dataset to facilitate research related to ear anatomy, such as AI-aided detection and classification of auricular deformities and medical risk analysis.
3. TerKonda SP, TerKonda AA, Sacks JM, Kinney BM, Gurtner GC, Nachbar JM, Reddy SK, Jeffers LL. Artificial Intelligence: Singularity Approaches. Plast Reconstr Surg 2024;153:204e-217e. PMID: 37075274. DOI: 10.1097/prs.0000000000010572.
Abstract
SUMMARY Artificial intelligence (AI) has been a disruptive technology within health care, from the development of simple care algorithms to complex deep-learning models. AI has the potential to reduce the burden of administrative tasks, advance clinical decision-making, and improve patient outcomes. Unlocking the full potential of AI requires the analysis of vast quantities of clinical information. Although AI holds tremendous promise, widespread adoption within plastic surgery remains limited. Understanding the basics is essential for plastic surgeons to evaluate the potential uses of AI. This review provides an introduction to AI, including its history, key concepts, applications in plastic surgery, and future implications.
4. Sayadi JJ, Arora JS, Chattopadhyay A, Hopkins E, Quiter A, Khosla RK. A Retrospective Review of Outcomes and Complications after Infant Ear Molding at a Single Institution. Plast Reconstr Surg Glob Open 2023;11:e5133. PMID: 37636327. PMCID: PMC10448938. DOI: 10.1097/gox.0000000000005133.
Abstract
Background The purpose of this study was to evaluate outcomes and complications associated with infant ear molding at a single institution. Methods We conducted a retrospective chart review of all infants who underwent ear molding using the EarWell Infant Ear Correction System with pediatric plastic surgery from October 2010 to March 2021. Types of ear anomalies, age at initiation, duration of treatment, gaps in treatment, comorbidities, and complications were extracted for included patients. The primary outcomes assessed were degree of ear anomaly correction and incidence of skin complications. Parents were also sent a questionnaire regarding their long-term satisfaction with the ear molding treatment process. Results A total of 184 ears of 114 patients meeting inclusion criteria were treated during the study period. Mean age at treatment initiation was 21 days, and average duration of treatment was 40 days. Helical rim deformities (N = 50 ears) and lop ear (N = 40 ears) were the most common anomalies. A total of 181 ears (98.4%) achieved either a complete (N = 125 ears, 67.9%) or partial correction (N = 56 ears, 30.4%). The most common complications were eczematous dermatitis (N = 27 occurrences among 25 ears, 13.6%) and pressure ulcers (N = 23 occurrences among 21 ears, 12.5%). Infants who experienced a complication were 3.36 times more likely to achieve partial relative to complete correction (P < 0.001; 95% confidence interval 1.66-6.81). Conclusion Ear molding is an effective treatment strategy for infant ear anomalies, with most patients achieving complete correction.
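The "3.36 times more likely (95% CI 1.66-6.81)" figure is the form of estimate produced by an odds ratio on a 2×2 table, with the interval from the usual log-odds-ratio normal approximation. A sketch with hypothetical counts (not the study's data) shows the computation:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and z-level confidence interval for a 2x2 table
    [[a, b], [c, d]], using the normal approximation on log(OR)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's data):
# rows = complication yes/no, columns = partial/complete correction.
or_, lo, hi = odds_ratio_ci(20, 15, 36, 91)   # OR ≈ 3.37, CI ≈ (1.56, 7.30)
```

The interval is asymmetric around the point estimate because it is symmetric on the log scale.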
5. Hsu SY, Chen LW, Huang RW, Tsai TY, Hung SY, Cheong DCF, Lu JCY, Chang TNJ, Huang JJ, Tsao CK, Lin CH, Chuang DCC, Wei FC, Kao HK. Quantization of extraoral free flap monitoring for venous congestion with deep learning integrated iOS applications on smartphones: a diagnostic study. Int J Surg 2023;109:1584-1593. PMID: 37055021. PMCID: PMC10389505. DOI: 10.1097/js9.0000000000000391.
Abstract
BACKGROUND Free flap monitoring is essential for post-microsurgical management and outcomes but traditionally relies on human observers; the process is subjective and qualitative and imposes a heavy staffing burden. To scientifically monitor and quantify the condition of free flaps in a clinical scenario, we developed and validated a deep learning (DL) model integrated into a clinical application. MATERIALS AND METHODS Patients from a single microsurgical intensive care unit between 1 April 2021 and 31 March 2022 were retrospectively analyzed for DL model development, validation, clinical transition, and quantification of free flap monitoring. An iOS application was developed that predicts the probability of flap congestion based on computer vision; the application calculates a probability distribution indicating flap congestion risk. Accuracy, discrimination, and calibration tests were used to evaluate model performance. RESULTS From a total of 1761 photographs of 642 patients, 122 patients were included during the clinical application period. Development (328 photographs), external validation (512 photographs), and clinical application (921 photographs) cohorts were assigned to the corresponding time periods. The DL model achieved 92.2% training and 92.3% validation accuracy. Discrimination (area under the receiver operating characteristic curve) was 0.99 (95% CI: 0.98-1.0) on internal validation and 0.98 (95% CI: 0.97-0.99) on external validation. During the clinical application period, the application demonstrated 95.3% accuracy, 95.2% sensitivity, and 95.3% specificity. The predicted probability of congestion was significantly higher in the congested group than in the normal group (78.3 (17.1)% versus 13.2 (18.1)%; P < 0.001). CONCLUSION The DL-integrated smartphone application can accurately reflect and quantify flap condition; it is a convenient, accurate, and economical device that can improve patient safety and management and assist in monitoring flap physiology.
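The reported accuracy, sensitivity, and specificity are standard ratios over confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's 921 clinical photographs):

```python
def confusion_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from raw confusion-matrix
    counts for a binary congested/normal classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # congested flaps correctly flagged
    specificity = tn / (tn + fp)   # normal flaps correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only.
acc, sens, spec = confusion_metrics(tp=40, fn=2, tn=181, fp=9)
```

Sensitivity is the clinically critical number here: a missed congested flap (false negative) risks flap loss, while a false positive only triggers a bedside re-check.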
6. Plonkowski AT, Breakey RWF, Read JCA, Sainsbury DCG. The Use of Eye-tracking Technology in Cleft Lip: A Literature Review. Plast Reconstr Surg Glob Open 2023;11:e4980. PMID: 37360237. PMCID: PMC10287128. DOI: 10.1097/gox.0000000000004980.
Abstract
Eye-tracking has become an increasingly popular research tool within the field of cleft lip and/or palate (CL+/-P). Despite this, there are no standardized protocols for conducting such research. Our objective was to conduct a literature review of the methodology and outcomes of previous publications using eye-tracking in CL+/-P. Methods The PubMed, Google Scholar, and Cochrane databases were searched to identify all articles published up to August 2022. All articles were screened by two independent reviewers. Inclusion criteria were use of eye-tracking, image stimuli of CL+/-P, and outcome reporting using areas of interest (AOIs). Exclusion criteria were non-English studies, conference articles, and image stimuli of conditions other than CL+/-P. Results Forty articles were identified, and 16 met the inclusion/exclusion criteria. Thirteen studies displayed only images of individuals following cleft lip surgery, with three displaying only unrepaired cleft lips. Significant variation was found in study design, particularly in the AOIs used to report gaze outcomes. Ten studies asked participants to provide an outcome score alongside eye-tracking; however, only four compared outcome data to eye-tracking data. This review is primarily limited by the small number of publications in this area. Conclusions Eye-tracking can be a powerful tool in evaluating appearance outcomes following CL+/-P surgery. It is currently limited by the lack of standardized research methodology and varied study design. Before future work, a replicable protocol should be developed to maximize the potential of this technology.
7. Mao S, Wu X, Hou M, Mei L, Feng Y, Song J. Research and application progress in deep learning in otology. Zhong Nan Da Xue Xue Bao Yi Xue Ban 2023;48:463-471. PMID: 37164930. PMCID: PMC10930069. DOI: 10.11817/j.issn.1672-7347.2023.210588.
Abstract
With the optimization of deep learning algorithms and the accumulation of medical big data, deep learning technology has been widely applied in research across various fields of otology in recent years. At present, research on deep learning in otology combines a variety of data, such as endoscopy, temporal bone images, audiograms, and intraoperative images, and involves diagnosis of otologic diseases (including auricular malformations, external auditory canal diseases, middle ear diseases, and inner ear diseases), treatment (guiding medication and surgical planning), and prognosis prediction (involving hearing recovery and speech learning). According to the type of data and the purpose of the study (disease diagnosis, treatment, or prognosis), different neural network models can be chosen to exploit their respective algorithmic strengths, making deep learning a useful aid in treating otologic diseases. Deep learning has promising prospects in the clinical diagnosis and treatment of otologic diseases and can help advance the integration of deep learning with intelligent medicine.
8. Bécue A, Champod C. Interpol review of fingermarks and other body impressions (2019-2022). Forensic Sci Int Synerg 2022;6:100304. PMID: 36636235. PMCID: PMC9830181. DOI: 10.1016/j.fsisyn.2022.100304.
9. Wang D, Chen X, Wu Y, Tang H, Deng P. Artificial intelligence for assessing the severity of microtia via deep convolutional neural networks. Front Surg 2022;9:929110. PMID: 36157410. PMCID: PMC9492961. DOI: 10.3389/fsurg.2022.929110.
Abstract
Background Microtia is a congenital abnormality varying from slight structural abnormalities to the complete absence of the external ear. However, there is no gold standard for assessing the severity of microtia. Objectives The purpose of this study was to develop and test artificial intelligence models to assess the severity of microtia using clinical photographs. Methods A total of 800 ear images were included and randomly divided into training, validation, and test sets. Nine convolutional neural networks (CNNs) were trained to classify the severity of microtia. Evaluation metrics, including accuracy, precision, recall, F1 score, receiver operating characteristic curve, and area under the curve (AUC) values, were used to evaluate the performance of the models. Results Eight CNNs achieved accuracy greater than 0.8. Among them, AlexNet and MobileNet achieved the highest accuracy of 0.9. Except for MnasNet, all CNNs achieved AUC values higher than 0.9 for each grade of microtia. In most CNNs, grade I microtia had the lowest AUC values and the normal ear had the highest. Conclusion CNNs can classify the severity of microtia with high accuracy. Artificial intelligence is expected to provide an objective, automated assessment of the severity of microtia.
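The per-grade precision, recall, and F1 metrics used here are computed class by class in a one-vs-rest fashion and can be macro-averaged across grades. A sketch with toy grade labels (illustrative only, not the study's data):

```python
def per_class_f1(y_true, y_pred):
    """Precision, recall, and F1 for each class, plus the macro-averaged F1."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (prec, rec, f1)
    macro_f1 = sum(f1 for _, _, f1 in scores.values()) / len(classes)
    return scores, macro_f1

# Toy grades: 0 = normal ear, 1-3 = microtia grades I-III (illustrative only).
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]
scores, macro = per_class_f1(y_true, y_pred)
```

Macro-averaging weights every grade equally, which matters when the classes (normal versus each microtia grade) are imbalanced.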
|
10
|
Su R, Song J, Wang Z, Mao S, Mao Y, Wu X, Hou M. Application of high resolution computed tomography image assisted classification model of middle ear diseases based on 3D-convolutional neural network. ZHONG NAN DA XUE XUE BAO. YI XUE BAN = JOURNAL OF CENTRAL SOUTH UNIVERSITY. MEDICAL SCIENCES 2022; 47:1037-1048. [PMID: 36097771 PMCID: PMC10950109 DOI: 10.11817/j.issn.1672-7347.2022.210704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVES Chronic suppurative otitis media (CSOM) and middle ear cholesteatoma (MEC) are the two most common chronic middle ear diseases. Because of their similar clinical manifestations, the two diseases are prone to misdiagnosis and missed diagnosis. High resolution computed tomography (HRCT) can clearly display the fine anatomical structure of the temporal bone and accurately reflect middle ear lesions and their extent, giving it advantages in the differential diagnosis of chronic middle ear diseases. This study aims to develop a deep learning model for automatic information extraction and classification of chronic middle ear diseases based on temporal bone HRCT image data, to improve diagnostic efficiency in clinical practice and reduce missed diagnosis and misdiagnosis. METHODS The clinical records and temporal bone HRCT imaging data of patients with chronic middle ear diseases hospitalized in the Department of Otorhinolaryngology, Xiangya Hospital from January 2018 to October 2020 were retrospectively collected. The medical records were independently reviewed by two experienced otorhinolaryngologists, and a consensus final diagnosis was reached. A total of 499 patients (998 ears) were enrolled and divided into 3 groups: an MEC group (108 ears), a CSOM group (622 ears), and a normal group (268 ears). Gaussian noise with different variances was used to augment the dataset to offset the imbalance in sample numbers between groups, yielding an experimental dataset of 1,806 ears. Of these, 75% (1,355) of the samples were randomly selected for training, 10% (180) for validation, and the remaining 15% (271) for testing and evaluating model performance.
The overall design of the model was a serial structure comprising 3 deep learning models with different functions. The first was a region proposal network that located the middle ear within the whole HRCT image, then cropped and saved that region. The second was an image-matching convolutional neural network (CNN) based on a twin (Siamese) network structure, which searched the cropped images for those matching the key HRCT layers and constructed 3D data blocks. The third, based on 3D-CNN operations, performed the final classification and diagnosis on the constructed 3D data blocks and output the prediction probability. RESULTS The layer-search network based on the twin network structure showed an average AUC of 0.939 across 10 key layers. The classification network based on the 3D-CNN achieved an overall accuracy of 96.5%, an overall recall of 96.4%, and an average AUC of 0.983 over the 3 classes. The recall rates for CSOM and MEC cases were 93.7% and 97.4%, respectively. In subsequent comparison experiments, classical CNNs averaged 79.3% accuracy and 87.6% recall; the precision and recall of the deep learning network constructed in this study were about 17.2% and 8.8% higher, respectively, than those of common CNNs. CONCLUSIONS The proposed deep learning network model can automatically extract 3D data blocks containing middle ear features from patients' temporal bone HRCT data, reducing the overall data size while preserving the relationships between corresponding images, and then use a 3D-CNN to classify and diagnose CSOM and MEC. The model design fits the continuous nature of HRCT data well, and the experimental results show high precision and adaptability, outperforming current common CNN methods.
11. Ear Biometrics Using Deep Learning: A Survey. Appl Comput Intell Soft Comput 2022. DOI: 10.1155/2022/9692690.
Abstract
This paper explores ear biometrics using a mixture of feature extraction techniques and classifies the resulting feature vector using deep learning with a convolutional neural network. The exploration uses ear images taken from 2D facial profiles and facial images. The investigated feature techniques are Zernike moments, local binary patterns, Gabor filters, and Haralick texture moments. The normalised feature vector is used to examine whether deep learning with a convolutional neural network identifies the ear better than other commonly used machine learning techniques: decision tree, naïve Bayes, K-nearest neighbors (KNN), and support vector machine (SVM). The paper shows that combining a bag of feature techniques with a convolutional neural network classifier outperforms standard machine learning techniques, achieving a 92.00% average ear identification rate for both left and right ears.
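Of the feature techniques surveyed, the local binary pattern is simple enough to sketch directly: each pixel is encoded by comparing its 8 neighbours against the centre value. A minimal pure-Python version (illustrative only; real implementations vectorize over whole images and histogram the codes into the feature vector):

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code for pixel (r, c): each
    neighbour >= centre contributes one bit, clockwise from top-left."""
    centre = img[r][c]
    nbrs = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
            img[r][c+1],   img[r+1][c+1], img[r+1][c],
            img[r+1][c-1], img[r][c-1]]
    return sum(1 << i for i, v in enumerate(nbrs) if v >= centre)

# Tiny grayscale patch (values are illustrative).
patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
code = lbp_code(patch, 1, 1)  # bits set where a neighbour >= 50
```

Because the code depends only on orderings of intensities, LBP features are robust to monotonic lighting changes, which is one reason they suit ear texture description.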
12
Basu S, Agarwal R, Srivastava V. Deep discriminative learning model with calibrated attention map for the automated diagnosis of diffuse large B-cell lymphoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103728]
13
Sun P, Wang C, Huang X, Pan B. A novel method to accurately locate the reconstructed auricle. Transl Pediatr 2022; 11:487-494. [PMID: 35558970] [PMCID: PMC9085952] [DOI: 10.21037/tp-21-453]
Abstract
BACKGROUND Congenital microtia is a common congenital disease in children whose cause remains unclear. At present, the main treatment is ear reconstruction. Accurately locating the reconstructed ear on the affected side before surgery is difficult, yet it is key to a successful operation. Our ear reconstruction team has developed a novel method to accurately locate the reconstructed auricle, which has achieved good results in clinical practice. METHODS Thirty patients with unilateral ear reconstruction, who underwent auricle reconstruction using our auricle-positioning method at the Plastic Surgery Hospital of the Chinese Academy of Medical Sciences from January 2020 to July 2021, were enrolled in this study. RESULTS By the Wilcoxon signed-rank test, there was no statistically significant difference between the mean distance from the highest point of the patient's normal ear to the central axis of the nose and the corresponding distance for the reconstructed ear (P>0.05). Likewise, there was no statistically significant difference between the mean distances from the lowest points of the normal and reconstructed ears to the central axis of the nose (P>0.05). The satisfaction rate of patients and their families with the location of the reconstructed auricle was 100%. CONCLUSIONS The novel method employs simple materials, is easy to implement, and its effect is significant. To a certain extent, it solves the difficulty of locating the reconstructed auricle during ear reconstruction. Although the method applies only to patients with unilateral microtia, we recommend it to every plastic surgeon for locating the reconstructed auricle.
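The paired comparison behind the reported P-values can be sketched as follows: a minimal computation of the Wilcoxon signed-rank statistic W+ for normal-ear versus reconstructed-ear distances. The distance values and function name below are illustrative assumptions, not the study's data.

```python
def wilcoxon_w(normal, recon):
    """W+ for the signed-rank test: zero differences are discarded,
    absolute differences are ranked (ties get average ranks), and the
    ranks of the positive differences are summed."""
    diffs = [a - b for a, b in zip(normal, recon) if a != b]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        # Extend j over a run of tied absolute differences.
        while j + 1 < len(ranked) and \
                abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Made-up paired distances (mm) from ear landmark to the nasal axis.
normal = [62.0, 60.5, 61.2, 63.0]
recon  = [61.0, 61.0, 61.2, 62.0]
w_plus = wilcoxon_w(normal, recon)
```

In practice the statistic would be compared against the signed-rank null distribution (e.g. via `scipy.stats.wilcoxon`) to obtain the P-value.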
Affiliation(s)
- Pengfei Sun
  - Department of Auricular Reconstruction, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Changchen Wang
  - Department of Auricular Reconstruction, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Huang
  - Department of Auricular Reconstruction, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Bo Pan
  - Department of Auricular Reconstruction, Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
14
Islam MN, Sulaiman N, Farid FA, Uddin J, Alyami SA, Rashid M, Abdul Majeed APP, Moni MA. Diagnosis of hearing deficiency using EEG based AEP signals: CWT and improved-VGG16 pipeline. PeerJ Comput Sci 2021; 7:e638. [PMID: 34712786] [PMCID: PMC8507488] [DOI: 10.7717/peerj-cs.638]
Abstract
Hearing deficiency is the world's most common sensory impairment and impedes human communication and learning. Early and precise hearing diagnosis using electroencephalography (EEG) is regarded as the optimal strategy for dealing with this issue. Among the wide range of EEG signals, the modality most relevant to hearing-loss diagnosis is the auditory evoked potential (AEP), produced in the brain's cortex in response to an auditory stimulus. This study aims to develop a robust intelligent auditory sensation system that evaluates the functional reliability of hearing from the AEP response using a pre-trained deep learning framework. First, the raw AEP data are transformed into time-frequency images through the wavelet transform. Low-level features are then extracted with a pre-trained network: an improved-VGG16 architecture designed by removing some convolutional layers and adding new layers in the fully connected block. Subsequently, the higher layers of the network are fine-tuned on the labelled time-frequency images. Finally, the method's performance is validated on a reputable publicly available AEP dataset recorded from sixteen subjects while they heard specific auditory stimuli in the left or right ear. The proposed method outperforms state-of-the-art studies, improving classification accuracy from 57.375% to 96.87%, which indicates that the improved-VGG16 architecture can contribute significantly to early hearing-loss diagnosis from AEP responses.
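The first stage of this pipeline, turning a 1-D signal into a time-frequency image, can be sketched with a continuous wavelet transform built from a complex Morlet wavelet. The scales, the test signal, and the function name are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def cwt_scalogram(signal, scales, w0=5.0):
    """|CWT| of `signal` at each scale via convolution with a complex
    Morlet wavelet; returns an array of shape (len(scales), len(signal)),
    i.e. the time-frequency image that would be fed to the CNN."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        # Complex sinusoid under a Gaussian envelope, scaled by s.
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2)
        wavelet /= np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet), mode="same"))
    return out

# One second of a 10 Hz tone sampled at 128 Hz stands in for an AEP epoch.
fs = 128
tt = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 10 * tt)
image = cwt_scalogram(signal, scales=[2, 4, 8, 12])
```

In the paper's pipeline this scalogram (rendered as an RGB image) would be resized to the network's input resolution before the fine-tuned VGG16 stage.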
Affiliation(s)
- Md Nahidul Islam
  - Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Norizam Sulaiman
  - Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Fahmid Al Farid
  - Faculty of Computing and Informatics, Multimedia University, Malaysia
- Jia Uddin
  - Technology Studies Department, Endicott College, Woosong University, Daejeon, South Korea
- Salem A. Alyami
  - Department of Mathematics and Statistics, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Mamunur Rashid
  - Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Anwar P.P. Abdul Majeed
  - Innovative Manufacturing, Mechatronics and Sports Laboratory, Faculty of Manufacturing and Mechatronic Engineering Technology, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
  - Centre for Software Development & Integrated Computing, Universiti Malaysia Pahang, Pekan, Pahang, Malaysia
- Mohammad Ali Moni
  - School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, Australia
15
Hallac RR, Jackson SA, Grant J, Fisher K, Scheiwe S, Wetz E, Perez J, Lee J, Chitta K, Seaward JR, Kane AA. Assessing outcomes of ear molding therapy by health care providers and convolutional neural network. Sci Rep 2021; 11:17875. [PMID: 34504194] [PMCID: PMC8429730] [DOI: 10.1038/s41598-021-97310-7]
Abstract
Ear molding therapy is a nonsurgical technique to correct certain congenital auricular deformities. While the advantages of nonsurgical treatment over otoplasty are well described, few studies have assessed aesthetic outcomes. In this study, we compared assessments of outcomes of ear molding therapy for 283 ears by experienced healthcare providers and by a previously developed deep learning CNN model. 2D photographs of ears were obtained as a standard of care in our onsite photography studio. Physician assistants (PAs) rated the photographs on a 5-point Likert scale ranging from 1 (poor) to 5 (excellent), while the CNN assessment was categorical, classifying each photo as either "normal" or "deformed". On average, the PAs classified 75.6% of photographs as good to excellent outcomes (scores 4 and 5). Similarly, the CNN classified 75.3% of the photographs as normal. The inter-rater agreement between the PAs ranged between 72% and 81%, while there was 69.6% agreement between the machine model and the inter-rater majority of at least two PAs (i.e., when at least two PAs simultaneously gave a score < 4 or ≥ 4). This study shows that noninvasive ear molding therapy has excellent outcomes in general. It also indicates that, with further training and validation, machine learning techniques such as CNNs can accurately mimic provider assessment while removing the subjectivity of human evaluation, making them robust tools for ear deformity identification and outcome evaluation.
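The model-versus-majority comparison described above can be sketched as follows: Likert ratings are binarised at ≥ 4 ("good to excellent"), and the CNN's "normal"/"deformed" call is matched against the majority of raters. The scores and helper names below are made-up illustrations, not study data.

```python
def majority_good(scores):
    """True when at least two raters scored the ear >= 4 (good/excellent)."""
    return sum(s >= 4 for s in scores) >= 2

def agreement(cnn_labels, pa_scores):
    """Fraction of ears where the CNN's 'normal' call matches the
    raters' binarised majority judgement."""
    hits = sum((label == "normal") == majority_good(scores)
               for label, scores in zip(cnn_labels, pa_scores))
    return hits / len(cnn_labels)

# Three hypothetical PA ratings per ear, plus the CNN's label.
pa_scores = [(5, 4, 3), (2, 3, 3), (4, 4, 5), (3, 4, 4)]
cnn_labels = ["normal", "deformed", "normal", "deformed"]
rate = agreement(cnn_labels, pa_scores)  # 3 of the 4 ears agree
```

A study-grade analysis would typically also report a chance-corrected statistic (e.g. Cohen's kappa) alongside this raw percent agreement.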
Affiliation(s)
- Rami R Hallac
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
  - Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, TX, 75235, USA
- Sarah A Jackson
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Jessica Grant
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Kaylyn Fisher
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Sarah Scheiwe
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Elizabeth Wetz
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Jeyna Perez
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Jeon Lee
  - Department of Bioinformatics, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Krishna Chitta
  - Department of Bioinformatics, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- James R Seaward
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
- Alex A Kane
  - Department of Plastic Surgery, UT Southwestern, 5323 Harry Hines Blvd, Dallas, TX, 75390, USA
  - Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, 1935 Medical District Dr., Dallas, TX, 75235, USA
16
Mantelakis A, Assael Y, Sorooshian P, Khajuria A. Machine Learning Demonstrates High Accuracy for Disease Diagnosis and Prognosis in Plastic Surgery. Plast Reconstr Surg Glob Open 2021; 9:e3638. [PMID: 34235035] [PMCID: PMC8225366] [DOI: 10.1097/gox.0000000000003638]
Abstract
INTRODUCTION Machine learning (ML) is a set of models and methods that can detect patterns in vast amounts of data and use this information for decision-making under uncertain conditions. This review explores the current role of the technology in plastic surgery by outlining its applications in clinical practice, reported diagnostic and prognostic accuracies, and proposed future directions for clinical application and research. METHODS EMBASE, MEDLINE, CENTRAL and ClinicalTrials.gov were searched from 1990 to 2020. All clinical studies (including case reports) presenting the diagnostic and prognostic accuracies of machine learning models in the clinical setting of plastic surgery were included. Data collected were clinical indication, model utilised, reported accuracy, and comparison with clinical evaluation. RESULTS The search identified 1181 articles, of which 51 were included in this review. The clinical utility of the algorithms was to assist clinicians in diagnosis prediction (n=22), outcome prediction (n=21) and pre-operative planning (n=8), with mean accuracies of 88.80%, 86.11% and 80.28%, respectively. The most commonly used models were neural networks (n=31), support vector machines (n=13), decision trees/random forests (n=10) and logistic regression (n=9). CONCLUSIONS ML has demonstrated high accuracy in the diagnosis and prognostication of burn patients, congenital or acquired facial deformities, and in cosmetic surgery. No studies compared ML with clinicians' performance. Future research can be enhanced by using larger datasets or data augmentation, employing novel deep learning models, and applying these to other subspecialties of plastic surgery.
Affiliation(s)
- Ankur Khajuria
  - Kellogg College, University of Oxford
  - Department of Surgery and Cancer, Imperial College London, UK
17
A Role for Artificial Intelligence in the Classification of Craniofacial Anomalies. J Craniofac Surg 2021; 32:967-969. [PMID: 33405463] [DOI: 10.1097/scs.0000000000007369]
Abstract
Development of an objective algorithm to diagnose and assess craniofacial conditions has the potential to facilitate early diagnosis, especially for care providers with limited craniofacial expertise. Deep learning, a branch of artificial intelligence, can automatically analyze and categorize disease without human assistance. Convolutional neural networks (CNNs) have excelled at using medical images to automatically classify disease. In this study, the authors developed CNN models to detect and classify non-syndromic craniosynostosis (CS) using 2D images. The authors created an annotated dataset of labeled CS conditions (normal, metopic, sagittal, and unicoronal) using standard clinical photography from the image repository at our center, and extended it with photographic images of children with craniofacial conditions from the internet. A total of 1076 images were used in this study. The authors developed a CNN model using a pre-trained ResNet-50 to classify the data as metopic, sagittal, or unicoronal. The CS ResNet-50 model achieved an overall testing accuracy of 90.6%. The sensitivity and precision were 100% and 100% for metopic, 93.3% and 100% for sagittal, and 66.7% and 100% for unicoronal, respectively. The CNN model performed with promising accuracy. These results support the idea that deep learning has a role in the diagnosis of craniofacial conditions. Using standard 2D clinical photography, such systems can provide automated screening and detection of these conditions. In the future, ML may be applied to the prediction and assessment of surgical outcomes, or serve as an open-source remote diagnostic resource.
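The per-class sensitivity and precision figures quoted above come from a multi-class confusion matrix. Below is a minimal sketch of that computation; the matrix values and the helper name are illustrative, not the study's results.

```python
def per_class_metrics(conf, classes):
    """conf[i][j] = count of true class-i cases predicted as class j.
    Returns {class: (sensitivity, precision)} where sensitivity is
    TP / all-true-class-i (row sum) and precision is TP / all-predicted-
    class-i (column sum)."""
    metrics = {}
    for i, name in enumerate(classes):
        tp = conf[i][i]
        actual = sum(conf[i])                    # row: all true class-i
        predicted = sum(row[i] for row in conf)  # column: all predicted i
        metrics[name] = (tp / actual if actual else 0.0,
                         tp / predicted if predicted else 0.0)
    return metrics

classes = ["metopic", "sagittal", "unicoronal"]
conf = [[10, 0, 0],   # made-up example counts
        [1, 14, 0],
        [2, 0, 4]]
m = per_class_metrics(conf, classes)
```

Note that a class can have 100% precision (no false positives) while its sensitivity stays low, which matches the unicoronal pattern reported above.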
18
Deep learning based prediction of extraction difficulty for mandibular third molars. Sci Rep 2021; 11:1954. [PMID: 33479379] [PMCID: PMC7820274] [DOI: 10.1038/s41598-021-81449-4]
Abstract
This paper proposes a convolutional neural network (CNN)-based deep learning model for predicting the difficulty of extracting a mandibular third molar from a panoramic radiographic image. The dataset includes a total of 1053 mandibular third molars from 600 preoperative panoramic radiographs. Extraction difficulty was evaluated based on the consensus of three human observers using the Pederson difficulty score (PDS). The classification model used a ResNet-34 pretrained on the ImageNet dataset. The correlation between the PDS values determined by the proposed model and those assigned by the experts was calculated. The prediction accuracies for C1 (depth), C2 (ramal relationship), and C3 (angulation) were 78.91%, 82.03%, and 90.23%, respectively. The results confirm that the proposed CNN-based deep learning model can predict the difficulty of extracting a mandibular third molar from a panoramic radiographic image.
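The model-versus-expert correlation mentioned above is typically the Pearson coefficient; a minimal sketch follows. The PDS lists and function name are illustrative assumptions, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length lists:
    covariance of the deviations divided by the product of their norms."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-tooth Pederson scores from the model and the experts.
model_pds  = [3, 4, 5, 6, 7, 5]
expert_pds = [3, 5, 5, 6, 8, 4]
r = pearson_r(model_pds, expert_pds)
```

Since the PDS is ordinal, a rank-based coefficient such as Spearman's rho would be a reasonable alternative to the same end.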
19