1. Li Z, Wang Y, Chen K, Qiang W, Zong X, Ding K, Wang S, Yin S, Jiang J, Chen W. Promoting smartphone-based keratitis screening using meta-learning: A multicenter study. J Biomed Inform 2024; 157:104722. PMID: 39244181. DOI: 10.1016/j.jbi.2024.104722.
Abstract
OBJECTIVE: Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit-lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, which are widely available, have recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs.
METHODS: We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. Accuracy and the macro-average area under the curve (macro-AUC) were used to evaluate model performance.
RESULTS: With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on the three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models.
CONCLUSIONS: CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
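The core step of CNCML, classifying a query photograph by its cosine similarity to per-class centroids built from a handful of labeled smartphone images, can be illustrated with a short sketch. This is an illustrative reconstruction under stated assumptions (a pre-trained embedding network `embed` and PyTorch tensors), not the authors' released code.

```python
# Minimal sketch of cosine nearest-centroid classification in a few-shot setting.
# Assumes `embed` maps a batch of images to feature vectors, e.g., a CNN
# pre-trained on slit-lamp photographs (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def cosine_nearest_centroid(embed, support_images, support_labels, query_images, num_classes):
    """support_labels: (Ns,) integer tensor; returns class probabilities and predictions."""
    with torch.no_grad():
        support_feats = F.normalize(embed(support_images), dim=-1)   # (Ns, D)
        query_feats = F.normalize(embed(query_images), dim=-1)       # (Nq, D)

    # Per-class centroid = mean of the normalized support features, re-normalized.
    centroids = torch.stack([
        F.normalize(support_feats[support_labels == c].mean(dim=0), dim=0)
        for c in range(num_classes)
    ])                                                               # (C, D)

    # Cosine similarity between each query and each centroid; argmax is the prediction.
    logits = query_feats @ centroids.t()                             # (Nq, C)
    return logits.softmax(dim=-1), logits.argmax(dim=-1)
```

With a classifier of this form, "training" on 15 photographs per class amounts to computing the class centroids from those photographs, which is why such a method can adapt quickly to a new camera domain.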
Affiliation(s)
- Zhongwen Li: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Yangyang Wang: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Kuan Chen: Department of Ophthalmology, Cangnan Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Wei Qiang: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Xihang Zong: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Ke Ding: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Shihong Wang: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Shiqi Yin: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Chen: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
2. Li Z, Xie H, Wang Z, Li D, Chen K, Zong X, Qiang W, Wen F, Deng Z, Chen L, Li H, Dong H, Wu P, Sun T, Cheng Y, Yang Y, Xue J, Zheng Q, Jiang J, Chen W. Deep learning for multi-type infectious keratitis diagnosis: A nationwide, cross-sectional, multicenter study. NPJ Digit Med 2024; 7:181. PMID: 38971902. PMCID: PMC11227533. DOI: 10.1038/s41746-024-01174-w.
Abstract
The main cause of corneal blindness worldwide is keratitis, especially the infectious form caused by bacteria, fungi, viruses, and Acanthamoeba. Effective management of infectious keratitis hinges on prompt and precise diagnosis. Nevertheless, the current gold standard, culture of corneal scrapings, remains time-consuming and frequently yields false-negative results. Here, using 23,055 slit-lamp images collected from 12 clinical centers nationwide, this study constructed a clinically feasible deep learning system, DeepIK, that could emulate the diagnostic process of a human expert to identify and differentiate bacterial, fungal, viral, amebic, and noninfectious keratitis. DeepIK exhibited remarkable performance in internal, external, and prospective datasets (all areas under the receiver operating characteristic curves > 0.96) and outperformed three other state-of-the-art algorithms (DenseNet121, InceptionResNetV2, and Swin-Transformer). Our study indicates that DeepIK can assist ophthalmologists in accurately and swiftly identifying various infectious keratitis types from slit-lamp images, thereby facilitating timely and targeted treatment.
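As a concrete illustration of the evaluation summarized above (per-class areas under the ROC curve for a five-way keratitis classifier), a minimal scikit-learn sketch follows. The class names and the shape of the model outputs are assumptions made for illustration; this is not the DeepIK code.

```python
# Per-class one-vs-rest ROC AUC for a five-way classifier (illustrative sketch).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

CLASSES = ["bacterial", "fungal", "viral", "amebic", "noninfectious"]  # assumed order

def per_class_auc(y_true, y_prob):
    """y_true: (N,) integer labels; y_prob: (N, 5) predicted class probabilities."""
    y_bin = label_binarize(y_true, classes=list(range(len(CLASSES))))
    return {name: roc_auc_score(y_bin[:, k], y_prob[:, k])
            for k, name in enumerate(CLASSES)}
```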
Affiliation(s)
- Zhongwen Li: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- He Xie: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zhouqian Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Daoyuan Li: Department of Ophthalmology, The Affiliated Hospital of Guizhou Medical University, Guiyang, 550004, China
- Kuan Chen: Department of Ophthalmology, Cangnan Hospital, Wenzhou Medical University, Wenzhou, 325000, China
- Xihang Zong: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Wei Qiang: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Feng Wen: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Zhihong Deng: Department of Ophthalmology, The Third Xiangya Hospital, Central South University, Changsha, 410013, China
- Limin Chen: Department of Ophthalmology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350000, China
- Huiping Li: Department of Ophthalmology, People's Hospital of Ningxia Hui Autonomous Region, Ningxia Medical University, Yinchuan, 750001, China
- He Dong: The Third People's Hospital of Dalian & Dalian Municipal Eye Hospital, Dalian, 116033, China
- Pengcheng Wu: Department of Ophthalmology, The Second Hospital of Lanzhou University, Lanzhou, 730030, China
- Tao Sun: The Affiliated Eye Hospital of Nanchang University, Jiangxi Clinical Research Center for Ophthalmic Disease, Jiangxi Research Institute of Ophthalmology and Visual Science, Jiangxi Provincial Key Laboratory for Ophthalmology, Nanchang, 330006, China
- Yan Cheng: Xi'an No.1 Hospital, Shaanxi Institute of Ophthalmology, Shaanxi Key Laboratory of Ophthalmology, The First Affiliated Hospital of Northwestern University, Xi'an, 710002, China
- Yanning Yang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, 430060, China
- Jinsong Xue: Affiliated Eye Hospital of Nanjing Medical University, Nanjing, 210029, China
- Qinxiang Zheng: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Wei Chen: Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
3. Yan C, Zhang Z, Zhang G, Liu H, Zhang R, Liu G, Rao J, Yang W, Sun B. An ensemble deep learning diagnostic system for determining Clinical Activity Scores in thyroid-associated ophthalmopathy: integrating multi-view multimodal images from anterior segment slit-lamp photographs and facial images. Front Endocrinol (Lausanne) 2024; 15:1365350. PMID: 38628586. PMCID: PMC11019375. DOI: 10.3389/fendo.2024.1365350.
Abstract
Background: Thyroid-associated ophthalmopathy (TAO) is the most prevalent autoimmune orbital condition, significantly impacting patients' appearance and quality of life. Early and accurate identification of active TAO along with timely treatment can enhance prognosis and reduce the occurrence of severe cases. Although the Clinical Activity Score (CAS) serves as an effective assessment system for TAO, it is susceptible to assessor experience bias. This study aimed to develop an ensemble deep learning system that combines anterior segment slit-lamp photographs of patients with facial images to simulate expert assessment of TAO.
Methods: The study included 156 patients with TAO who underwent detailed diagnosis and treatment at Shanxi Eye Hospital Affiliated to Shanxi Medical University from May 2020 to September 2023. Anterior segment slit-lamp photographs and facial images were used as different modalities and analyzed from multiple perspectives. Two ophthalmologists with more than 10 years of clinical experience independently determined the reference CAS for each image. An ensemble deep learning model based on the residual network was constructed under supervised learning to predict five key inflammatory signs associated with TAO (redness of the eyelids and conjunctiva, and swelling of the eyelids, conjunctiva, and caruncle or plica) and to integrate these objective signs with two subjective symptoms (spontaneous retrobulbar pain and pain on attempted upward or downward gaze) in order to assess TAO activity.
Results: The proposed model achieved 0.906 accuracy, 0.833 specificity, 0.906 precision, 0.906 recall, and a 0.906 F1-score in active TAO diagnosis, outperforming conventional single-view unimodal approaches in predicting CAS and TAO activity signs. Integrating multiple views and modalities, encompassing both anterior segment slit-lamp photographs and facial images, significantly improved the model's prediction accuracy for TAO activity and CAS.
Conclusion: The ensemble multi-view multimodal deep learning system developed in this study can assess the clinical activity of TAO more accurately than traditional methods that rely solely on facial images. This innovative approach is intended to enhance the efficiency of TAO activity assessment, providing a novel means for its comprehensive, early, and precise evaluation.
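A minimal sketch of how the five model-predicted signs and the two patient-reported symptoms could be combined into a CAS-style activity decision is shown below. The item names are paraphrased from the abstract, and the >= 3 cut-off is the conventional 7-item CAS threshold, assumed here; the paper's exact integration rule may differ.

```python
# Hedged sketch: combine five predicted inflammatory signs with two reported
# symptoms into a Clinical Activity Score (assumed >= 3 threshold for activity).
SIGNS = ["eyelid_redness", "conjunctival_redness",
         "eyelid_swelling", "conjunctival_swelling", "caruncle_or_plica_swelling"]
SYMPTOMS = ["spontaneous_retrobulbar_pain", "pain_on_attempted_gaze"]

def clinical_activity_score(predicted_signs, reported_symptoms):
    """Both arguments map item name -> bool (present/absent). Returns (CAS, active?)."""
    score = sum(bool(predicted_signs.get(s, False)) for s in SIGNS) \
          + sum(bool(reported_symptoms.get(s, False)) for s in SYMPTOMS)
    return score, score >= 3
```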
Affiliation(s)
- Chunfang Yan: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China
- Zhaoxia Zhang: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China
- Guanghua Zhang: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China; School of Big Data Intelligent Diagnosis and Treatment Industry, Taiyuan University, Taiyuan, Shanxi, China; College of Computer Science and Technology, Taiyuan Normal University, Taiyuan, Shanxi, China
- Han Liu: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China
- Ruiqi Zhang: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China
- Guiqin Liu: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, Guangdong, China
- Jing Rao: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, Guangdong, China
- Weihua Yang: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, Guangdong, China
- Bin Sun: Shanxi Eye Hospital Affiliated to Shanxi Medical University, Taiyuan, Shanxi, China
4. Lu MC, Deng C, Greenwald MF, Farsiu S, Prajna NV, Nallasamy N, Pawar M, Hart JN, S.R. S, Kochar P, Selvaraj S, Levine H, Amescua G, Sepulveda-Beltran PA, Niziol LM, Woodward MA. Automatic Classification of Slit-Lamp Photographs by Imaging Illumination. Cornea 2024; 43:419-424. PMID: 37267474. PMCID: PMC10689570. DOI: 10.1097/ico.0000000000003318.
Abstract
PURPOSE: The aim of this study was to facilitate deep learning systems in image annotation for diagnosing keratitis type by developing an automated algorithm to classify slit-lamp photographs (SLPs) based on illumination technique.
METHODS: SLPs were collected from patients with corneal ulcers at Kellogg Eye Center, Bascom Palmer Eye Institute, and Aravind Eye Care Systems. Illumination techniques were slit beam, diffuse white light, diffuse blue light with fluorescein, and sclerotic scatter (ScS). Images were manually labeled for illumination and randomly split into training, validation, and testing data sets (70%:15%:15%). Classification algorithms including MobileNetV2, ResNet50, LeNet, AlexNet, multilayer perceptron, and k-nearest neighbors were trained to distinguish the 4 types of illumination techniques. Algorithm performance on the test data set was evaluated with 95% confidence intervals (CIs) for accuracy, F1 score, and area under the receiver operating characteristic curve (AUC-ROC), overall and by class (one-vs-rest).
RESULTS: A total of 12,132 images from 409 patients were analyzed, including 41.8% (n = 5069) slit-beam photographs, 21.2% (2571) diffuse white light, 19.5% (2364) diffuse blue light, and 17.5% (2128) ScS. MobileNetV2 achieved the highest overall F1 score of 97.95% (CI, 97.94%-97.97%), AUC-ROC of 99.83% (99.72%-99.9%), and accuracy of 98.98% (98.97%-98.98%). The F1 scores for slit beam, diffuse white light, diffuse blue light, and ScS were 97.82% (97.80%-97.84%), 96.62% (96.58%-96.66%), 99.88% (99.87%-99.89%), and 97.59% (97.55%-97.62%), respectively. Slit beam and ScS were the 2 most frequently misclassified illumination techniques.
CONCLUSIONS: MobileNetV2 accurately labeled the illumination of SLPs using a large data set of corneal images. Effective, automatic classification of SLPs is key to integrating deep learning systems for clinical decision support into practice workflows.
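The best-performing model here is a standard transfer-learning setup; a minimal sketch of adapting torchvision's MobileNetV2 to the 4 illumination classes is shown below. The training loop, augmentation, and hyperparameters are omitted, and the weight-loading call assumes torchvision >= 0.13; this is not the authors' code.

```python
# Sketch: replace MobileNetV2's ImageNet head with a 4-way illumination classifier
# (slit beam, diffuse white, diffuse blue with fluorescein, sclerotic scatter).
import torch.nn as nn
from torchvision import models

def build_illumination_classifier(num_classes: int = 4) -> nn.Module:
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)  # new head
    return model
```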
Affiliation(s)
- Ming-Chen Lu: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Callie Deng: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Miles F. Greenwald: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
- Nambi Nallasamy: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA; Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Mercy Pawar: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Jenna N. Hart: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Harry Levine: Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Guillermo Amescua: Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Paula A. Sepulveda-Beltran: Bascom Palmer Eye Institute, Department of Ophthalmology, University of Miami Miller School of Medicine, Miami, FL, USA
- Leslie M. Niziol: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA
- Maria A. Woodward: Department of Ophthalmology and Visual Sciences, School of Medicine, University of Michigan, Ann Arbor, MI, USA; Institute for Healthcare Policy and Innovation, University of Michigan, Ann Arbor, MI, USA
5. Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. Biophys Rep 2023; 3:100133. PMID: 38026685. PMCID: PMC10663640. DOI: 10.1016/j.bpr.2023.100133.
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
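The ensemble idea described above can be sketched in a few lines: run several independently trained translation networks on the same input and treat the per-pixel standard deviation of their outputs as the uncertainty signal. The model objects are placeholders; this is an illustration of the technique, not the authors' implementation.

```python
# Ensemble-based uncertainty for image-to-image translation (illustrative sketch).
import torch

def ensemble_translate(models, image):
    """models: list of trained image-to-image networks; image: (1, C, H, W) tensor."""
    with torch.no_grad():
        preds = torch.stack([m(image) for m in models])  # (M, 1, C, H, W)
    mean = preds.mean(dim=0)   # ensemble prediction
    std = preds.std(dim=0)     # per-pixel uncertainty; elevated values flag
    return mean, std           # unreliable or out-of-distribution inputs
```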
Affiliation(s)
- Sara Imboden: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Xuanqing Liu: Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Marie C. Payne: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Cho-Jui Hsieh: Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Neil Y.C. Lin: Department of Mechanical and Aerospace Engineering; Department of Bioengineering; Institute for Quantitative and Computational Biosciences; California NanoSystems Institute; Jonsson Comprehensive Cancer Center; Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
6. Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. PMID: 37385253. PMCID: PMC10394169. DOI: 10.1016/j.xcrm.2023.101095.
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, calling the true value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu: Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou: Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao: Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
7. Li DJ, Huang BL, Peng Y. Comparisons of artificial intelligence algorithms in automatic segmentation for fungal keratitis diagnosis by anterior segment images. Front Neurosci 2023; 17:1195188. PMID: 37360182. PMCID: PMC10285049. DOI: 10.3389/fnins.2023.1195188.
Abstract
Purpose: This study combines automatic segmentation and manual fine-tuning with an early fusion method to provide efficient clinical auxiliary diagnosis of fungal keratitis.
Methods: First, 423 high-quality anterior segment images of keratitis were collected in the Department of Ophthalmology of the Jiangxi Provincial People's Hospital (China). The images were divided into fungal keratitis and non-fungal keratitis by a senior ophthalmologist, and all images were randomly divided into training and testing sets at a ratio of 8:2. Then, two deep learning models were constructed for diagnosing fungal keratitis. Model 1 consisted of a deep learning model composed of the DenseNet121, mobilenet_v2, and squeezenet1_0 models, the least absolute shrinkage and selection operator (LASSO) model, and a multilayer perceptron (MLP) classifier. Model 2 consisted of an automatic segmentation program followed by the deep learning model described above. Finally, the performance of Model 1 and Model 2 was compared.
Results: In the testing set, the accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC) of Model 1 reached 77.65%, 86.05%, 76.19%, 81.42%, and 0.839, respectively. For Model 2, accuracy improved by 6.87%, sensitivity by 4.43%, specificity by 9.52%, F1-score by 7.38%, and AUC by 0.086.
Conclusion: The models in our study could provide efficient clinical auxiliary diagnosis of fungal keratitis.
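A rough sketch of the Model 1 pipeline described above (early fusion of CNN features, LASSO-based feature selection, and an MLP classifier) is given below, assuming the per-image feature vectors have already been extracted from the three backbones; the hyperparameters are illustrative, not those of the paper.

```python
# Early fusion of deep features + LASSO feature selection + MLP classifier (sketch).
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def fuse_features(densenet_feats, mobilenet_feats, squeezenet_feats):
    """Early fusion: concatenate per-image feature vectors from the three CNNs."""
    return np.concatenate([densenet_feats, mobilenet_feats, squeezenet_feats], axis=1)

def build_classifier():
    # Labels are assumed to be 0 (non-fungal) / 1 (fungal) so Lasso can rank features.
    return make_pipeline(
        SelectFromModel(Lasso(alpha=0.01)),                      # LASSO selection
        MLPClassifier(hidden_layer_sizes=(128,), max_iter=500),  # MLP classifier
    )
```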
Affiliation(s)
- Dong-Jin Li: Health Management Center, The First People's Hospital of Jiujiang City, Jiujiang, Jiangxi, China
- Bing-Lin Huang: College of Clinical Medicine, Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
- Yuan Peng: Department of Ophthalmology, The Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
8. Yan Y, Jiang W, Zhou Y, Yu Y, Huang L, Wan S, Zheng H, Tian M, Wu H, Huang L, Wu L, Cheng S, Gao Y, Mao J, Wang Y, Cong Y, Deng Q, Shi X, Yang Z, Miao Q, Zheng B, Wang Y, Yang Y. Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images. Front Med (Lausanne) 2023; 10:1164188. PMID: 37153082. PMCID: PMC10157182. DOI: 10.3389/fmed.2023.1164188.
Abstract
Objective: To automatically and rapidly recognize the layers of corneal images obtained by in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a computer-aided diagnostic model based on deep learning was developed and tested to reduce physicians' workload.
Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, which included a layer recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, to identify the layers of corneal images and distinguish normal images from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by 4 ophthalmologists and artificial intelligence (AI). To evaluate the efficacy of the model, 8 trainees were asked to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effects of model assistance.
Results: The accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for the recognition of the 4 layers (epithelium, Bowman's membrane, stroma, and endothelium) in the internal test dataset, respectively, and 0.961, 0.932, 0.945, and 0.959 for the recognition of normal/abnormal images at each layer, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of specialists and higher than that of senior physicians, and its recognition speed was 237 times faster than that of specialists. With model assistance, the accuracy of trainees increased from 0.712 to 0.886.
Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images; it rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficiency of clinical diagnosis and assist physicians in training and learning for clinical purposes.
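Conceptually, the described system is a two-stage classifier: a layer-recognition model routes each IVCM image to a layer-specific normal/abnormal model. A schematic sketch follows; the model objects and the layer ordering are placeholders rather than the authors' code.

```python
# Two-stage IVCM inference: recognize the corneal layer, then apply that layer's
# normal/abnormal classifier (schematic sketch with placeholder callables).
LAYERS = ["epithelium", "bowmans_membrane", "stroma", "endothelium"]

def diagnose(image, layer_model, abnormality_models):
    """layer_model: 4-way classifier; abnormality_models: dict layer -> binary classifier."""
    layer = LAYERS[int(layer_model(image).argmax())]
    abnormal = bool(abnormality_models[layer](image).argmax())  # 0 = normal, 1 = abnormal
    return layer, abnormal
```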
Affiliation(s)
- Yulin Yan: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Weiyan Jiang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yiwen Zhou: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yi Yu: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Linying Huang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Shanshan Wan: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Hongmei Zheng: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Miao Tian: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Huiling Wu: Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Li Huang: Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Lianlian Wu: Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Simin Cheng: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yuelan Gao: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Jiewen Mao: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yujin Wang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Yuyu Cong: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Qian Deng: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Xiaoshuo Shi: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Zixian Yang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Qingmei Miao: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Biqing Zheng: Department of Resources and Environmental Sciences, Resources and Environmental Sciences of Wuhan University, Wuhan, Hubei Province, China
- Yujing Wang: Department of Ophthalmology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Yanning Yang (correspondence): Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
9. Li Z, Jiang J, Qiang W, Guo L, Liu X, Weng H, Wu S, Zheng Q, Chen W. Comparison of deep learning systems and cornea specialists in detecting corneal diseases from low-quality images. iScience 2021; 24:103317. PMID: 34778732. PMCID: PMC8577078. DOI: 10.1016/j.isci.2021.103317.
Abstract
The performance of deep learning in disease detection from high-quality clinical images is comparable to, and sometimes exceeds, that of human doctors. However, in low-quality images, deep learning performs poorly. Whether human doctors also perform poorly on low-quality images is unknown. Here, we compared the performance of deep learning systems with that of cornea specialists in detecting corneal diseases from low-quality slit lamp images. The results showed that the cornea specialists performed better than our previously established deep learning system (PEDLS), which was trained on only high-quality images. The performance of the system trained on both high- and low-quality images was superior to that of the PEDLS but inferior to that of a senior cornea specialist. This study highlights that cornea specialists perform better on low-quality images than a system trained on high-quality images, and that adding low-quality images with sufficient diagnostic certainty to the training set can reduce this performance gap.
Highlights:
- Deep learning performs poorly in low-quality images for detecting corneal diseases
- Cornea specialists perform better than the PEDLS in low-quality images
- The performance of the NDLS is better than that of the PEDLS in low-quality images
- Adding low-quality images to the training set can improve the system's performance
Affiliation(s)
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Liufei Guo: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Xiaotian Liu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Hongfei Weng: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Shanjun Wu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Qinxiang Zheng: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Wei Chen: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
10. Li Z, Zhang X, Ding L, Du K, Yan J, Chan MTV, Wu WKK, Li S. Deep learning approach for guiding three-dimensional computed tomography reconstruction of lower limbs for robotically-assisted total knee arthroplasty. Int J Med Robot 2021; 17:e2300. PMID: 34109730. DOI: 10.1002/rcs.2300.
Abstract
BACKGROUND: Robotic-assisted total knee arthroplasty (TKA) is performed to improve the accuracy of bone resection and mechanical alignment. Among the steps of these TKA systems, 3D reconstruction of CT data of the lower limbs consumes significant manpower. Artificial intelligence (AI) algorithms based on deep learning have proved efficient in automated identification and visual processing.
METHODS: CT scans of 200 lower limbs were used for AI-based 3D model construction, and CT scans of 20 lower limbs were used for verification.
RESULTS: We showed that the performance of AI-guided 3D reconstruction of lower-limb CT data for robotic-assisted TKA was similar to that of the operator-based approach. The time for 3D lower-limb model construction using AI was 4.7 min. AI-based 3D models can be used for surgical planning.
CONCLUSION: AI was used for the first time to guide the 3D reconstruction of lower-limb CT data to facilitate robotic-assisted TKA. Incorporating AI in 3D model reconstruction before TKA might reduce the workload of radiologists.
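One plausible final step of such an AI-guided reconstruction, turning a per-voxel bone segmentation (for example, the output of a 3D segmentation network) into a surface model, can be sketched with marching cubes. The segmentation network itself and the voxel spacing are assumptions; the authors' actual pipeline may differ.

```python
# Convert a binary bone segmentation of a CT volume into a surface mesh (sketch).
import numpy as np
from skimage import measure

def mask_to_mesh(bone_mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """bone_mask: (D, H, W) binary array of segmented bone voxels; spacing in mm."""
    verts, faces, normals, _ = measure.marching_cubes(
        bone_mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals   # mesh suitable for downstream surgical planning
```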
Affiliation(s)
- Zheng Li: Department of Orthopaedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiaofeng Zhang: BEIJING HURWA-ROBOT Medical Technology Co. Ltd, Beijing, China
- Lele Ding: BEIJING HURWA-ROBOT Medical Technology Co. Ltd, Beijing, China
- Kebin Du: BEIJING HURWA-ROBOT Medical Technology Co. Ltd, Beijing, China
- Jun Yan: BEIJING HURWA-ROBOT Medical Technology Co. Ltd, Beijing, China
- Matthew T V Chan: Department of Anaesthesia and Intensive Care and Peter Hung Pain Research Institute, The Chinese University of Hong Kong, Hong Kong
- William K K Wu: Department of Anaesthesia and Intensive Care and Peter Hung Pain Research Institute, The Chinese University of Hong Kong, Hong Kong; State Key Laboratory of Digestive Diseases, Centre for Gut Microbiota Research, Institute of Digestive Diseases and LKS Institute of Health Sciences, The Chinese University of Hong Kong, Hong Kong
- Shugang Li: Department of Orthopaedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China