251
Wen J, Liu D, Wu Q, Zhao L, Iao WC, Lin H. Retinal image‐based artificial intelligence in detecting and predicting kidney diseases: Current advances and future perspectives. VIEW 2023. [DOI: 10.1002/viw.20220070]
Affiliation(s)
- Jingyi Wen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Dong Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Qianni Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat‐sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Disease, Guangzhou, China
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat‐sen University, Guangzhou, China
252
Jia L, Wu W, Hou G, Zhao J, Qiang Y, Zhang Y, Cai M. Residual neural network with mixed loss based on batch training technique for identification of EGFR mutation status in lung cancer. Multimedia Tools and Applications 2023; 82:1-21. [PMID: 37362735] [PMCID: PMC10020767] [DOI: 10.1007/s11042-023-14876-2]
Abstract
Epidermal growth factor receptor (EGFR) status is key to targeted therapy with tyrosine kinase inhibitors in lung cancer. Traditional identification of EGFR mutation status requires biopsy and sequencing, which is not suitable for patients who cannot undergo biopsy. In this paper, using easily accessible and non-invasive CT images, a residual neural network (ResNet) with a mixed loss based on a batch training technique is proposed for identifying EGFR mutation status in lung cancer. In this model, the ResNet serves as the baseline feature extractor and mitigates vanishing gradients. In addition, a new mixed loss combining batch similarity and cross entropy is proposed to guide the network to better learn the model parameters. The proposed mixed loss uses the similarity among batch samples to evaluate the distribution of the training data, reducing similarity between different classes and variation within the same class. In the experiments, VGG16Net, DenseNet, ResNet18, ResNet34 and ResNet50 models with the mixed loss are trained on a public CT dataset from TCIA of 155 patients with known EGFR mutation status. The trained networks are then applied to a preoperative CT dataset of 56 patients collected from the cooperating hospital to validate the effectiveness of the proposed models. Experimental results show that the proposed models are effective for identifying EGFR mutation status on the lung cancer dataset. Among these models, ResNet34 with the mixed loss is optimal (accuracy = 81.58%, AUC = 0.8861, sensitivity = 80.02%, specificity = 82.90%).
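The mixed loss described in the abstract combines cross entropy with a batch-similarity term that pulls same-class features together and pushes different-class features apart. Below is a minimal NumPy sketch of that general idea only; the function names, the cosine-similarity formulation, and the weighting factor `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class."""
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

def batch_similarity_loss(features, labels):
    """Penalize dissimilar same-class pairs and similar different-class
    pairs within a batch, using cosine similarity of feature vectors."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                          # pairwise cosine similarities
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)            # ignore self-pairs
    diff = 1.0 - same
    np.fill_diagonal(diff, 0.0)
    # same-class pairs should approach sim = 1; different-class pairs sim <= 0
    loss_same = np.sum(same * (1.0 - sim)) / max(same.sum(), 1.0)
    loss_diff = np.sum(diff * np.maximum(sim, 0.0)) / max(diff.sum(), 1.0)
    return loss_same + loss_diff

def mixed_loss(probs, features, labels, alpha=0.5):
    """Weighted sum of cross entropy and the batch-similarity term."""
    return cross_entropy(probs, labels) + alpha * batch_similarity_loss(features, labels)
```

In a training loop, the similarity term would be computed on the penultimate-layer features of each mini-batch and back-propagated together with the cross entropy.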
Affiliation(s)
- Liye Jia
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
- Wei Wu
- Department of Physiology, Shanxi Medical University, Taiyuan, 030051 China
- Guojie Hou
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
- Juanjuan Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
- Yanan Zhang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
- Meiling Cai
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, 030600 China
253
Jiang X, Xie M, Ma L, Dong L, Li D. International publication trends in the application of artificial intelligence in ophthalmology research: an updated bibliometric analysis. Annals of Translational Medicine 2023; 11:219. [PMID: 37007552] [PMCID: PMC10061466] [DOI: 10.21037/atm-22-3773]
Abstract
Background The literature on artificial intelligence (AI)-related topics has expanded rapidly over the last two decades, showing that AI is a crucial force in advancing ophthalmology. This analysis aims to provide a dynamic and longitudinal bibliometric analysis of AI-related ophthalmic papers. Methods The Web of Science was searched to retrieve papers on the application of AI in ophthalmology published in English up to May 2022. The variables were analyzed using Microsoft Excel 2019 and GraphPad Prism 9. Data visualization was performed using VOSviewer and CiteSpace. Results In this study, a total of 1,686 publications were analyzed. AI-related ophthalmology research has recently increased exponentially. China was the most productive country in this research field, with 483 articles, but the United States of America (446 publications) contributed most to the sum of citations and the H-index. The League of European Research Universities and Ting DSW (Daniel Shu Wei Ting) were the most prolific institution and researcher, respectively. The field is primarily concerned with diabetic retinopathy (DR), glaucoma, optical coherence tomography, and the classification and diagnosis of fundus images. Current hotspots in AI research include deep learning, diagnosing and predicting systemic disorders from fundus images, the incidence and progression of ocular diseases, and outcome prediction. Conclusions This analysis thoroughly reviews AI-related research in ophthalmology to help academics better understand its growth and possible practice consequences. The association between ocular and systemic biomarkers, telemedicine, real-world studies, and the development and application of new AI algorithms, such as vision transformers, will continue to be research hotspots over the next few years.
Affiliation(s)
- Xue Jiang
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Minyue Xie
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Lan Ma
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Li Dong
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
- Dongmei Li
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Capital Medical University, Beijing Tongren Hospital, Beijing, China
254
Automatic Diagnosis of Infectious Keratitis Based on Slit Lamp Images Analysis. J Pers Med 2023; 13:519. [PMID: 36983701] [PMCID: PMC10056612] [DOI: 10.3390/jpm13030519]
Abstract
Infectious keratitis (IK) is a common ophthalmic emergency that requires prompt and accurate treatment. This study aimed to propose a deep learning (DL) system based on slit lamp images to automatically screen and diagnose infectious keratitis. The study established a dataset of 2757 slit lamp images from 744 patients, covering normal cornea, viral keratitis (VK), fungal keratitis (FK), and bacterial keratitis (BK). Six different DL algorithms were developed and evaluated for the classification of infectious keratitis. Among all the models, EfficientNetV2-M showed the best classification ability, with an accuracy of 0.735, a recall of 0.680, and a specificity of 0.904, which was also superior to two ophthalmologists. The overall area under the receiver operating characteristic curve (AUC) of EfficientNetV2-M was 0.85: 1.00 for normal cornea, 0.87 for VK, 0.87 for FK, and 0.64 for BK. The findings suggest that the proposed DL system can perform well in classifying normal corneas and different types of infectious keratitis based on slit lamp images. This study demonstrates the potential of the DL model to help ophthalmologists identify infectious keratitis and improve the accuracy and efficiency of diagnosis.
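The per-class recall (sensitivity) and specificity reported for this multi-class keratitis classifier follow from one-vs-rest counts in a confusion matrix. A small illustrative sketch (not the study's evaluation code):

```python
import numpy as np

def class_metrics(cm, k):
    """Recall (sensitivity) and specificity for class k, one-vs-rest,
    from a confusion matrix cm[true_label, predicted_label]."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp          # class-k cases predicted as something else
    fp = cm[:, k].sum() - tp       # other classes predicted as class k
    tn = cm.sum() - tp - fn - fp   # everything else
    return tp / (tp + fn), tn / (tn + fp)
```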
255
Patel C, Pande S, Sagathia V, Ranch K, Beladiya J, Boddu SHS, Jacob S, Al-Tabakha MM, Hassan N, Shahwan M. Nanocarriers for the Delivery of Neuroprotective Agents in the Treatment of Ocular Neurodegenerative Diseases. Pharmaceutics 2023; 15:837. [PMID: 36986699] [PMCID: PMC10052766] [DOI: 10.3390/pharmaceutics15030837]
Abstract
Retinal neurodegeneration is considered an early event in the pathogenesis of several ocular diseases, such as diabetic retinopathy, age-related macular degeneration, and glaucoma. At present, there is no definitive treatment to prevent the progression or reversal of vision loss caused by photoreceptor degeneration and the death of retinal ganglion cells. Neuroprotective approaches are being developed to increase the life expectancy of neurons by maintaining their shape/function and thus prevent the loss of vision and blindness. A successful neuroprotective approach could prolong patients' vision functioning and quality of life. Conventional pharmaceutical technologies have been investigated for delivering ocular medications; however, the distinctive structural characteristics of the eye and the physiological ocular barriers restrict the efficient delivery of drugs. Recent developments in bio-adhesive in situ gelling systems and nanotechnology-based targeted/sustained drug delivery systems are receiving a lot of attention. This review summarizes the putative mechanism, pharmacokinetics, and mode of administration of neuroprotective drugs used to treat ocular disorders. Additionally, this review focuses on cutting-edge nanocarriers that demonstrated promising results in treating ocular neurodegenerative diseases.
Affiliation(s)
- Chirag Patel
- Department of Pharmacology, L. M. College of Pharmacy, Ahmedabad 380009, India
- Sonal Pande
- Department of Pharmacology, L. M. College of Pharmacy, Ahmedabad 380009, India
- Vrunda Sagathia
- Department of Pharmacology, L. M. College of Pharmacy, Ahmedabad 380009, India
- Ketan Ranch
- Department of Pharmaceutics, L. M. College of Pharmacy, Ahmedabad 380009, India
- Jayesh Beladiya
- Department of Pharmacology, L. M. College of Pharmacy, Ahmedabad 380009, India
- Sai H. S. Boddu
- Department of Pharmaceutical Sciences, College of Pharmacy and Health Sciences, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Center of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Shery Jacob
- Department of Pharmaceutical Sciences, College of Pharmacy, Gulf Medical University, Ajman P.O. Box 4184, United Arab Emirates
- Moawia M. Al-Tabakha
- Department of Pharmaceutical Sciences, College of Pharmacy and Health Sciences, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Center of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Nageeb Hassan
- Center of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Department of Clinical Sciences, College of Pharmacy & Health Science, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Moyad Shahwan
- Center of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Department of Clinical Sciences, College of Pharmacy & Health Science, Ajman University, Ajman P.O. Box 346, United Arab Emirates
256
Artificial Intelligence for Diabetic Retinopathy Screening Using Color Retinal Photographs: From Development to Deployment. Ophthalmol Ther 2023; 12:1419-1437. [PMID: 36862308] [PMCID: PMC10164194] [DOI: 10.1007/s40123-023-00691-3]
Abstract
Diabetic retinopathy (DR), a leading cause of preventable blindness, is expected to remain a growing health burden worldwide. Screening to detect early sight-threatening lesions of DR can reduce the burden of vision loss; nevertheless, the process requires intensive manual labor and extensive resources to accommodate the increasing number of patients with diabetes. Artificial intelligence (AI) has been shown to be an effective tool that can potentially lower the burden of DR screening and vision loss. In this article, we review the use of AI for DR screening on color retinal photographs in different phases of application, ranging from development to deployment. Early studies of machine learning (ML)-based algorithms using feature extraction to detect DR achieved high sensitivity but relatively lower specificity. Robust sensitivity and specificity were achieved with the application of deep learning (DL), although ML is still used for some tasks. Public datasets were utilized in retrospective validations of the developmental phases of most algorithms, which require a large number of photographs. Large prospective clinical validation studies led to the approval of DL for autonomous screening of DR, although the semi-autonomous approach may be preferable in some real-world settings. There have been few reports on real-world implementations of DL for DR screening. It is possible that AI may improve some real-world indicators for eye care in DR, such as increased screening uptake and referral adherence, but this has not been proven. The challenges in deployment may include workflow issues, such as mydriasis to reduce the rate of ungradable cases; technical issues, such as integration into electronic health record systems and into existing camera systems; ethical issues, such as data privacy and security; acceptance by personnel and patients; and health-economic issues, such as the need for country-specific health economic evaluations of AI. The deployment of AI for DR screening should follow the governance model for AI in healthcare, which outlines four main components: fairness, transparency, trustworthiness, and accountability.
257
Teo ZL, Ting DSW. AI telemedicine screening in ophthalmology: health economic considerations. Lancet Glob Health 2023; 11:e318-e320. [PMID: 36702140] [DOI: 10.1016/S2214-109X(23)00037-2]
Affiliation(s)
- Zhen Ling Teo
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore
- Daniel Shu Wei Ting
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore.
258
Artificial intelligence using deep learning to predict the anatomical outcome of rhegmatogenous retinal detachment surgery: a pilot study. Graefes Arch Clin Exp Ophthalmol 2023; 261:715-721. [PMID: 36303063] [DOI: 10.1007/s00417-022-05884-3]
Abstract
PURPOSE To develop and evaluate an automated deep learning model to predict the anatomical outcome of rhegmatogenous retinal detachment (RRD) surgery. METHODS Six thousand six hundred and sixty-one digital images of RRD treated by vitrectomy and internal tamponade were collected from the British and Eire Association of Vitreoretinal Surgeons database. Each image was classified as a primary surgical success or a primary surgical failure. The synthetic minority over-sampling technique was used to address class imbalance. We adopted the state-of-the-art deep convolutional neural network architecture Inception v3 to train, validate, and test deep learning models to predict the anatomical outcome of RRD surgery. The area under the curve (AUC), sensitivity, and specificity for predicting the outcome of RRD surgery were calculated for the best predictive deep learning model. RESULTS The deep learning model predicted the anatomical outcome of RRD surgery with an AUC of 0.94, a sensitivity of 73.3%, and a specificity of 96%. CONCLUSION A deep learning model is capable of accurately predicting the anatomical outcome of RRD surgery. This fully automated model has potential application in the surgical care of patients with RRD.
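The synthetic minority over-sampling technique (SMOTE) used here for class imbalance creates new minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbors. Below is a simplified sketch of that textbook definition; the parameter names are illustrative and the study's actual implementation may differ:

```python
import numpy as np

def smote(minority, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples: pick a seed point,
    then interpolate a random fraction of the way toward one of its
    k nearest minority-class neighbors."""
    rng = np.random.default_rng(rng)
    X = np.asarray(minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X[i] + lam * (X[j] - X[i]))
    return np.array(out)
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the convex hull of the minority class.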
259
Jian M, Chen H, Tao C, Li X, Wang G. Triple-DRNet: A triple-cascade convolution neural network for diabetic retinopathy grading using fundus images. Comput Biol Med 2023; 155:106631. [PMID: 36805216] [DOI: 10.1016/j.compbiomed.2023.106631]
Abstract
Diabetic retinopathy (DR) is a common ocular complication of diabetes and a leading cause of blindness worldwide. Automatic and efficient DR grading plays a vital role in timely treatment. However, it is difficult for traditional convolutional neural networks (CNNs) to effectively distinguish distinct lesion types (such as neovascularization in proliferative DR and microaneurysms in mild NPDR), which greatly affects the final classification results. In this article, we propose a triple-cascade network model (Triple-DRNet) to solve this issue. Triple-DRNet effectively subdivides the five-class DR grading task and improves grading performance as follows: (1) in the first stage, the network performs a binary classification, DR versus no DR; (2) in the second stage, a cascade network distinguishes PDR from NPDR; (3) the final cascade network differentiates mild, moderate, and severe NPDR. Experimental results show that Triple-DRNet achieves an accuracy of 92.08% on the APTOS 2019 Blindness Detection dataset, and the QWK metric reaches 93.62%, which demonstrates the effectiveness of the devised Triple-DRNet compared with other mainstream models.
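The three-stage cascade in (1)–(3) amounts to a simple control flow in which each stage is a trained classifier. The sketch below uses placeholder callables and label strings as illustrative assumptions; in Triple-DRNet each stage would be a CNN:

```python
def triple_cascade_grade(image, stage1, stage2, stage3):
    """Three-stage cascade: stage1 decides DR vs. no DR, stage2 decides
    PDR vs. NPDR, stage3 grades NPDR as mild/moderate/severe.
    Each stage is a callable returning a label string."""
    if stage1(image) == "no_dr":
        return "no_dr"
    if stage2(image) == "pdr":
        return "pdr"
    return stage3(image)  # "mild" | "moderate" | "severe"
```

The design choice is that each stage only sees the cases the previous stage passed along, so every classifier solves a narrower, easier discrimination problem than a single five-way model.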
Affiliation(s)
- Muwei Jian
- School of Information Science and Technology, Linyi University, Linyi, China; School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China.
- Hongyu Chen
- School of Information Science and Technology, Linyi University, Linyi, China
- Chen Tao
- School of Information Science and Technology, Linyi University, Linyi, China
- Xiaoguang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China.
- Gaige Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, China
260
Shen A, Chiang M, Pardeshi AA, McKean-Cowdin R, Varma R, Xu BY. Anterior segment biometric measurements explain misclassifications by a deep learning classifier for detecting gonioscopic angle closure. Br J Ophthalmol 2023; 107:349-354. [PMID: 34615666] [PMCID: PMC8983788] [DOI: 10.1136/bjophthalmol-2021-319058]
Abstract
BACKGROUND/AIMS To identify biometric parameters that explain misclassifications by a deep learning classifier for detecting gonioscopic angle closure in anterior segment optical coherence tomography (AS-OCT) images. METHODS Chinese American Eye Study (CHES) participants underwent gonioscopy and AS-OCT of each angle quadrant. A subset of CHES AS-OCT images was analysed using a deep learning classifier to detect positive angle closure based on manual gonioscopy by a reference human examiner. Parameter measurements were compared between four prediction classes: true positives (TPs), true negatives (TNs), false positives (FPs) and false negatives (FNs). Logistic regression models were developed to differentiate between true and false predictions. Performance was assessed using area under the receiver operating curve (AUC) and classifier accuracy metrics. RESULTS 584 images from 127 participants were analysed, yielding 271 TPs, 224 TNs, 77 FPs and 12 FNs. Measurements of anterior segment parameters, including iris curvature (IC) and lens vault (LV), and of angle parameters, including angle opening distance (AOD), differed (p<0.001) between prediction classes. FPs resembled TPs more than FNs and TNs in terms of anterior segment parameters (steeper IC and higher LV), but resembled TNs more than TPs and FNs in terms of angle parameters (wider AOD). Models for detecting FPs (AUC=0.752) and FNs (AUC=0.838) improved classifier accuracy from 84.8% to 89.0%. CONCLUSIONS Misclassifications by an OCT-based deep learning classifier for detecting gonioscopic angle closure are explained by disagreement between anterior segment and angle parameters. This finding could be used to improve classifier performance and highlights differences between gonioscopic and AS-OCT definitions of angle closure.
Affiliation(s)
- Alice Shen
- Department of Ophthalmology, USC Keck School of Medicine, Los Angeles, California, USA
- Michael Chiang
- Department of Ophthalmology, USC Keck School of Medicine, Los Angeles, California, USA
- Anmol A Pardeshi
- Department of Ophthalmology, USC Keck School of Medicine, Los Angeles, California, USA
- Roberta McKean-Cowdin
- Department of Preventive Medicine, USC Keck School of Medicine, Los Angeles, California, USA
- Rohit Varma
- Southern California Eye Institute, CHA Hollywood Presbyterian Medical Center, Los Angeles, California, USA
- Benjamin Y Xu
- Department of Ophthalmology, USC Keck School of Medicine, Los Angeles, California, USA
261
Artificial intelligence-assisted endoscopic ultrasound in the diagnosis of gastrointestinal stromal tumors: a meta-analysis. Surg Endosc 2023; 37:1649-1657. [PMID: 36100781] [DOI: 10.1007/s00464-022-09597-w]
Abstract
BACKGROUND AND AIMS Endoscopic ultrasonography (EUS) is useful for the diagnosis of gastrointestinal stromal tumors (GISTs) but is limited by subjective interpretation. Studies on artificial intelligence (AI)-assisted diagnosis are under development. Here, we used a meta-analysis to evaluate the diagnostic performance of AI in the diagnosis of GISTs using EUS images. METHODS The PubMed, Ovid Medline, Embase, Web of Science, and Cochrane Library databases were searched for studies on AI-assisted EUS for the diagnosis of GISTs, and a meta-analysis was performed to examine the accuracy. RESULTS Overall, 7 studies were included in our meta-analysis. A total of 2431 patients, comprising more than 36,186 images, formed the overall dataset, of which 480 patients were used for the final testing. The pooled sensitivity, specificity, and positive and negative likelihood ratios (LRs) of AI-assisted EUS for differentiating GISTs from other submucosal tumors (SMTs) were 0.92 (95% confidence interval [CI] 0.89-0.95), 0.82 (95% CI 0.75-0.87), 4.55 (95% CI 2.64-7.84), and 0.12 (95% CI 0.07-0.20), respectively. The summary diagnostic odds ratio (DOR) and the area under the curve were 64.70 (95% CI 23.83-175.69) and 0.950 (Q* = 0.891). CONCLUSIONS AI-assisted EUS showed high accuracy for the automatic endoscopic diagnosis of GISTs and could serve as a valuable complementary method for the differentiation of SMTs in the future.
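For reference, the positive/negative likelihood ratios and the diagnostic odds ratio reported above are simple functions of sensitivity and specificity. Note that the abstract's pooled values come from a meta-analytic model, so plugging the pooled sensitivity and specificity into these point formulas will not exactly reproduce the pooled LRs and DOR; the sketch below only illustrates the definitions:

```python
def likelihood_ratios(sens, spec):
    """Positive likelihood ratio, negative likelihood ratio, and
    diagnostic odds ratio from sensitivity and specificity."""
    lr_pos = sens / (1.0 - spec)          # P(test+|disease) / P(test+|no disease)
    lr_neg = (1.0 - sens) / spec          # P(test-|disease) / P(test-|no disease)
    return lr_pos, lr_neg, lr_pos / lr_neg
```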
262
Li HY, Dong L, Zhou WD, Wu HT, Zhang RH, Li YT, Yu CY, Wei WB. Development and validation of medical record-based logistic regression and machine learning models to diagnose diabetic retinopathy. Graefes Arch Clin Exp Ophthalmol 2023; 261:681-689. [PMID: 36239780] [DOI: 10.1007/s00417-022-05854-9]
Abstract
PURPOSE Many factors have been reported to be associated with diabetic retinopathy (DR); however, their contributions remain unclear. We aimed to evaluate the prognostic and diagnostic accuracy of logistic regression and three machine learning models based on various medical records. METHODS This was a cross-sectional study. We investigated the prevalence and associations of DR among 757 participants aged 40 years or older in the 2005-2006 National Health and Nutrition Examination Survey (NHANES). We trained the models to predict whether participants had DR using 15 predictor variables. The area under the receiver operating characteristic curve (AUROC) and mean squared error (MSE) of each algorithm were compared in an external validation dataset using a replicate cohort from NHANES 2007-2008. RESULTS Among the 757 participants, 53 (7.00%) had DR; their mean (standard deviation, SD) age was 57.7 (13.04) years, and 78.0% were male (n = 42). Logistic regression revealed that female gender (OR = 4.130, 95% CI: 1.820-9.380; P < 0.05), HbA1c (OR = 1.665, 95% CI: 1.197-2.317; P < 0.05), serum creatinine level (OR = 2.952, 95% CI: 1.274-6.851; P < 0.05), and eGFR level (OR = 1.009, 95% CI: 1.000-1.014; P < 0.05) increased the risk of DR. The average performance in internal validation was similar across all models (AUROC ≥ 0.945), and k-nearest neighbors (KNN) had the highest value, with an AUROC of 0.984. In external validation, the models remained robust or showed only modest reductions in discrimination, with AUROC still ≥ 0.902; KNN again performed best, with an AUROC of 0.982. Both logistic regression and machine learning models performed well in the clinical diagnosis of DR. CONCLUSIONS This study highlights the utility of comparing traditional logistic regression to machine learning models. We found that logistic regression performed as well as optimized machine learning methods when classifying DR patients.
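The AUROC used above to compare the models equals the probability that a randomly chosen case scores higher than a randomly chosen control (the Mann-Whitney statistic, with ties counted as one half). A brute-force sketch for illustration only; practical evaluations use library routines over sorted scores:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability a positive outranks a negative
    (Mann-Whitney U / (n_pos * n_neg), ties counted as 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```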
Affiliation(s)
- He-Yan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Wen-Da Zhou
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Hao-Tian Wu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Rui-Heng Zhang
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Yi-Tong Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Chu-Yao Yu
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China
- Wen-Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, 1 Dong Jiao Min Lane, Beijing, 100730, China.
|
263
|
Liu L, Wu X, Lin D, Zhao L, Li M, Yun D, Lin Z, Pang J, Li L, Wu Y, Lai W, Xiao W, Shang Y, Feng W, Tan X, Li Q, Liu S, Lin X, Sun J, Zhao Y, Yang X, Ye Q, Zhong Y, Huang X, He Y, Fu Z, Xiang Y, Zhang L, Zhao M, Qu J, Xu F, Lu P, Li J, Xu F, Wei W, Dong L, Dai G, He X, Yan W, Zhu Q, Lu L, Zhang J, Zhou W, Meng X, Li S, Shen M, Jiang Q, Chen N, Zhou X, Li M, Wang Y, Zou H, Zhong H, Yang W, Shou W, Zhong X, Yang Z, Ding L, Hu Y, Tan G, He W, Zhao X, Chen Y, Liu Y, Lin H. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. [PMID: 36669488 PMCID: PMC9975093 DOI: 10.1016/j.xcrm.2022.100912]
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built mainly from high-quality images preprocessed in the laboratory, which are not representative of real-world settings. This dataset bias has proven to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting that addresses this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) above 0.9 for image classification of overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both the model development and clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
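The per-dimension AUCs reported above can, in principle, be reproduced with a simple rank-based estimator. The sketch below computes an AUC per quality dimension via the Mann-Whitney interpretation; the scores and labels are purely hypothetical illustrations, not DeepFundus data.

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive example outscores a randomly chosen negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-dimension quality scores for six fundus images
# (label 1 = inadequate on that dimension, 0 = adequate).
dimensions = {
    "overall":    ([0.95, 0.80, 0.70, 0.40, 0.30, 0.10], [1, 1, 1, 0, 0, 0]),
    "structural": ([0.90, 0.60, 0.85, 0.20, 0.50, 0.15], [1, 0, 1, 0, 1, 0]),
}
for name, (scores, labels) in dimensions.items():
    print(name, round(auc(scores, labels), 3))   # → overall 1.0, structural 0.889
```

The same estimator generalizes to each of the classifier's output heads by computing one AUC per quality dimension, as in the multidimensional evaluation described above.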
Affiliation(s)
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Weibo Feng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xiao Tan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Qiang Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Shenzhen Liu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinxin Lin
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Jiaxin Sun
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yiqi Zhao
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Ximei Yang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Qinying Ye
- Department of Ophthalmology, Second Affiliated Hospital, Guangdong Medical University, Zhanjiang, Guangdong, China
| | - Yuesi Zhong
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xi Huang
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yuan He
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
| | - Ziwei Fu
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
| | - Yi Xiang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Li Zhang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| | - Mingwei Zhao
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
| | - Jinfeng Qu
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
| | - Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Peng Lu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | | | - Xingru He
- School of Public Health, He University, Shenyang, Liaoning, China
| | - Wentao Yan
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Qiaolin Zhu
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Linna Lu
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Jiaying Zhang
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Zhou
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
| | - Xiangda Meng
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
| | - Shiying Li
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
| | - Mei Shen
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
| | - Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Nan Chen
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Xingtao Zhou
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | - Meiyan Li
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
| | - Yan Wang
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
| | - Haohan Zou
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
| | - Hua Zhong
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
| | - Wenyan Yang
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
| | - Wulin Shou
- Jiaxing Chaoju Eye Hospital, Jiaxing, Zhejiang, China
| | - Xingwu Zhong
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Zhenduo Yang
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
| | - Yongcheng Hu
- Bayannur Xudong Eye Hospital, Bayannur, Inner Mongolia, China
| | - Gang Tan
- Department of Ophthalmology, The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, Hunan, China
| | - Wanji He
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
|
264
|
Li Z, Chen W. Solving data quality issues of fundus images in real-world settings by ophthalmic AI. Cell Rep Med 2023; 4:100951. [PMID: 36812885 PMCID: PMC9975325 DOI: 10.1016/j.xcrm.2023.100951]
Abstract
Liu et al. develop a deep-learning-based, flow-cytometry-like image quality classifier, DeepFundus, for the automated, high-throughput, and multidimensional classification of fundus image quality. DeepFundus significantly improves the real-world performance of established artificial intelligence diagnostics in detecting multiple retinopathies.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China.
| |
|
265
|
Joseph N, Benetz BA, Chirra P, Menegay H, Oellerich S, Baydoun L, Melles GRJ, Lass JH, Wilson DL. Machine Learning Analysis of Postkeratoplasty Endothelial Cell Images for the Prediction of Future Graft Rejection. Transl Vis Sci Technol 2023; 12:22. [PMID: 36790821 PMCID: PMC9940770 DOI: 10.1167/tvst.12.2.22]
Abstract
Purpose This study developed machine learning (ML) classifiers of postoperative corneal endothelial cell images to identify postkeratoplasty patients at risk for allograft rejection within 1 to 24 months of treatment. Methods Central corneal endothelium specular microscopic images were obtained from 44 patients after Descemet membrane endothelial keratoplasty (DMEK), half of whom had experienced graft rejection. After deep learning segmentation of images from all patients' last and second-to-last imaging time points prior to rejection (175 and 168 images, respectively), 432 quantitative features were extracted assessing cellular spatial arrangements and cell intensity values. Random forest (RF) and logistic regression (LR) models were trained on novel-to-this-application features from single time points, delta-radiomics, and traditional morphometrics (endothelial cell density, coefficient of variation, hexagonality) via 10 iterations of threefold cross-validation. Final assessments were evaluated on a held-out test set. Results ML classifiers trained on novel-to-this-application features outperformed those trained on traditional morphometrics for predicting future graft rejection. RF and LR models predicted post-DMEK patients' allograft rejection in the held-out test set with >0.80 accuracy. RF models trained on novel features from second-to-last time points and delta-radiomics predicted post-DMEK patients' rejection with >0.70 accuracy. Cell-graph spatial arrangement, intensity, and shape features were most indicative of graft rejection. Conclusions ML classifiers successfully predicted future graft rejections 1 to 24 months prior to clinically apparent rejection. This technology could aid clinicians in identifying patients at risk for graft rejection and guide treatment plans accordingly. Translational Relevance Our software applies ML techniques to clinical images and enhances patient care by detecting preclinical keratoplasty rejection.
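The "10 iterations of threefold cross-validation" scheme used above can be sketched as plain index bookkeeping; the feature extraction and the RF/LR models themselves are out of scope here, and the sketch assumes only that samples are reshuffled between iterations.

```python
import random

def repeated_kfold(n_samples, k=3, repeats=10, seed=0):
    """Yield (train, test) index lists for `repeats` independent rounds of
    k-fold cross-validation, reshuffling the samples before each round."""
    rng = random.Random(seed)
    for _ in range(repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]          # k disjoint folds
        for held_out in range(k):
            test = folds[held_out]
            train = [i for f, fold in enumerate(folds)
                     if f != held_out for i in fold]
            yield train, test

splits = list(repeated_kfold(n_samples=44))            # 44 patients, as above
print(len(splits))                                     # 10 rounds x 3 folds = 30
```

Each round partitions all samples into three disjoint folds, so every patient appears in exactly one test fold per round; a classifier would be fit on `train` and scored on `test` for each of the 30 splits before the final held-out evaluation.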
Affiliation(s)
- Naomi Joseph
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Beth Ann Benetz
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA,Cornea Image Analysis Reading Center, Cleveland, OH, USA
| | - Prathyush Chirra
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Harry Menegay
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA,Cornea Image Analysis Reading Center, Cleveland, OH, USA
| | - Silke Oellerich
- Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands
| | - Lamis Baydoun
- Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands,University Eye Hospital Münster, Münster, Germany,ELZA Institute Dietikon/Zurich, Zurich, Switzerland
| | - Gerrit R. J. Melles
- Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands,NIIOS-USA, San Diego, CA, USA
| | - Jonathan H. Lass
- Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA,Cornea Image Analysis Reading Center, Cleveland, OH, USA
| | - David L. Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| |
|
266
|
Peeters F, Rommes S, Elen B, Gerrits N, Stalmans I, Jacob J, De Boever P. Artificial Intelligence Software for Diabetic Eye Screening: Diagnostic Performance and Impact of Stratification. J Clin Med 2023; 12:jcm12041408. [PMID: 36835942 PMCID: PMC9967595 DOI: 10.3390/jcm12041408]
Abstract
AIM To evaluate the MONA.health artificial intelligence screening software for detecting referable diabetic retinopathy (DR) and diabetic macular edema (DME), including subgroup analysis. METHODS The algorithm's threshold value was fixed at the 90% sensitivity operating point on the receiver operating characteristic (ROC) curve to perform the disease classification. Diagnostic performance was appraised on a private test set and publicly available datasets. Stratification analysis was executed on the private test set considering age, ethnicity, sex, insulin dependency, year of examination, camera type, image quality, and dilatation status. RESULTS The software displayed an area under the curve (AUC) of 97.28% for DR and 98.08% for DME on the private test set. The specificity and sensitivity for combined DR and DME predictions were 94.24% and 90.91%, respectively. The AUC ranged from 96.91% to 97.99% on the publicly available datasets for DR. AUC values were above 95% in all subgroups, with lower predictive values found for individuals above the age of 65 (82.51% sensitivity) and Caucasians (84.03% sensitivity). CONCLUSION We report good overall performance of the MONA.health screening software for DR and DME. The software performance remains stable, with no significant deterioration of the deep learning models in any studied strata.
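Fixing the operating point at 90% sensitivity, as described above, amounts to choosing the highest decision threshold that still recalls at least 90% of positives on a tuning set. A minimal sketch with hypothetical scores (not the MONA.health algorithm or its data):

```python
import math

def threshold_at_sensitivity(scores, labels, target=0.90):
    """Highest threshold t such that classifying score >= t as positive
    yields sensitivity >= target on the given tuning data."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    k = math.ceil(target * len(pos))   # number of positives that must be recalled
    return pos[k - 1]

# Hypothetical prediction scores and ground-truth labels (1 = referable).
scores = [0.95, 0.90, 0.85, 0.40, 0.80, 0.30, 0.20, 0.75, 0.60, 0.10]
labels = [1,    1,    1,    1,    0,    0,    0,    1,    0,    0]
t = threshold_at_sensitivity(scores, labels, target=0.75)
sens = sum(s >= t for s, y in zip(scores, labels) if y == 1) / labels.count(1)
print(t, sens)   # → 0.75 0.8
```

Once fixed on the development data, the same threshold is applied unchanged to the test sets, which is why specificity (here unconstrained) can drift across strata while sensitivity stays near its target.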
Affiliation(s)
- Freya Peeters
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Correspondence:
| | - Stef Rommes
- MONA.health, 3060 Bertem, Belgium
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Bart Elen
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Nele Gerrits
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
| | - Ingeborg Stalmans
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
| | - Julie Jacob
- Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium
- Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
| | - Patrick De Boever
- Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
- Centre for Environmental Sciences, Hasselt University, Diepenbeek, 3500 Hasselt, Belgium
| |
|
267
|
Wang H, Meng X, Tang Q, Hao Y, Luo Y, Li J. Development and Application of a Standardized Testset for an Artificial Intelligence Medical Device Intended for the Computer-Aided Diagnosis of Diabetic Retinopathy. J Healthc Eng 2023; 2023:7139560. [PMID: 36818382 PMCID: PMC9931476 DOI: 10.1155/2023/7139560]
Abstract
Objective To explore a centralized approach to build test sets and assess the performance of an artificial intelligence medical device (AIMD) intended for computer-aided diagnosis of diabetic retinopathy (DR). Methods A framework was proposed to conduct data collection, data curation, and annotation. Deidentified colour fundus photographs were collected from 11 partner hospitals with raw labels. Photographs with sensitive information or authenticity issues were excluded during vetting. A team of annotators was recruited through qualification examinations and trained. The annotation process included three steps: initial annotation, review, and arbitration. The annotated data then composed a standardized test set, which was further imported to algorithms under test (AUT) from different developers. The algorithm outputs were compared with the final annotation results (reference standard). Results The test set consists of 6327 digital colour fundus photographs. The final labels include 5 stages of DR and non-DR, as well as other ocular diseases and photographs with unacceptable quality. The Fleiss kappa was 0.75 among the annotators, and the Cohen's kappa between raw labels and final labels was 0.5. Using this test set, five AUTs were tested and compared quantitatively. The metrics included accuracy, sensitivity, and specificity. The AUTs showed inhomogeneous capabilities to classify different types of fundus photographs. Conclusions This article demonstrated a workflow to build standardized test sets and conduct algorithm testing of an AIMD for computer-aided diagnosis of diabetic retinopathy. It may provide a reference for developing technical standards that promote product verification and quality control, improving the comparability of products.
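The agreement statistics quoted above (Fleiss kappa 0.75 among annotators, Cohen's kappa 0.5 between raw and final labels) correct raw agreement for chance. A minimal Cohen's kappa for two label lists; the DR labels below are illustrative placeholders, not the study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters, corrected for
    the agreement expected by chance from each rater's label frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label]
                   for label in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical raw hospital labels vs final arbitrated labels.
raw   = ["DR2", "DR2", "noDR", "DR1", "noDR", "DR3"]
final = ["DR2", "DR1", "noDR", "DR1", "DR1",  "DR3"]
print(round(cohens_kappa(raw, final), 3))   # → 0.571
```

Fleiss kappa extends the same chance-corrected idea to more than two raters; a modest Cohen's kappa of 0.5, as reported, indicates substantial disagreement between raw labels and the arbitrated reference standard.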
Affiliation(s)
- Hao Wang
- Institute for Medical Device Control, National Institutes for Food and Drug Control, 31 Huatuo Rd, Beijing 102629, China
| | - Xiangfeng Meng
- Institute for Medical Device Control, National Institutes for Food and Drug Control, 31 Huatuo Rd, Beijing 102629, China
| | - Qiaohong Tang
- Institute for Medical Device Control, National Institutes for Food and Drug Control, 31 Huatuo Rd, Beijing 102629, China
| | - Ye Hao
- Institute for Medical Device Control, National Institutes for Food and Drug Control, 31 Huatuo Rd, Beijing 102629, China
| | - Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, No. 54 Xianlie South Road, Yuexiu District, Guangzhou 510060, Guangdong, China
| | - Jiage Li
- Institute for Medical Device Control, National Institutes for Food and Drug Control, 31 Huatuo Rd, Beijing 102629, China
| |
|
268
|
A Neural Network for Automated Image Quality Assessment of Optic Disc Photographs. J Clin Med 2023; 12:jcm12031217. [PMID: 36769865 PMCID: PMC9917571 DOI: 10.3390/jcm12031217]
Abstract
This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment for optic disc photographs achieves high accuracy in both ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with potential applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.
|
269
|
Wu Y, Olvera-Barrios A, Yanagihara R, Kung TPH, Lu R, Leung I, Mishra AV, Nussinovitch H, Grimaldi G, Blazes M, Lee CS, Egan C, Tufail A, Lee AY. Training Deep Learning Models to Work on Multiple Devices by Cross-Domain Learning with No Additional Annotations. Ophthalmology 2023; 130:213-222. [PMID: 36154868 PMCID: PMC9868052 DOI: 10.1016/j.ophtha.2022.09.014]
Abstract
PURPOSE To create an unsupervised cross-domain segmentation algorithm for segmenting intraretinal fluid and retinal layers on normal and pathologic macular OCT images from different manufacturers and camera devices. DESIGN We sought to use generative adversarial networks (GANs) to generalize a segmentation model trained on one OCT device to segment B-scans obtained from a different OCT device manufacturer in a fully unsupervised approach without labeled data from the latter manufacturer. PARTICIPANTS A total of 732 OCT B-scans from 4 different OCT devices (Heidelberg Spectralis, Topcon 1000, Maestro2, and Zeiss Plex Elite 9000). METHODS We developed an unsupervised GAN model, GANSeg, to segment 7 retinal layers and intraretinal fluid in Topcon 1000 OCT images (domain B) that had access only to labeled data on Heidelberg Spectralis images (domain A). GANSeg was unsupervised because it had access only to 110 labeled Heidelberg OCTs and 556 raw and unlabeled Topcon 1000 OCTs. To validate GANSeg segmentations, 3 masked graders independently segmented 60 OCTs from an external Topcon 1000 test dataset. To test the limits of GANSeg, graders also manually segmented 3 OCTs from Zeiss Plex Elite 9000 and Topcon Maestro2. A U-Net was trained on the same labeled Heidelberg images as a baseline. The GANSeg repository with labeled annotations is at https://github.com/uw-biomedical-ml/ganseg. MAIN OUTCOME MEASURES Dice scores comparing segmentation results from GANSeg and the U-Net model with the manually segmented images. RESULTS Although GANSeg and U-Net achieved Dice score performance comparable with human experts on the labeled Heidelberg test dataset, only GANSeg did so on the Topcon 1000 domain, with the best performance for the ganglion cell layer plus inner plexiform layer (90%; 95% confidence interval [CI], 68%-96%) and the worst performance for intraretinal fluid (58%; 95% CI, 18%-89%), which was statistically similar to human graders (79%; 95% CI, 43%-94%). GANSeg significantly outperformed the U-Net model. Moreover, GANSeg generalized to both Zeiss and Topcon Maestro2 swept-source OCT domains, which it had never encountered before. CONCLUSIONS GANSeg enables the transfer of supervised deep learning algorithms across OCT devices without labeled data, thereby greatly expanding the applicability of deep learning algorithms.
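The Dice scores used above as the main outcome measure quantify the overlap between two segmentation masks. A minimal version for flattened binary masks; the toy masks are illustrative, not OCT data:

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks,
    given as flattened 0/1 sequences of equal length."""
    assert len(mask_a) == len(mask_b)
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0   # two empty masks agree fully

prediction = [0, 1, 1, 1, 0, 0]   # e.g. one row of a predicted layer mask
reference  = [0, 1, 1, 0, 0, 0]   # corresponding row of a grader's mask
print(round(dice(prediction, reference), 3))     # → 0.8
```

In practice the masks would be full 2D B-scan label maps, with one Dice score computed per retinal layer (and for intraretinal fluid) against each grader's manual segmentation.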
Affiliation(s)
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Abraham Olvera-Barrios
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Ryan Yanagihara
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | | | - Randy Lu
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Irene Leung
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Amit V Mishra
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | | | - Gabriela Grimaldi
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
| | - Marian Blazes
- Department of Ophthalmology, University of Washington, Seattle, Washington
| | - Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
| | - Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; Roger and Angie Karalis Johnson Retina Center, Seattle, Washington.
| |
|
270
|
Domínguez C, Heras J, Mata E, Pascual V, Royo D, Zapata MÁ. Binary and multi-class automated detection of age-related macular degeneration using convolutional- and transformer-based architectures. Comput Methods Programs Biomed 2023; 229:107302. [PMID: 36528999 DOI: 10.1016/j.cmpb.2022.107302]
Abstract
BACKGROUND AND OBJECTIVE Age-related macular degeneration (AMD) is an eye disease that occurs when ageing damages the macula, and it is the leading cause of blindness in developed countries. Screening retinal fundus images allows ophthalmologists to detect, diagnose and treat this disease early; however, the manual interpretation of images is a time-consuming task. In this paper, we aim to study different deep learning methods to diagnose AMD. METHODS We have conducted a thorough study of two families of deep learning models based on convolutional neural networks (CNN) and transformer architectures to automatically diagnose referable/non-referable AMD, and grade AMD severity scales (no AMD, early AMD, intermediate AMD, and advanced AMD). In addition, we have analysed several progressive resizing strategies and ensemble methods for convolutional-based architectures to further improve the performance of the models. RESULTS As a first result, we have shown that transformer-based architectures obtain considerably worse results than convolutional-based architectures for diagnosing AMD. Moreover, we have built a model for diagnosing referable AMD that yielded a mean F1-score (SD) of 92.60% (0.47), a mean AUROC (SD) of 97.53% (0.40), and a mean weighted kappa coefficient (SD) of 85.28% (0.91); and an ensemble of models for grading AMD severity scales with a mean accuracy (SD) of 82.55% (2.92), and a mean weighted kappa coefficient (SD) of 84.76% (2.45). CONCLUSIONS This work shows that convolutional-based architectures are more suitable than transformer-based models for classifying and grading AMD from retinal fundus images. Furthermore, convolutional models can be improved by means of progressive resizing strategies and ensemble methods.
Affiliation(s)
- César Domínguez
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | - Jónathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, Spain.
| | - Eloy Mata
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | - Vico Pascual
- Department of Mathematics and Computer Science, University of La Rioja, Spain
| | | | - Miguel Ángel Zapata
- UPRetina, Barcelona, Spain; Hospital Vall Hebron, Passeig Roser 126, Sant Cugat del Vallés, 08195 Barcelona, Spain
| |
|
271
|
Soh ZD, Jiang Y, S/O Ganesan SS, Zhou M, Nongpiur M, Majithia S, Tham YC, Rim TH, Qian C, Koh V, Aung T, Wong TY, Xu X, Liu Y, Cheng CY. From 2 dimensions to 3rd dimension: Quantitative prediction of anterior chamber depth from anterior segment photographs via deep-learning. PLOS DIGITAL HEALTH 2023; 2:e0000193. [PMID: 36812642 PMCID: PMC9931242 DOI: 10.1371/journal.pdig.0000193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 01/06/2023] [Indexed: 02/04/2023]
Abstract
Anterior chamber depth (ACD) is a major risk factor for angle closure disease and has been used in angle closure screening in various populations. However, ACD is measured with an ocular biometer or anterior segment optical coherence tomography (AS-OCT), which are costly and may not be readily available in primary care and community settings. Thus, this proof-of-concept study aims to predict ACD from low-cost anterior segment photographs (ASPs) using deep learning (DL). We included 2,311 pairs of ASPs and ACD measurements for algorithm development and validation, and 380 pairs for algorithm testing. We captured ASPs with a digital camera mounted on a slit-lamp biomicroscope. Anterior chamber depth was measured with an ocular biometer (IOLMaster700 or Lenstar LS9000) in the data used for algorithm development and validation, and with AS-OCT (Visante) in the data used for testing. The DL algorithm was modified from the ResNet-50 architecture and assessed using mean absolute error (MAE), coefficient of determination (R2), Bland-Altman plots and intraclass correlation coefficients (ICC). In validation, our algorithm predicted ACD with an MAE (standard deviation) of 0.18 (0.14) mm; R2 = 0.63. The MAE of predicted ACD was 0.18 (0.14) mm in eyes with open angles and 0.19 (0.14) mm in eyes with angle closure. The ICC between actual and predicted ACD measurements was 0.81 (95% CI 0.77, 0.84). In testing, our algorithm predicted ACD with an MAE of 0.23 (0.18) mm; R2 = 0.37. Saliency maps highlighted the pupil and its margin as the main structures used in ACD prediction. This study demonstrates the possibility of predicting ACD from ASPs via DL. The algorithm mimics an ocular biometer in making its prediction, and provides a foundation for predicting other quantitative measurements relevant to angle closure screening.
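The evaluation above relies on standard agreement statistics: MAE, R2, and Bland-Altman limits of agreement. A minimal plain-Python sketch of these measures, for illustration only (the study's own analysis pipeline is not published here):

```python
import math

def agreement_stats(actual, predicted):
    """Return MAE, R^2, and the Bland-Altman 95% limits of agreement."""
    n = len(actual)
    errors = [p - a for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / n
    # Coefficient of determination R^2 = 1 - SS_res / SS_tot
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    # Bland-Altman: mean difference (bias) +/- 1.96 * SD of the differences
    bias = sum(errors) / n
    sd = math.sqrt(sum((e - bias) ** 2 for e in errors) / (n - 1))
    return mae, r2, (bias - 1.96 * sd, bias + 1.96 * sd)
```

The Bland-Altman limits complement MAE by showing whether prediction errors are systematically biased or merely scattered around zero.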
Affiliation(s)
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Yixing Jiang
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
| | | | - Menghan Zhou
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
| | - Monisha Nongpiur
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Shivani Majithia
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Yih Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Chaoxu Qian
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Victor Koh
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, National University Hospital, Singapore
| | - Tin Aung
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tsinghua Medicine, Tsinghua University, China
| | - Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
| | - Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
| | - Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
| |
|
272
|
Khalili Pour E, Rezaee K, Azimi H, Mirshahvalad SM, Jafari B, Fadakar K, Faghihi H, Mirshahi A, Ghassemi F, Ebrahimiadib N, Mirghorbani M, Bazvand F, Riazi-Esfahani H, Riazi Esfahani M. Automated machine learning-based classification of proliferative and non-proliferative diabetic retinopathy using optical coherence tomography angiography vascular density maps. Graefes Arch Clin Exp Ophthalmol 2023; 261:391-399. [PMID: 36050474 DOI: 10.1007/s00417-022-05818-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Revised: 08/07/2022] [Accepted: 08/23/2022] [Indexed: 01/17/2023] Open
Abstract
PURPOSE The study aims to classify eyes with proliferative diabetic retinopathy (PDR) and non-proliferative diabetic retinopathy (NPDR) based on optical coherence tomography angiography (OCTA) vascular density maps using a supervised machine learning algorithm. METHODS OCTA vascular density maps (at the superficial capillary plexus (SCP), deep capillary plexus (DCP), and total retina (R) levels) of 148 eyes from 78 patients with diabetic retinopathy (45 PDR and 103 NPDR) were used to classify the images into NPDR and PDR groups with a supervised machine learning algorithm, a support vector machine (SVM) classifier optimized by a genetic evolutionary algorithm. RESULTS The implemented algorithm, in three different models, reached up to 85% accuracy in classifying PDR and NPDR at all three levels of vascular density maps. The deep retinal layer vascular density map demonstrated the best performance, with 90% accuracy in discriminating between PDR and NPDR. CONCLUSIONS This study, on a limited number of patients with diabetic retinopathy, demonstrated that a supervised machine learning method, the SVM, can be used to differentiate PDR and NPDR patients using OCTA vascular density maps.
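The classifier here is an SVM whose hyperparameters were tuned by a genetic evolutionary algorithm. Purely as a sketch of that tuning idea (the paper does not describe its encoding, operators, or fitness, so everything below is an assumption, and the stand-in fitness function replaces what would be a cross-validated SVM score):

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Tiny real-valued GA: elitism, blend crossover, gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]  # carry the fitter half over unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # blend crossover
            k = rng.randrange(dim)                        # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stand-in fitness with a peak at C=10, gamma=0.1 (in a real run this would be
# a cross-validated SVM score over the OCTA feature maps)
def mock_cv_score(params):
    C, gamma = params
    return -((C - 10.0) ** 2 + (gamma - 0.1) ** 2)

best = evolve(mock_cv_score, bounds=[(0.1, 100.0), (0.001, 1.0)])
```

Because the fitter half survives each generation untouched, the best score found is monotonically non-decreasing, which is the property that makes this simple scheme usable for hyperparameter search.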
Affiliation(s)
- Elias Khalili Pour
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran
| | - Hossein Azimi
- Faculty of Mathematical Sciences and Computer, Kharazmi University, Tehran, Iran
| | - Seyed Mohammad Mirshahvalad
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Behzad Jafari
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Kaveh Fadakar
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Hooshang Faghihi
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Ahmad Mirshahi
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Fariba Ghassemi
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Nazanin Ebrahimiadib
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Masoud Mirghorbani
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Fatemeh Bazvand
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran
| | - Hamid Riazi-Esfahani
- Retina Service, Eye Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Qazvin Square, South Karegar Street, Tehran, Iran.
| | - Mohammad Riazi Esfahani
- Department of Ophthalmology, Gavin Herbert Eye Institute, University of California Irvine, Irvine, CA, USA
| |
|
273
|
Kuwahara T, Hara K, Mizuno N, Haba S, Okuno N, Kuraishi Y, Fumihara D, Yanaidani T, Ishikawa S, Yasuda T, Yamada M, Onishi S, Yamada K, Tanaka T, Tajika M, Niwa Y, Yamaguchi R, Shimizu Y. Artificial intelligence using deep learning analysis of endoscopic ultrasonography images for the differential diagnosis of pancreatic masses. Endoscopy 2023; 55:140-149. [PMID: 35688454 DOI: 10.1055/a-1873-7920] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
BACKGROUND: There are several types of pancreatic mass, so it is important to distinguish between them before treatment. Artificial intelligence (AI) is a mathematical technique that automates the learning and recognition of data patterns. This study aimed to investigate the efficacy of our AI model using endoscopic ultrasonography (EUS) images of multiple types of pancreatic mass (pancreatic ductal adenocarcinoma [PDAC], pancreatic adenosquamous carcinoma [PASC], acinar cell carcinoma [ACC], metastatic pancreatic tumor [MPT], neuroendocrine carcinoma [NEC], neuroendocrine tumor [NET], solid pseudopapillary neoplasm [SPN], chronic pancreatitis, and autoimmune pancreatitis [AIP]). METHODS: Patients who underwent EUS were included in this retrospective study and divided into training, validation, and test cohorts. Using these cohorts, an AI model that can distinguish pancreatic carcinomas from noncarcinomatous pancreatic lesions was developed with a deep-learning architecture, and its diagnostic performance was evaluated. RESULTS: 22,000 images were generated from 933 patients. The area under the curve, sensitivity, specificity, and accuracy (95% CI) of the AI model for the diagnosis of pancreatic carcinomas in the test cohort were 0.90 (0.84-0.97), 0.94 (0.88-0.98), 0.82 (0.68-0.92), and 0.91 (0.85-0.95), respectively. The per-category sensitivities (95% CI) were PDAC 0.96 (0.90-0.99), PASC 1.00 (0.05-1.00), ACC 1.00 (0.22-1.00), MPT 0.33 (0.01-0.91), NEC 1.00 (0.22-1.00), NET 0.93 (0.66-1.00), SPN 1.00 (0.22-1.00), chronic pancreatitis 0.78 (0.52-0.94), and AIP 0.73 (0.39-0.94). CONCLUSIONS: Our AI model can distinguish pancreatic carcinomas from noncarcinomatous pancreatic lesions, but external validation is needed.
Affiliation(s)
- Takamichi Kuwahara
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Kazuo Hara
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Nobumasa Mizuno
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Shin Haba
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Nozomi Okuno
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Yasuhiro Kuraishi
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Daiki Fumihara
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Takafumi Yanaidani
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Sho Ishikawa
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Tsukasa Yasuda
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Masanori Yamada
- Department of Gastroenterology, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Sachiyo Onishi
- Department of Endoscopy, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Keisaku Yamada
- Department of Endoscopy, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Tsutomu Tanaka
- Department of Endoscopy, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Masahiro Tajika
- Department of Endoscopy, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Yasumasa Niwa
- Department of Endoscopy, Aichi Cancer Center Hospital, Nagoya, Japan
| | - Rui Yamaguchi
- Division of Cancer Systems Biology, Aichi Cancer Center Research Institute, Nagoya, Japan
- Division of Cancer Informatics, Nagoya University Graduate School of Medicine, Nagoya, Japan
| | - Yasuhiro Shimizu
- Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, Nagoya, Japan
| |
|
274
|
Morano J, Hervella ÁS, Rouco J, Novo J, Fernández-Vigo JI, Ortega M. Weakly-supervised detection of AMD-related lesions in color fundus images using explainable deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107296. [PMID: 36481530 DOI: 10.1016/j.cmpb.2022.107296] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 11/16/2022] [Accepted: 11/29/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVES Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. Nowadays, AMD is the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow down its development, their effectiveness diminishes significantly in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. To characterize the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. In this sense, several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians better understand the decisions of the models. This is especially relevant, since the absence of such mechanisms limits the application of automatic methods in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. METHODS In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, lesion identification is derived directly from independent lesion activation maps; the diagnosis is then obtained from the identified lesions. The training is performed end-to-end using image-level labels; thus, lesion-specific activation maps are learned in a weakly-supervised manner. The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach makes it possible to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data necessary for the approach can be obtained without much extra work on the part of clinicians, since lesion information is typically present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. RESULTS The experiments conducted on 4 different datasets demonstrate that the proposed approach is able to identify AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that the models trained with the proposed approach are able to identify the pathological areas within the image and, in most cases, correctly determine which lesion they correspond to. CONCLUSIONS The proposed approach provides meaningful information (lesion identification and lesion activation maps) that conveniently explains and complements the diagnosis, and is of particular interest to clinicians for the diagnostic process. Moreover, the data needed to train the networks with the proposed approach are commonly easy to obtain, which represents an important advantage in fields with particularly scarce data, such as medical imaging.
Affiliation(s)
- José Morano
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
| | - Álvaro S Hervella
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
| | - José Rouco
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
| | - Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
| | - José I Fernández-Vigo
- Department of Ophthalmology, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria (IdISSC), Madrid, Spain; Department of Ophthalmology, Centro Internacional de Oftalmología Avanzada, Madrid, Spain.
| | - Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
| |
|
275
|
Fea AM, Ricardi F, Novarese C, Cimorosi F, Vallino V, Boscia G. Precision Medicine in Glaucoma: Artificial Intelligence, Biomarkers, Genetics and Redox State. Int J Mol Sci 2023; 24:2814. [PMID: 36769127 PMCID: PMC9917798 DOI: 10.3390/ijms24032814] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/07/2023] [Accepted: 01/18/2023] [Indexed: 02/05/2023] Open
Abstract
Glaucoma is a multifactorial neurodegenerative illness requiring early diagnosis and strict monitoring of disease progression. Current exams for diagnosis and prognosis are based on clinical examination, intraocular pressure (IOP) measurements, visual field tests, and optical coherence tomography (OCT). In this scenario, there is a critical unmet demand for glaucoma-related biomarkers to enhance clinical testing for early diagnosis and tracking of the disease's development. The introduction of validated biomarkers would allow for prompt intervention in the clinic to help with prognosis prediction and treatment response monitoring. This review aims to report the latest advances in glaucoma biomarkers, from imaging analysis to genetic and metabolic markers.
|
276
|
Ting DSJ, Deshmukh R, Ting DSW, Ang M. Big data in corneal diseases and cataract: Current applications and future directions. Front Big Data 2023; 6:1017420. [PMID: 36818823 PMCID: PMC9929069 DOI: 10.3389/fdata.2023.1017420] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 01/16/2023] [Indexed: 02/04/2023] Open
Abstract
The accelerated growth in electronic health records (EHR), Internet-of-Things, mHealth, telemedicine, and artificial intelligence (AI) in recent years has significantly fuelled the interest and development in big data research. Big data refer to complex datasets characterized by the attributes of the "5 Vs": variety, volume, velocity, veracity, and value. Big data analytics research has so far benefitted many fields of medicine, including ophthalmology. The availability of such big data not only allows for comprehensive and timely examination of the epidemiology, trends, characteristics, outcomes, and prognostic factors of many diseases, but also enables the development of highly accurate AI algorithms for diagnosing a wide range of medical diseases and for discovering patterns or associations of diseases that were previously unknown to clinicians and researchers. Within the field of ophthalmology, there is a rapidly expanding pool of large clinical registries, epidemiological studies, omics studies, and biobanks through which big data can be accessed. National corneal transplant registries, genome-wide association studies, national cataract databases, and large ophthalmology-related EHR-based registries (e.g., the AAO IRIS Registry) are some of the key resources. In this review, we aim to provide a succinct overview of the availability and clinical applicability of big data in ophthalmology, particularly from the perspective of corneal diseases and cataract; the synergistic potential of big data, AI technologies, the Internet of Things, mHealth, and wearable smart devices; and the potential barriers to realizing the clinical and research potential of big data in this field.
Affiliation(s)
- Darren S. J. Ting
- Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, United Kingdom
- Birmingham and Midland Eye Centre, Birmingham, United Kingdom
- Academic Ophthalmology, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- *Correspondence: Darren S. J. Ting
| | - Rashmi Deshmukh
- Department of Cornea and Refractive Surgery, LV Prasad Eye Institute, Hyderabad, India
| | - Daniel S. W. Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore (NUS) Medical School, Singapore, Singapore
| | - Marcus Ang
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Department of Ophthalmology and Visual Sciences, Duke-National University of Singapore (NUS) Medical School, Singapore, Singapore
| |
|
277
|
Zhou B, Rao X, Xing H, Ma Y, Wang F, Rong L. A convolutional neural network-based system for detecting early gastric cancer in white-light endoscopy. Scand J Gastroenterol 2023; 58:157-162. [PMID: 36000979 DOI: 10.1080/00365521.2022.2113427] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
BACKGROUND White-light endoscopy (WLE) is the main standard modality for the detection of early gastric cancer (EGC), but the detection rate of EGC remains unsatisfactory. In this single-center retrospective study, we developed a convolutional neural network (CNN)-based system to automatically detect EGC in WLE images. METHODS An EGC detection system was constructed based on the CNN architecture EfficientDet. We trained our system with a data set of 4527 images from 130 cases (cancerous images, 1737; noncancerous images, 2790) and then tested its performance with a data set of 1243 images from 64 cases (cancerous images, 445; noncancerous images, 798). RESULTS In case-based analysis, our system successfully detected EGC in 63 of 64 cases, for a sensitivity of 98.4%. In image-based analysis, the accuracy was 88.3%; the sensitivity, specificity, positive predictive value and negative predictive value were 84.5%, 90.5%, 83.2% and 91.3%, respectively. The most common cause of false positives was gastritis (57.9%). The most common cause of false negatives was small lesion size, i.e., a diameter of 10 mm or less (44.9%). CONCLUSION Our CNN-based EGC detection system achieved satisfactory sensitivity for detecting EGC in WLE images and shows great potential in assisting endoscopists with the detection of EGC.
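The image-based results above are the standard confusion-matrix measures. A tiny helper for computing them (the counts in the usage example are invented for illustration, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of cancerous images caught
        "specificity": tn / (tn + fp),  # fraction of noncancerous images cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Example with made-up counts (not the study's data)
m = diagnostic_metrics(tp=8, fp=2, tn=9, fn=1)
```

Reporting PPV and NPV alongside sensitivity and specificity matters because the latter pair alone hides the effect of class balance on how trustworthy individual flags are.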
Affiliation(s)
- Bin Zhou
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Xiaolong Rao
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Haoqiang Xing
- Thunder Software Technology Co., Ltd, Beijing, China
| | - Yongchen Ma
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Feng Wang
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| | - Long Rong
- Department of Endoscopy Center, Peking University First Hospital, Beijing, China
| |
|
278
|
Nunez do Rio JM, Nderitu P, Raman R, Rajalakshmi R, Kim R, Rani PK, Sivaprasad S, Bergeles C. Using deep learning to detect diabetic retinopathy on handheld non-mydriatic retinal images acquired by field workers in community settings. Sci Rep 2023; 13:1392. [PMID: 36697482 PMCID: PMC9876892 DOI: 10.1038/s41598-023-28347-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 01/17/2023] [Indexed: 01/26/2023] Open
Abstract
Diabetic retinopathy (DR) at risk of vision loss (referable DR) needs to be identified by retinal screening and referred to an ophthalmologist. Existing automated algorithms have mostly been developed from images acquired with high-cost mydriatic retinal cameras and cannot be applied in the settings used in most low- and middle-income countries. In this prospective multicentre study, we developed a deep learning system (DLS) that detects referable DR from retinal images acquired with a handheld non-mydriatic fundus camera by non-technical field workers at 20 sites across India. Macula-centred and optic-disc-centred images from 16,247 eyes (9778 participants) were used to train and cross-validate the DLS and risk factor-based logistic regression models. The DLS achieved an AUROC of 0.99 (1000 times bootstrapped 95% CI 0.98-0.99) using two-field retinal images, with 93.86 (91.34-96.08) sensitivity and 96.00 (94.68-98.09) specificity at the Youden's index operating point. With single-field inputs, the DLS reached an AUROC of 0.98 (0.98-0.98) for the macula field and 0.96 (0.95-0.98) for the optic-disc field. Intergrader performance was 90.01 (88.95-91.01) sensitivity and 96.09 (95.72-96.42) specificity. The image-based DLS outperformed all risk factor-based models. This DLS demonstrated clinically acceptable performance for the identification of referable DR despite challenging image capture conditions.
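The operating point above is chosen at the Youden index, J = sensitivity + specificity - 1. A minimal sketch of picking the J-maximizing score threshold (illustrative only; not the study's implementation, and the toy scores in the test are invented):

```python
def youden_threshold(scores, labels):
    """Pick the score cutoff maximizing J = sensitivity + specificity - 1."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):  # each observed score is a candidate cutoff
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / positives + tn / negatives - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Maximizing J weighs sensitivity and specificity equally, which suits screening settings where both missed referable cases and false referrals are costly.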
Affiliation(s)
- Joan M Nunez do Rio
- Institute of Ophthalmology, University College London, 11-43 Bath St., London, EC1V 9EL, UK.
- Section of Ophthalmology, King's College London, London, WC2R 2LS, UK.
| | - Paul Nderitu
- Institute of Ophthalmology, University College London, 11-43 Bath St., London, EC1V 9EL, UK
- Section of Ophthalmology, King's College London, London, WC2R 2LS, UK
| | | | - Ramachandran Rajalakshmi
- Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, India
| | | | - Padmaja K Rani
- Anand Bajaj Retina Institute, Srimati Kannuri Santhamma Centre for Vitreoretinal Diseases, LV Prasad Eye Institute, Hyderabad, Telangana, India
| | - Sobha Sivaprasad
- Institute of Ophthalmology, University College London, 11-43 Bath St., London, EC1V 9EL, UK
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
| | - Christos Bergeles
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, SE1 7EU, UK
| |
|
279
|
Diao J, Chen X, Shen Y, Li J, Chen Y, He L, Chen S, Mou P, Ma X, Wei R. Research progress and application of artificial intelligence in thyroid associated ophthalmopathy. Front Cell Dev Biol 2023; 11:1124775. [PMID: 36760363 PMCID: PMC9903073 DOI: 10.3389/fcell.2023.1124775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 01/12/2023] [Indexed: 01/25/2023] Open
Abstract
Thyroid-associated ophthalmopathy (TAO) is a complicated orbitopathy related to thyroid dysfunction which, without medical intervention, severely damages facial appearance and quality of life. The diagnosis and management of thyroid-associated ophthalmopathy are extremely intricate, as the number of professional ophthalmologists is limited and inadequate compared with the number of patients. Medical applications based on artificial intelligence (AI) algorithms have now been developed and have proved effective in screening many chronic eye diseases. The advanced characteristics of automated artificial intelligence devices, such as rapidity, portability, and multi-platform compatibility, have led to significant progress in the early diagnosis and detailed evaluation of these diseases in the clinic. This study aimed to provide an overview of recent artificial intelligence applications in clinical diagnosis, activity and severity grading, and prediction of therapeutic outcomes in thyroid-associated ophthalmopathy. It also discusses the current challenges and future prospects of artificial intelligence applications in treating thyroid-associated ophthalmopathy.
|
280
|
End-to-End Deep-Learning-Based Diagnosis of Benign and Malignant Orbital Tumors on Computed Tomography Images. J Pers Med 2023; 13:jpm13020204. [PMID: 36836437 PMCID: PMC9960119 DOI: 10.3390/jpm13020204] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/16/2023] [Accepted: 01/22/2023] [Indexed: 01/26/2023] Open
Abstract
Determining the nature of orbital tumors is challenging for current imaging interpretation methods, which hinders timely treatment. This study aimed to propose an end-to-end deep learning system to automatically diagnose orbital tumors. A multi-center dataset of 602 non-contrast-enhanced computed tomography (CT) images was prepared. After image annotation and preprocessing, the CT images were used to train and test the deep learning (DL) model in the following two stages: orbital tumor segmentation and classification. Performance on the testing set was compared with the assessments of three ophthalmologists. For tumor segmentation, the model achieved a satisfactory performance, with an average Dice similarity coefficient of 0.89. The classification model had an accuracy of 86.96%, a sensitivity of 80.00%, and a specificity of 94.12%. The area under the receiver operating characteristic curve (AUC) of the 10-fold cross-validation ranged from 0.8439 to 0.9546. There was no significant difference in diagnostic performance between the DL-based system and the three ophthalmologists (p > 0.05). The proposed end-to-end deep learning system can deliver accurate segmentation and diagnosis of orbital tumors based on noninvasive CT images. Its effectiveness and independence from human interaction give it potential for tumor screening in the orbit and other parts of the body.
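Segmentation performance above is summarized by the Dice similarity coefficient. A minimal sketch on flat binary masks (real use would apply it to full pixel arrays; this is illustration, not the study's code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for equal-length binary masks."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks count as perfect agreement
    return 2.0 * intersection / total if total else 1.0
```

Unlike plain pixel accuracy, Dice ignores the (usually dominant) background pixels, so a score of 0.89 reflects genuine overlap between predicted and annotated tumor regions.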
|
281
|
Gende M, de Moura J, Novo J, Penedo MG, Ortega M. A new generative approach for optical coherence tomography data scarcity: unpaired mutual conversion between scanning presets. Med Biol Eng Comput 2023; 61:1093-1112. [PMID: 36680707 PMCID: PMC10083164 DOI: 10.1007/s11517-022-02742-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 12/09/2022] [Indexed: 01/22/2023]
Abstract
In optical coherence tomography (OCT), there is a trade-off between scanning time and image quality, leading to a scarcity of high-quality data. OCT platforms provide different scanning presets that produce visually distinct images, limiting their compatibility. In this work, a fully automatic methodology for the unpaired visual conversion of the two most prevalent scanning presets is proposed. Using contrastive unpaired translation generative adversarial architectures, low-quality images acquired with the faster Macular Cube preset can be converted to the visual style of high-visibility Seven Lines scans and vice versa. This modifies the visual appearance of the OCT images generated by each preset while preserving natural tissue structure. The quality of original and synthetically generated images was compared using BRISQUE. The synthetic images achieved scores very similar to those of original images of their target preset. The generative models were validated in automatic and expert separability tests, demonstrating that they were able to replicate the genuine look of the original images. This methodology has the potential to create multi-preset datasets with which to train robust computer-aided diagnosis systems, exposing them to the visual features of the different presets they may encounter in real clinical scenarios without requiring additional data. Graphical Abstract Unpaired mutual conversion between scanning presets. Two generative adversarial models are trained for the conversion of OCT images into images of another scanning preset, replicating the visual features that characterise said preset.
Affiliation(s)
- Mateo Gende
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, 15006 A Coruña, Spain; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
| | - Joaquim de Moura
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, 15006 A Coruña, Spain; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
| | - Jorge Novo
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, 15006 A Coruña, Spain; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
| | - Manuel G Penedo
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, 15006 A Coruña, Spain; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
| | - Marcos Ortega
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, Xubias de Arriba, 84, 15006 A Coruña, Spain; Centro de Investigación CITIC, Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
| |
|
282
|
Zhao K, Zhu X, Zhang M, Xie Z, Yan X, Wu S, Liao P, Lu H, Shen W, Fu C, Cui H, He C, Fang Q, Mei J. Radiologists with assistance of deep learning can achieve overall accuracy of benign-malignant differentiation of musculoskeletal tumors comparable with that of pre-surgical biopsies in the literature. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02838-w. [PMID: 36653517 DOI: 10.1007/s11548-023-02838-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 01/09/2023] [Indexed: 01/19/2023]
Abstract
PURPOSE The purpose of this study was to assess whether radiologists assisted by deep learning (DL) algorithms can achieve diagnostic accuracy comparable to that of pre-surgical biopsies in benign-malignant differentiation of musculoskeletal tumors (MST). METHODS We first conducted a systematic review of the literature to obtain the respective overall diagnostic accuracies of fine-needle aspiration biopsy (FNAB) and core needle biopsy (CNB) in differentiating between benign and malignant MST, by synthesizing data from the articles meeting our inclusion criteria. To compare against the accuracies reported in the literature, we then invited four radiologists, respectively with 2 (A), 6 (B), 7 (C), and 33 (D) years of experience in interpreting musculoskeletal MRI, to perform diagnostic tests on our own dataset (n = 62), with and without the assistance of a previously developed DL algorithm. The gold standard for benign-malignant differentiation was histopathologic confirmation or clinical/radiographic follow-up. RESULTS For FNAB, a meta-analysis containing 4604 samples met the inclusion criteria, with an overall diagnostic accuracy of 0.77. For CNB, an overall accuracy of 0.86 was derived by synthesizing results from 7 original research articles containing a total of 587 samples. On our internal MST dataset, the invited radiologists achieved diagnostic accuracies of 0.84 (A), 0.89 (B), 0.87 (C), and 0.90 (D) with the assistance of DL. CONCLUSION Use of DL algorithms on musculoskeletal dynamic contrast-enhanced MRI improved the benign-malignant differentiation accuracy of radiologists to a level comparable to that of pre-surgical biopsies. The developed DL algorithms have the potential to lower the risk of misdiagnosing malignancy in radiological practice.
Affiliation(s)
- Keyang Zhao
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China
| | - Xiaozhong Zhu
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China
| | - Mingzi Zhang
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
| | - Zhaozhi Xie
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200233, China
| | - Xu Yan
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China
| | - Shenghui Wu
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China
| | - Peng Liao
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China
| | - Hongtao Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200233, China
| | - Wei Shen
- MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, 200233, China
| | - Chicheng Fu
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
| | - Haoyang Cui
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
| | - Chuan He
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
| | - Qu Fang
- Shanghai Aitrox Technology Corporation Limited, Shanghai, China
| | - Jiong Mei
- Department of Orthopedic Surgery, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, No. 600 Yishan Road, Shanghai, 200233, China.
| |
|
283
|
Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions. Diagnostics (Basel) 2023; 13:diagnostics13020326. [PMID: 36673135 PMCID: PMC9857993 DOI: 10.3390/diagnostics13020326] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 01/11/2023] [Accepted: 01/12/2023] [Indexed: 01/18/2023] Open
Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigations of various retinal vascular diseases and glaucoma through non-invasive, individual, and efficient assessment of qualitative and quantitative microvascular changes in the different retinal layers and the radial peripapillary layer. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied to OCT-A image analysis in recent years and has achieved good performance on tasks such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical value for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still at the "proof-of-concept" stage due to several limitations, such as small training sample sizes, lack of standardized data preprocessing, insufficient testing on external datasets, and absence of standardized interpretation of results. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of clinical deployment, and discuss future research directions.
|
284
|
Noguez Imm R, Muñoz-Benitez J, Medina D, Barcenas E, Molero-Castillo G, Reyes-Ortega P, Hughes-Cano JA, Medrano-Gracia L, Miranda-Anaya M, Rojas-Piloni G, Quiroz-Mercado H, Hernández-Zimbrón LF, Fajardo-Cruz ED, Ferreyra-Severo E, García-Franco R, Rubio Mijangos JF, López-Star E, García-Roa M, Lansingh VC, Thébault SC. Preventable risk factors for type 2 diabetes can be detected using noninvasive spontaneous electroretinogram signals. PLoS One 2023; 18:e0278388. [PMID: 36634073 PMCID: PMC9836271 DOI: 10.1371/journal.pone.0278388] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 11/15/2022] [Indexed: 01/13/2023] Open
Abstract
Given the ever-increasing prevalence of type 2 diabetes and obesity, the pressure on global healthcare is expected to be colossal, especially in terms of blindness. The electroretinogram (ERG) has long been perceived as a first-use technique for diagnosing eye diseases, and some studies have suggested its use for detecting preventable risk factors of type 2 diabetes and thereby diabetic retinopathy (DR). Here, we show that in a non-evoked mode, ERG signals contain spontaneous oscillations that predict disease cases in rodent models of obesity and in people with overweight, obesity, and metabolic syndrome but not yet diabetes, using one single random forest-based model. Classification performance was both internally and externally validated, and correlation analysis showed that the spontaneous oscillations of the non-evoked ERG are altered before oscillatory potentials, which are the current gold standard for early DR. Principal component and discriminant analysis suggested that the slow-frequency (0.4-0.7 Hz) components are the main discriminators for our predictive model. In addition, we established that the optimal conditions for recording these informative signals are 5-minute recordings under daylight conditions, using any ERG sensor, including those in portable, non-mydriatic devices. Our study provides an early warning system with promising applications for prevention, monitoring, and even the development of new therapies against type 2 diabetes.
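Isolating the slow 0.4-0.7 Hz band that the authors identify as discriminative amounts to measuring band power in the spontaneous signal's spectrum. The sketch below uses a synthetic 5-minute signal and a plain rFFT periodogram; the sampling rate and waveform are assumptions for illustration, not the study's recordings:

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` in the [lo, hi] Hz band (one-sided periodogram)."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

# Synthetic 5-minute recording at 10 Hz: a 0.5 Hz slow oscillation plus noise.
fs, duration = 10.0, 300.0
t = np.arange(0, duration, 1.0 / fs)
rng = np.random.default_rng(0)
erg = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

slow = band_power(erg, fs, 0.4, 0.7)  # band highlighted by the study
fast = band_power(erg, fs, 2.0, 4.0)  # comparison band, noise only here
print(slow > fast)  # → True: the slow band dominates for this signal
```

Features of this kind (one power value per band per recording) are the sort of inputs a random forest classifier, as used in the study, could consume.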
Affiliation(s)
- Ramsés Noguez Imm
- Instituto de Neurobiología y Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| | - Julio Muñoz-Benitez
- Facultad de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
| | - Diego Medina
- Facultad de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
| | - Everardo Barcenas
- Facultad de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
| | - Guillermo Molero-Castillo
- Facultad de Ingeniería, Universidad Nacional Autónoma de México (UNAM), Ciudad Universitaria, Ciudad de México, Mexico
| | - Pamela Reyes-Ortega
- Instituto de Neurobiología y Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| | - Jorge Armando Hughes-Cano
- Instituto de Neurobiología y Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| | | | - Manuel Miranda-Anaya
- Unidad Multidisciplinaria de Docencia e Investigación-Facultad de Ciencias, Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| | - Gerardo Rojas-Piloni
- Instituto de Neurobiología y Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| | | | - Luis Fernando Hernández-Zimbrón
- Research Department, Asociación Para Evitar la Ceguera, Mexico City, Mexico
- Clínica de Salud Visual, Escuela Nacional de Estudios Superiores, Unidad León, Universidad Nacional Autonóma de México (UNAM), León, Guanajuato, Mexico
| | | | | | - Renata García-Franco
- Instituto de la Retina del Bajío (INDEREB), Prolongación Constituyentes 302 (Consultorios 410 y 411, torre 3, Hospital San José), El jacal, Santiago de Querétaro, Querétaro, Mexico
| | - Juan Fernando Rubio Mijangos
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, Santiago de Querétaro, Querétaro, Mexico
| | - Ellery López-Star
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, Santiago de Querétaro, Querétaro, Mexico
| | - Marlon García-Roa
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, Santiago de Querétaro, Querétaro, Mexico
| | - Van Charles Lansingh
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, Santiago de Querétaro, Querétaro, Mexico
| | - Stéphanie C. Thébault
- Instituto de Neurobiología y Universidad Nacional Autónoma de México (UNAM), Campus UNAM-Juriquilla, Querétaro, Mexico
| |
|
285
|
Ge Z, Wang B, Chang J, Yu Z, Zhou Z, Zhang J, Duan Z. Using deep learning and explainable artificial intelligence to assess the severity of gastroesophageal reflux disease according to the Los Angeles Classification System. Scand J Gastroenterol 2023; 58:596-604. [PMID: 36625026 DOI: 10.1080/00365521.2022.2163185] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
OBJECTIVES Gastroesophageal reflux disease (GERD) is a complex disease with a high worldwide prevalence. The Los Angeles classification (LA-grade) system is meaningful for assessing the endoscopic severity of GERD. Deep learning (DL) methods have been widely used in the field of endoscopy, but few DL-assisted studies have focused on the diagnosis of GERD. This study is the first to develop a five-category classification DL model based on the LA-grade using explainable artificial intelligence (XAI). MATERIALS AND METHODS A total of 2081 endoscopic images were used to develop a DL model, and the classification accuracy of the models and of endoscopists with different levels of experience was compared. RESULTS Several mainstream DL models were evaluated, of which DenseNet-121 performed best. The area under the curve (AUC) of DenseNet-121 was 0.968, and its classification accuracy (86.7%) was significantly higher than that of junior (71.5%) and experienced (77.4%) endoscopists. An XAI evaluation was also performed to explore the consistency of perception between the DL model and endoscopists, which showed meaningful results for real-world applications. CONCLUSIONS The DL model showed potential for improving the accuracy of endoscopists in LA-grading of GERD and has clear prospects for clinical application.
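AUC values like the 0.968 quoted for DenseNet-121 here (and elsewhere in these entries) can be computed without drawing a ROC curve, via the Mann-Whitney formulation: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The scores below are invented for illustration:

```python
import numpy as np

def auc(pos_scores, neg_scores) -> float:
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs where the positive is ranked higher, counting ties as half."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return float(wins / (pos.size * neg.size))

# Made-up classifier scores for diseased (positive) and normal (negative) images.
print(round(auc([0.9, 0.8, 0.4], [0.3, 0.5]), 3))  # → 0.833 (5 of 6 pairs correctly ordered)
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why 0.968 indicates a strongly discriminative model.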
Affiliation(s)
- Zhenyang Ge
- Department of Gastroenterology, The First Affiliated Hospital of Dalian Medical University, Dalian, China.,Department of Digestive Endoscopy, Dalian Municipal Central Hospital, Dalian, China
| | - Bowen Wang
- Science and Technology, Graduate School of Information, Osaka University, Yamadaoka, Osaka, Japan
| | - Jiuyang Chang
- Department of Cardiology, The First Affiliated Hospital of Dalian Medical University, Dalian, China.,Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka University, Yamadaoka, Osaka, Japan
| | - Zequn Yu
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Dalian Medical University, Dalian, China
| | - Zhenyuan Zhou
- Information Management Department, Dalian Municipal Central Hospital, Dalian, China
| | - Jing Zhang
- Department of Digestive Endoscopy, Dalian Municipal Central Hospital, Dalian, China
| | - Zhijun Duan
- Department of Gastroenterology, The First Affiliated Hospital of Dalian Medical University, Dalian, China
| |
|
286
|
Segmentation-Assisted Fully Convolutional Neural Network Enhances Deep Learning Performance to Identify Proliferative Diabetic Retinopathy. J Clin Med 2023; 12:jcm12010385. [PMID: 36615186 PMCID: PMC9821182 DOI: 10.3390/jcm12010385] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 12/27/2022] [Accepted: 12/29/2022] [Indexed: 01/05/2023] Open
Abstract
With the progression of diabetic retinopathy (DR) from the non-proliferative (NPDR) to the proliferative (PDR) stage, the possibility of vision impairment increases significantly. Therefore, it is clinically important to detect progression to the PDR stage for proper intervention. We propose a segmentation-assisted DR classification methodology that builds on (and improves) current methods by using a fully convolutional network (FCN) to segment retinal neovascularizations (NV) in retinal images prior to image classification. This study utilizes the Kaggle EyePACS dataset, containing retinal photographs from patients with varying degrees of DR (mild, moderate, and severe NPDR, and PDR). Two graders (a board-certified ophthalmologist and a trained medical student) annotated the NV. Segmentation was performed by training the FCN to locate neovascularization on 669 retinal fundus photographs labeled with PDR status according to NV presence. The trained segmentation model was used to locate probable NV in images from the classification dataset. Finally, a CNN was trained to classify the combined images and probability maps into categories of PDR. The mean accuracy of segmentation-assisted classification was 87.71% on the test set (SD = 7.71%). Segmentation-assisted classification of PDR achieved accuracy 7.74% better than classification alone. Our study shows that segmentation assistance improves identification of the most severe stage of diabetic retinopathy and has the potential to improve deep learning performance in other imaging problems with limited data availability.
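The core idea of feeding the classifier both the fundus image and the FCN's NV probability map can be expressed as channel concatenation: the map becomes an extra input channel alongside RGB. The array shapes and random contents below are illustrative only, not the study's pipeline:

```python
import numpy as np

# A hypothetical fundus image (H, W, 3) and an FCN-style NV probability map (H, W).
h, w = 8, 8
image = np.random.default_rng(1).random((h, w, 3))
prob_map = np.random.default_rng(2).random((h, w))   # per-pixel NV probability in [0, 1]

# Stack the probability map as a fourth channel so the downstream CNN
# sees both the raw pixels and the segmentation evidence at once.
combined = np.concatenate([image, prob_map[..., None]], axis=-1)
print(combined.shape)  # → (8, 8, 4)
```

The classifier's first convolutional layer then simply expects 4 input channels instead of 3; no other architectural change is required for this style of segmentation assistance.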
|
287
|
Pan Y, Liu J, Cai Y, Yang X, Zhang Z, Long H, Zhao K, Yu X, Zeng C, Duan J, Xiao P, Li J, Cai F, Yang X, Tan Z. Fundus image classification using Inception V3 and ResNet-50 for the early diagnostics of fundus diseases. Front Physiol 2023; 14:1126780. [PMID: 36875027 PMCID: PMC9975334 DOI: 10.3389/fphys.2023.1126780] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 01/27/2023] [Indexed: 02/17/2023] Open
Abstract
Purpose: We aim to present effective computer-aided diagnostics in the field of ophthalmology and improve eye health. This study aims to create an automated deep-learning-based system for categorizing fundus images into three classes (normal, macular degeneration, and tessellated fundus) for the timely recognition and treatment of diabetic retinopathy and other diseases. Methods: A total of 1,032 fundus images were collected from 516 patients using a fundus camera at the Health Management Center, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518055, Guangdong, China. Inception V3 and ResNet-50 deep learning models were then used to classify the fundus images into the three classes. Results: The experimental results show that recognition was best when Adam was used as the optimizer, the number of iterations was 150, and the learning rate was 0.00. With the proposed approach, we achieved the highest accuracies of 93.81% and 91.76% using ResNet-50 and Inception V3, respectively, after fine-tuning and adjusting hyperparameters for our classification problem. Conclusion: Our research provides a reference for clinical diagnosis and screening of diabetic retinopathy and other eye diseases. The suggested computer-aided diagnostics framework can prevent incorrect diagnoses caused by low image quality, limited individual experience, and other factors. In future implementations, ophthalmologists can adopt more advanced learning algorithms to improve diagnostic accuracy.
Affiliation(s)
- Yuhang Pan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Junru Liu
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Yuting Cai
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Xuemei Yang
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Zhucheng Zhang
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Hong Long
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Ketong Zhao
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Xia Yu
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Cui Zeng
- General Practice Alliance, Shenzhen, Guangdong, China.,University Town East Community Health Service Center, Shenzhen, Guangdong, China
| | - Jueni Duan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Ping Xiao
- Department of Otorhinolaryngology Head and Neck Surgery, Shenzhen Children's Hospital, Shenzhen, Guangdong, China
| | - Jingbo Li
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
| | - Feiyue Cai
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China.,General Practice Alliance, Shenzhen, Guangdong, China
| | - Xiaoyun Yang
- Ophthalmology Department, Shenzhen OCT Hospital, Shenzhen, Guangdong, China
| | - Zhen Tan
- Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China.,General Practice Alliance, Shenzhen, Guangdong, China
| |
|
288
|
Du J, Huang M, Liu L. AI-Aided Disease Prediction in Visualized Medicine. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2023; 1199:107-126. [PMID: 37460729 DOI: 10.1007/978-981-32-9902-3_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Artificial intelligence (AI) is playing a vitally important role in driving the technology of the future. Healthcare is one of AI's most promising application areas, covering medical imaging, diagnosis, robotics, disease prediction, pharmacy, health management, and hospital management. Numerous achievements in these fields are overturning every aspect of the traditional healthcare system. Therefore, to convey the state of the art of AI in healthcare, as well as the opportunities and obstacles in its development, this chapter discusses the applications of AI in disease detection and prognosis and the future trends of AI-aided disease prediction.
Affiliation(s)
- Juan Du
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China.
| | - Mengen Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
| | - Lin Liu
- Tianjin Key Laboratory of Retinal Functions and Diseases, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
| |
|
289
|
Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202 PMCID: PMC10020779 DOI: 10.18502/jovr.v18i1.12730] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/05/2022] [Indexed: 02/25/2023] Open
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
| |
|
290
|
Benchmark datasets driving artificial intelligence development fail to capture the needs of medical professionals. J Biomed Inform 2023; 137:104274. [PMID: 36539106 DOI: 10.1016/j.jbi.2022.104274] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 11/22/2022] [Accepted: 12/14/2022] [Indexed: 12/23/2022]
Abstract
Publicly accessible benchmarks that allow for assessing and comparing model performances are important drivers of progress in artificial intelligence (AI). While recent advances in AI capabilities hold the potential to transform medical practice by assisting and augmenting the cognitive processes of healthcare professionals, the coverage of clinically relevant tasks by AI benchmarks is largely unclear. Furthermore, there is a lack of systematized meta-information that allows clinical AI researchers to quickly determine the accessibility, scope, content, and other characteristics of datasets and benchmark datasets relevant to the clinical domain. To address these issues, we curated and released a comprehensive catalogue of datasets and benchmarks pertaining to the broad domain of clinical and biomedical natural language processing (NLP), based on a systematic review of the literature. A total of 450 NLP datasets were manually systematized and annotated with rich metadata, such as targeted tasks, clinical applicability, data types, performance metrics, accessibility and licensing information, and availability of data splits. We then compared the tasks covered by AI benchmark datasets with relevant tasks that medical practitioners reported as highly desirable targets for automation in a previous empirical study. Our analysis indicates that AI benchmarks of direct clinical relevance are scarce and fail to cover most work activities that clinicians want to see addressed. In particular, tasks associated with routine documentation and patient data administration workflows are not represented, despite their significant associated workloads. Thus, currently available AI benchmarks are poorly aligned with desired targets for AI automation in clinical settings, and novel benchmarks should be created to fill these gaps.
|
291
|
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558 PMCID: PMC9038999 DOI: 10.1007/s10462-022-10185-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/04/2022] [Indexed: 02/02/2023]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in the number of diabetic retinopathy (DR) cases which is one of the major complications caused by diabetes. Left unattended, DR worsens the vision and would lead to partial or complete blindness. As the number of diabetics continue to increase exponentially in the coming years, the number of qualified ophthalmologists need to increase in tandem in order to meet the demand for screening of the growing number of diabetic patients. This makes it pertinent to develop ways to automate the detection process of DR. A computer aided diagnosis system has the potential to significantly reduce the burden currently placed on the ophthalmologists. Hence, this review paper is presented with the aim of summarizing, classifying, and analyzing all the recent development on automated DR detection using fundus images from 2015 up to this date. Such work offers an unprecedentedly thorough review of all the recent works on DR, which will potentially increase the understanding of all the recent studies on automated DR detection, particularly on those that deploys machine learning algorithms. Firstly, in this paper, a comprehensive state-of-the-art review of the methods that have been introduced in the detection of DR is presented, with a focus on machine learning models such as convolutional neural networks (CNN) and artificial neural networks (ANN) and various hybrid models. Each AI will then be classified according to its type (e.g. CNN, ANN, SVM), its specific task(s) in performing DR detection. In particular, the models that deploy CNN will be further analyzed and classified according to some important properties of the respective CNN architectures of each model. 
A total of 150 research articles related to the aforementioned areas, published in the last 5 years, were utilized in this review to provide a comprehensive overview of the latest developments in the detection of DR. Supplementary Information: The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
Collapse
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019 People’s Republic of China
| | - Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
292
|
Shiihara H, Sonoda S, Terasaki H, Fujiwara K, Funatsu R, Shiba Y, Kumagai Y, Honda N, Sakamoto T. Wayfinding artificial intelligence to detect clinically meaningful spots of retinal diseases: Artificial intelligence to help retina specialists in real world practice. PLoS One 2023; 18:e0283214. [PMID: 36972243 PMCID: PMC10042340 DOI: 10.1371/journal.pone.0283214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 02/20/2023] [Indexed: 03/29/2023] Open
Abstract
AIM/BACKGROUND The aim of this study was to develop an artificial intelligence (AI) that aids clinicians' thought processes by providing clinically meaningful or abnormal findings rather than just a final diagnosis, i.e., a "wayfinding AI." METHODS Spectral-domain optical coherence tomography (OCT) B-scan images were classified into 189 normal and 111 diseased eyes. These were automatically segmented using a deep-learning-based boundary-layer detection model. During segmentation, the AI model calculates the probability of the layer's boundary surface for each A-scan. If this probability distribution is not concentrated at a single point, layer detection is defined as ambiguous. This ambiguity was quantified using entropy, yielding a value referred to as the ambiguity index for each OCT image. The ability of the ambiguity index to classify normal and diseased images, and to detect the presence or absence of abnormalities in each layer of the retina, was evaluated using the area under the curve (AUC). A heatmap of each layer, i.e., an ambiguity map, whose color varies with the ambiguity-index value, was also created. RESULTS The ambiguity indices of the overall retina in the normal and disease-affected images (mean ± SD) were 1.76 ± 0.10 and 2.06 ± 0.22, respectively, a significant difference (p < 0.05). The AUC for distinguishing normal from disease-affected images using the ambiguity index was 0.93; it was 0.588 for the internal limiting membrane boundary, 0.902 for the nerve fiber layer/ganglion cell layer boundary, 0.920 for the inner plexiform layer/inner nuclear layer boundary, 0.882 for the outer plexiform layer/outer nuclear layer boundary, 0.926 for the ellipsoid zone line, and 0.866 for the retinal pigment epithelium/Bruch's membrane boundary. Three representative cases illustrate the usefulness of the ambiguity map.
CONCLUSIONS The present AI algorithm can pinpoint abnormal retinal lesions in OCT images, and their location is apparent at a glance on an ambiguity map. This will support clinicians' diagnostic processes as a wayfinding tool.
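The entropy-based ambiguity index described in this abstract can be sketched in a few lines. The array layout and the function name below are illustrative assumptions, not the authors' implementation; the paper only states that ambiguity is the entropy of each A-scan's boundary-position probability distribution:

```python
import numpy as np

def ambiguity_index(boundary_probs):
    """Mean Shannon entropy of per-A-scan boundary-position distributions.

    boundary_probs: array of shape (n_ascans, n_depths); each row is the
    model's probability distribution over depth for one layer boundary.
    A sharply peaked row (confident detection) has low entropy; a flat
    row (ambiguous detection) has high entropy.
    """
    p = np.asarray(boundary_probs, dtype=float)
    p = p / p.sum(axis=1, keepdims=True)           # normalize each A-scan
    eps = 1e-12                                    # avoid log(0)
    entropy = -(p * np.log(p + eps)).sum(axis=1)   # per-A-scan entropy (nats)
    return entropy.mean()

# A confident boundary (one-hot rows) scores lower than an ambiguous
# (uniform) one:
confident = np.eye(4)                # 4 A-scans, each peaked at one depth
ambiguous = np.full((4, 4), 0.25)    # 4 A-scans, uniform over 4 depths
assert ambiguity_index(confident) < ambiguity_index(ambiguous)
```

A uniform distribution over n depths gives the maximum value log(n), so the index is bounded and comparable across images of the same depth resolution.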
Collapse
Affiliation(s)
- Hideki Shiihara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Shozo Sonoda
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Sonoda Eye Clinic, Kagoshima, Japan
| | - Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Kazuki Fujiwara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Ryoh Funatsu
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
| |
Collapse
|
293
|
Li Z, Guo X, Zhang J, Liu X, Chang R, He M. Using deep learning models to detect ophthalmic diseases: A comparative study. Front Med (Lausanne) 2023; 10:1115032. [PMID: 36936225 PMCID: PMC10014566 DOI: 10.3389/fmed.2023.1115032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Accepted: 02/03/2023] [Indexed: 03/05/2023] Open
Abstract
Purpose The aim of this study was to prospectively quantify the level of agreement among a deep learning system, non-physician graders, and general ophthalmologists with different levels of clinical experience in detecting referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Methods Deep learning systems for diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy classification, with accuracy proven through internal and external validation, were established using 210,473 fundus photographs. Five trained non-physician graders and 47 general ophthalmologists from China were chosen randomly and included in the analysis. A test set of 300 fundus photographs was randomly identified from an independent dataset of 42,388 gradable images. The grading outcomes of five retinal and five glaucoma specialists were used as the reference standard, which was considered achieved when ≥50% of gradings were consistent among the included specialists. The area under the receiver operating characteristic curve of each group relative to the reference standard was used to compare agreement for referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Results The test set included 45 images (15.0%) with referable diabetic retinopathy, 46 (15.3%) with age-related macular degeneration, 46 (15.3%) with glaucomatous optic neuropathy, and 163 (54.3%) without these diseases. The areas under the receiver operating characteristic curve for non-physician graders, ophthalmologists with 3-5 years of clinical practice, ophthalmologists with 5-10 years of clinical practice, ophthalmologists with >10 years of clinical practice, and the deep learning system for referable diabetic retinopathy were 0.984, 0.964, 0.965, 0.954, and 0.990, respectively (p = 0.415).
The results for referable age-related macular degeneration were 0.912, 0.933, 0.946, 0.958, and 0.945, respectively (p = 0.145), and those for referable glaucomatous optic neuropathy were 0.675, 0.862, 0.894, 0.976, and 0.994, respectively (p < 0.001). Conclusion The findings of this study suggest that the accuracy of this deep learning system is comparable to that of trained non-physician graders and general ophthalmologists for referable diabetic retinopathy and age-related macular degeneration, but the deep learning system performs better than trained non-physician graders in detecting referable glaucomatous optic neuropathy.
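Two pieces of the evaluation logic above have compact definitions: the reference standard is a majority vote of the specialist panel, and agreement is measured as the area under the ROC curve, which equals the probability that a random positive case outscores a random negative one. A minimal sketch, with a hypothetical 0/1 vote encoding (real studies use dedicated statistics packages):

```python
def majority_reference(votes):
    """Reference standard from panel gradings: positive when >=50% of
    the specialists' 0/1 votes are positive (hypothetical encoding)."""
    return 1 if 2 * sum(votes) >= len(votes) else 0

def auc(scores, labels):
    """Area under the ROC curve via its Mann-Whitney interpretation:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A grader whose scores perfectly rank positives above negatives has AUC 1.0:
print(auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0]))  # -> 1.0
```

This rank-based formulation is why AUC works equally well for continuous model outputs and for ordinal human gradings.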
Collapse
Affiliation(s)
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xinxing Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, United States
| | - Jian Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xing Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Robert Chang
- Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, CA, United States
| | - Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- *Correspondence: Mingguang He,
| |
Collapse
|
294
|
Alwakid G, Gouda W, Humayun M, Jhanjhi NZ. Enhancing diabetic retinopathy classification using deep learning. Digit Health 2023; 9:20552076231203676. [PMID: 37766903 PMCID: PMC10521302 DOI: 10.1177/20552076231203676] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/08/2023] [Indexed: 09/29/2023] Open
Abstract
Prolonged hyperglycemia can cause diabetic retinopathy (DR), a major contributor to blindness. Numerous cases of DR could be avoided if it were identified and addressed promptly. In recent years, many deep learning (DL)-based algorithms have been proposed to facilitate its detection. Using a DL model encompassing four scenarios, DR and its stages were identified in this study from retinal scans in the "Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 Blindness Detection" dataset. Augmentation strategies then produced a comprehensive dataset with consistent hyperparameters across all test cases. A convolutional neural network model was used for the subsequent classification, and different enhancement methods were applied to raise visual quality. Employing CLAHE and ESRGAN techniques for image enhancement, the proposed approach detected DR with a best accuracy of 97.83%, a top-2 accuracy of 99.31%, and a top-3 accuracy of 99.88% across all 5 severity stages of APTOS 2019. In addition, evaluation metrics (precision, recall, and F1-score) were computed on APTOS 2019 to analyze the efficacy of the suggested model. The proposed approach also proved more efficient at DR localization than both state-of-the-art technology and conventional DL.
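The top-2 and top-3 accuracies quoted above have a simple definition: the true severity stage merely has to appear among the model's k most probable classes. A minimal illustration with made-up probabilities (the paper's actual CNN, CLAHE, and ESRGAN pipeline is not reproduced here):

```python
def top_k_accuracy(prob_rows, labels, k):
    """Fraction of samples whose true class index is among the k classes
    with the highest predicted probability."""
    hits = 0
    for probs, y in zip(prob_rows, labels):
        top_k = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
        hits += y in top_k
    return hits / len(labels)

# Two samples over the 5 DR severity stages (hypothetical probabilities):
probs = [[0.05, 0.10, 0.60, 0.20, 0.05],
         [0.40, 0.35, 0.10, 0.10, 0.05]]
print(top_k_accuracy(probs, [2, 1], 1))  # top-1: only the first is right -> 0.5
print(top_k_accuracy(probs, [2, 1], 2))  # top-2: both true stages covered -> 1.0
```

This is why top-2/top-3 figures are always at least as high as plain accuracy, as in the 97.83% vs. 99.31% vs. 99.88% sequence reported above.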
Collapse
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah, Al Jouf, Saudi Arabia
| | - Walaa Gouda
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo, Egypt
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah, Al Jouf, Saudi Arabia
| | - NZ Jhanjhi
- School of Computer Science and Engineering (SCE), Taylor's University, Subang Jaya, Malaysia
| |
Collapse
|
295
|
Zhang Z, Wang Y, Zhang H, Samusak A, Rao H, Xiao C, Abula M, Cao Q, Dai Q. Artificial intelligence-assisted diagnosis of ocular surface diseases. Front Cell Dev Biol 2023; 11:1133680. [PMID: 36875760 PMCID: PMC9981656 DOI: 10.3389/fcell.2023.1133680] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 02/08/2023] [Indexed: 02/19/2023] Open
Abstract
With the rapid development of computer technology, the application of artificial intelligence (AI) in ophthalmology research has gained prominence in modern medicine. AI-related research in ophthalmology previously focused on the screening and diagnosis of fundus diseases, particularly diabetic retinopathy, age-related macular degeneration, and glaucoma. Since fundus images are relatively fixed, their standards are easy to unify. AI research related to ocular surface diseases has also increased. The main challenge in research on ocular surface diseases is that the images involved are complex and span many modalities. Therefore, this review summarizes current AI research and technologies used to diagnose ocular surface diseases such as pterygium, keratoconus, infectious keratitis, and dry eye, in order to identify mature AI models suitable for research on ocular surface diseases and potential algorithms that may be used in the future.
Collapse
Affiliation(s)
- Zuhui Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Ying Wang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Hongzhen Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Arzigul Samusak
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Huimin Rao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Chun Xiao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Muhetaer Abula
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
| | - Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
| | - Qi Dai
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
| |
Collapse
|
296
|
Boissin C. Clinical decision-support for acute burn referral and triage at specialized centres - Contribution from routine and digital health tools. Glob Health Action 2022; 15:2067389. [PMID: 35762795 PMCID: PMC9246103 DOI: 10.1080/16549716.2022.2067389] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
BACKGROUND Specialized care is crucial for severe burn injuries, whereas minor burns should be handled at the point of care. Misdiagnosis is common, which overburdens the system and leaves others untreated due to resource shortages. OBJECTIVES The overarching aim was to evaluate four decision-support tools for the diagnosis, referral, and triage of acute burn injuries in South Africa and Sweden: referral criteria, mortality prediction scores, image-based remote consultation, and automated diagnosis. METHODS Study I retrospectively assessed adherence to referral criteria for 1165 patients admitted to the paediatric burns centre of the Western Cape of South Africa. Study II assessed mortality prediction for 372 patients admitted to the adult burns centre by evaluating an existing score (ABSI) and by logistic regression. In Study III, an online survey was used to assess the diagnostic accuracy of burn experts' image-based estimations using their smartphones or tablets. In Study IV, two deep-learning algorithms were developed using 1105 acute burn images in order to identify the burn and to classify burn depth. RESULTS Adherence to referral criteria was 93.4%, and the age and severity criteria were associated with patient care. In adults, the ABSI score was a good predictor of mortality, which affected a fifth of the patients and was associated with gender, burn size, and referral status. Experts were able to diagnose burn size and burn depth using handheld devices. Finally, both a wound-identifier and a depth-classifier algorithm could be developed with relatively high accuracy. CONCLUSIONS Altogether, the findings inform the use of four tools along the care trajectory of patients with acute burns, assisting with diagnosis, referral, and triage from point of care to burns centres. This will help reduce inequities by improving access to the most appropriate care for patients.
Collapse
Affiliation(s)
- Constance Boissin
- Department of Global Public Health, Karolinska Institutet, Stockholm, Sweden
| |
Collapse
|
297
|
The Need for Artificial Intelligence Based Risk Factor Analysis for Age-Related Macular Degeneration: A Review. Diagnostics (Basel) 2022; 13:diagnostics13010130. [PMID: 36611422 PMCID: PMC9818762 DOI: 10.3390/diagnostics13010130] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 12/16/2022] [Accepted: 12/22/2022] [Indexed: 01/04/2023] Open
Abstract
In epidemiology, a risk factor is a variable associated with increased disease risk. Understanding the role of risk factors is significant for developing strategies to improve global health. There is strong evidence that risk factors like smoking, alcohol consumption, previous cataract surgery, age, high-density lipoprotein (HDL) cholesterol, BMI, female gender, and focal hyperpigmentation are independently associated with age-related macular degeneration (AMD). Currently, statistical techniques such as logistic regression and multivariable logistic regression are used in the literature to identify AMD risk factors from numerical/categorical data. However, artificial intelligence (AI) techniques have not yet been used in the literature to identify risk factors for AMD. AI-based tools, by contrast, can anticipate when a person is at risk of developing chronic diseases like cancer, dementia, and asthma, supporting personalized care. AI-based techniques can employ numerical/categorical and/or image data, enabling multimodal data analysis, which motivates the use of AI-based tools for risk-factor analysis in ophthalmology. This review summarizes the statistical techniques used to identify various risk factors and the greater benefits that AI techniques provide for AMD-related disease prediction. Additional studies are required to review different techniques for risk-factor identification in other ophthalmic diseases, such as glaucoma, diabetic macular edema, retinopathy of prematurity, cataract, and diabetic retinopathy.
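The statistical baseline this review contrasts with AI — logistic regression over numerical/categorical risk factors — can be sketched with plain gradient descent. The feature coding, data, and function name here are hypothetical illustrations, not the review's material:

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=3000):
    """Minimal gradient-descent logistic regression. xs: feature vectors of
    0/1-coded risk factors (e.g. smoking, prior cataract surgery); ys: 0/1
    disease status. Returns (weights, bias); exp(weight) estimates the odds
    ratio associated with each risk factor."""
    n, d = len(xs), len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for x, y in zip(xs, ys):
            z = sum(wj * xj for wj, xj in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of disease
            for j in range(d):
                gw[j] += (p - y) * x[j]       # gradient of the log-loss
            gb += p - y
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Exposure present in 2 of 3 cases but 1 of 3 controls -> positive weight:
w, b = fit_logistic([[1], [1], [1], [0], [0], [0]], [1, 1, 0, 1, 0, 0])
print(w[0] > 0)  # -> True
```

The multimodal AI approaches the review advocates generalize this by ingesting images alongside such tabular covariates, which a single logistic model cannot do directly.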
Collapse
|
298
|
Anton N, Doroftei B, Curteanu S, Catãlin L, Ilie OD, Târcoveanu F, Bogdănici CM. Comprehensive Review on the Use of Artificial Intelligence in Ophthalmology and Future Research Directions. Diagnostics (Basel) 2022; 13:100. [PMID: 36611392 PMCID: PMC9818832 DOI: 10.3390/diagnostics13010100] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 12/12/2022] [Accepted: 12/26/2022] [Indexed: 12/31/2022] Open
Abstract
BACKGROUND Having several applications in medicine, and in ophthalmology in particular, artificial intelligence (AI) tools have been used to detect visual function deficits, thus playing a key role in diagnosing eye diseases and in predicting the evolution of these common and disabling diseases. AI tools, e.g., artificial neural networks (ANNs), are progressively involved in the detection and customized control of ophthalmic diseases. This review analyzes studies on the efficiency of AI in medicine and especially in ophthalmology. MATERIALS AND METHODS We conducted a comprehensive review to collect all accounts published between 2015 and 2022 that refer to these applications of AI in medicine and especially in ophthalmology. Neural networks have a major role in establishing the need to initiate preliminary anti-glaucoma therapy to stop the advance of the disease. RESULTS Different surveys in the literature show the remarkable benefit of these AI tools in ophthalmology in evaluating the visual field, optic nerve, and retinal nerve fiber layer, thus ensuring higher precision in detecting glaucoma progression and retinal changes in diabetes. We identified 1762 publications on artificial intelligence in ophthalmology, comprising review and research articles (301 PubMed, 144 Scopus, 445 Web of Science, 872 Science Direct). Of these, after applying the inclusion and exclusion criteria, we analyzed 70 articles and review papers: diabetic retinopathy (N = 24), glaucoma (N = 24), DMLV (N = 15), and other pathologies (N = 7). CONCLUSION In medicine, AI tools are used in surgery, radiology, gynecology, oncology, etc., for making a diagnosis, predicting the evolution of a disease, and assessing prognosis in patients with oncological pathologies.
In ophthalmology, AI potentially increases patients' access to screening/clinical diagnosis and decreases healthcare costs, particularly where disease risk is high or communities face financial shortages. AI/DL (deep learning) algorithms using both OCT and FO images will change image-analysis techniques and methodologies, and optimizing these combined technologies will accelerate progress in this area.
Collapse
Affiliation(s)
- Nicoleta Anton
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
| | - Bogdan Doroftei
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
| | - Silvia Curteanu
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
| | - Lisa Catãlin
- Department of Chemical Engineering, Cristofor Simionescu Faculty of Chemical Engineering and Environmental Protection, Gheorghe Asachi Technical University, Prof.dr.doc Dimitrie Mangeron Avenue, No 67, 700050 Iasi, Romania
| | - Ovidiu-Dumitru Ilie
- Department of Biology, Faculty of Biology, “Alexandru Ioan Cuza” University, Carol I Avenue, No 20A, 700505 Iasi, Romania
| | - Filip Târcoveanu
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
| | - Camelia Margareta Bogdănici
- Faculty of Medicine, University of Medicine and Pharmacy “Grigore T. Popa”, University Street, No 16, 700115 Iasi, Romania
| |
Collapse
|
299
|
Gibertoni G, Borghi G, Rovati L. Vision-Based Eye Image Classification for Ophthalmic Measurement Systems. Sensors (Basel) 2022; 23:386. [PMID: 36616983 PMCID: PMC9823474 DOI: 10.3390/s23010386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 12/20/2022] [Accepted: 12/23/2022] [Indexed: 06/17/2023]
Abstract
The accuracy and overall performance of ophthalmic instrumentation involving analysis of eye images can be negatively affected by invalid or incorrect frames acquired during everyday measurements of unaware or non-collaborative patients by non-technical operators. Therefore, in this paper, we investigate and compare several vision-based classification algorithms from different fields, i.e., machine learning, deep learning, and expert systems, in order to improve the performance of an ophthalmic instrument designed for pupillary light reflex measurement. To test the implemented solutions, we collected and publicly released PopEYE, one of the first datasets of its kind, consisting of 15k eye images of 22 different subjects acquired with the aforementioned specialized ophthalmic device. Finally, we discuss the experimental results in terms of eye-status classification accuracy and computational load, since the proposed solution is designed for embedded boards, which have limited computational power and memory.
Collapse
Affiliation(s)
- Giovanni Gibertoni
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy
| | - Guido Borghi
- Department of Computer Science and Engineering, University of Bologna, 40126 Bologna, Italy
| | - Luigi Rovati
- Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, 41125 Modena, Italy
| |
Collapse
|
300
|
Bhimavarapu U, Battineni G. Deep Learning for the Detection and Classification of Diabetic Retinopathy with an Improved Activation Function. Healthcare (Basel) 2022; 11:healthcare11010097. [PMID: 36611557 PMCID: PMC9819317 DOI: 10.3390/healthcare11010097] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 12/23/2022] [Accepted: 12/26/2022] [Indexed: 12/30/2022] Open
Abstract
Diabetic retinopathy (DR) is an eye disease triggered by diabetes that may lead to blindness. To prevent diabetic patients from going blind, early diagnosis and accurate detection of DR are vital. Deep learning models, such as convolutional neural networks (CNNs), are widely used in DR detection to classify blood-vessel pixels against the remaining pixels. In this paper, an improved activation function is proposed for diagnosing DR from fundus images that automatically reduces loss and processing time. The DIARETDB0, DRIVE, CHASE, and Kaggle datasets were used to train and test the enhanced activation function in different CNN models. The ResNet-152 model achieved the highest accuracy, 99.41%, on the Kaggle dataset. This enhanced activation function is suitable for DR diagnosis from retinal fundus images.
Collapse
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaramm 522302, Andhra Pradesh, India
| | - Gopi Battineni
- Medical Informatics Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Correspondence: ; Tel.: +39-333-172-8206
| |
Collapse
|