1. Lepetit-Aimon G, Playout C, Boucher MC, Duval R, Brent MH, Cheriet F. MAPLES-DR: MESSIDOR Anatomical and Pathological Labels for Explainable Screening of Diabetic Retinopathy. Sci Data 2024; 11:914. PMID: 39179588; PMCID: PMC11343847; DOI: 10.1038/s41597-024-03739-6.
Abstract
Reliable automatic diagnosis of Diabetic Retinopathy (DR) and Macular Edema (ME) is an invaluable asset in improving the rate of monitored patients among at-risk populations and in enabling earlier treatments before the pathology progresses and threatens vision. However, the explainability of screening models is still an open question, and specifically designed datasets are required to support the research. We present MAPLES-DR (MESSIDOR Anatomical and Pathological Labels for Explainable Screening of Diabetic Retinopathy), which contains, for 198 images of the MESSIDOR public fundus dataset, new diagnoses for DR and ME as well as new pixel-wise segmentation maps for 10 anatomical and pathological biomarkers related to DR. This paper documents the design choices and the annotation procedure that produced MAPLES-DR, discusses the interobserver variability and the overall quality of the annotations, and provides guidelines on using the dataset in a machine learning context.
Affiliation(s)
- Gabriel Lepetit-Aimon
  - Department of Computer and Software Engineering, Polytechnique Montréal, Montréal, QC, Canada
- Clément Playout
  - Department of Ophthalmology, Université de Montréal, Montréal, Canada
  - Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Marie Carole Boucher
  - Department of Ophthalmology, Université de Montréal, Montréal, Canada
  - Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Renaud Duval
  - Department of Ophthalmology, Université de Montréal, Montréal, Canada
  - Centre Universitaire d'Ophtalmologie, Hôpital Maisonneuve-Rosemont, Montréal, Canada
- Michael H Brent
  - Department of Ophthalmology and Vision Science, University of Toronto, Toronto, Canada
- Farida Cheriet
  - Department of Computer and Software Engineering, Polytechnique Montréal, Montréal, QC, Canada
2. Yun C, Tang F, Gao Z, Wang W, Bai F, Miller JD, Liu H, Lee Y, Lou Q. Construction of Risk Prediction Model of Type 2 Diabetic Kidney Disease Based on Deep Learning. Diabetes Metab J 2024; 48:771-779. PMID: 38685670; PMCID: PMC11307115; DOI: 10.4093/dmj.2023.0033.
Abstract
BACKGROUND This study aimed to develop a diabetic kidney disease (DKD) prediction model using a long short-term memory (LSTM) neural network and to evaluate its performance using accuracy, precision, recall, and the area under the receiver operating characteristic (ROC) curve (AUC). METHODS The study identified DKD risk factors through a literature review and a physician focus group, and collected 7 years of data from 6,040 type 2 diabetes mellitus patients based on those risk factors. PyTorch was used to build the LSTM neural network, with 70% of the data used for training and the remaining 30% for testing. Three models were established to examine the impact of glycosylated hemoglobin (HbA1c), systolic blood pressure (SBP), and pulse pressure (PP) variabilities on the model's performance. RESULTS The developed model achieved an accuracy of 83% and an AUC of 0.83. When the HbA1c, SBP, or PP variability risk factor was removed one at a time, the accuracy of each reduced model was significantly lower than that of the optimal model, at 78% (P<0.001), 79% (P<0.001), and 81% (P<0.001), respectively. The AUC was also significantly lower for each model, with values of 0.72 (P<0.001), 0.75 (P<0.001), and 0.77 (P<0.05). CONCLUSION The DKD risk prediction model built with an LSTM neural network demonstrated high accuracy and a high AUC. Adding HbA1c, SBP, and PP variabilities to the model as features greatly improved its performance.
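The accuracy and AUC figures reported in abstracts like this one can be reproduced directly from a model's labels and scores; a minimal pure-Python sketch (the labels and scores below are illustrative, not data from the study):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, scores):
    """AUC as the probability that a randomly chosen positive example
    is scored higher than a randomly chosen negative one (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: 1 = developed DKD, 0 = did not
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(accuracy(y_true, y_pred))  # 4 of 6 correct
print(roc_auc(y_true, scores))
```

The rank-based AUC above is the standard Mann-Whitney formulation and agrees with the trapezoidal ROC integral.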
Affiliation(s)
- Chuan Yun
  - Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fangli Tang
  - International School of Nursing, Hainan Medical University, Haikou, China
- Zhenxiu Gao
  - School of International Education, Nanjing Medical University, Nanjing, China
- Wenjun Wang
  - Department of Endocrinology, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Fang Bai
  - Nursing Department 531, The First Affiliated Hospital of Hainan Medical University, Haikou, China
- Joshua D. Miller
  - Department of Medicine, Division of Endocrinology & Metabolism, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, USA
- Huanhuan Liu
  - Department of Endocrinology, Hainan General Hospital, Haikou, China
- Qingqing Lou
  - The First Affiliated Hospital of Hainan Medical University, Hainan Clinical Research Center for Metabolic Disease, Haikou, China
3. Bhati A, Gour N, Khanna P, Ojha A, Werghi N. An interpretable dual attention network for diabetic retinopathy grading: IDANet. Artif Intell Med 2024; 149:102782. PMID: 38462283; DOI: 10.1016/j.artmed.2024.102782.
Abstract
Diabetic retinopathy (DR) is the most prevalent cause of visual impairment in adults worldwide. Typically, patients with DR do not show symptoms until later stages, by which time it may be too late to receive effective treatment. DR grading is challenging because of the small size of lesions and the variation in lesion patterns. The key to fine-grained DR grading is to discover discriminative elements such as cotton-wool spots, hard exudates, hemorrhages, and microaneurysms. Although deep learning models like convolutional neural networks (CNNs) seem ideal for the automated detection of abnormalities in advanced clinical imaging, small lesions are very hard to distinguish with traditional networks. This work proposes a bi-directional spatial and channel-wise parallel attention-based network to learn discriminative features for diabetic retinopathy grading. The proposed attention block, plugged into a backbone network, helps extract features specific to fine-grained DR grading. This scheme boosts classification performance along with the detection of small lesion parts. Extensive experiments are performed on four widely used benchmark datasets for DR grading, and performance is evaluated on different quality metrics. For model interpretability, activation maps are generated using the LIME method to visualize the predicted lesion parts. In comparison with state-of-the-art methods, the proposed IDANet exhibits better performance for DR grading and lesion detection.
Affiliation(s)
- Amit Bhati
  - PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Neha Gour
  - Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
- Pritee Khanna
  - PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Aparajita Ojha
  - PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Naoufel Werghi
  - Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, United Arab Emirates
4. Karlin J, Gai L, LaPierre N, Danesh K, Farajzadeh J, Palileo B, Taraszka K, Zheng J, Wang W, Eskin E, Rootman D. Ensemble neural network model for detecting thyroid eye disease using external photographs. Br J Ophthalmol 2023; 107:1722-1729. PMID: 36126104; DOI: 10.1136/bjo-2022-321833.
Abstract
PURPOSE To describe an artificial intelligence platform that detects thyroid eye disease (TED). DESIGN Development of a deep learning model. METHODS 1944 photographs from a clinical database were used to train a deep learning model. 344 additional images ('test set') were used to calculate performance metrics. Receiver operating characteristic, precision-recall curves and heatmaps were generated. From the test set, 50 images were randomly selected ('survey set') and used to compare model performance with ophthalmologist performance. 222 images obtained from a separate clinical database were used to assess model recall and to quantitate model performance with respect to disease stage and grade. RESULTS The model achieved test set accuracy of 89.2%, specificity 86.9%, recall 93.4%, precision 79.7% and an F1 score of 86.0%. Heatmaps demonstrated that the model identified pixels corresponding to clinical features of TED. On the survey set, the ensemble model achieved accuracy, specificity, recall, precision and F1 score of 86%, 84%, 89%, 77% and 82%, respectively. 27 ophthalmologists achieved mean performance of 75%, 82%, 63%, 72% and 66%, respectively. On the second test set, the model achieved recall of 91.9%, with higher recall for moderate to severe (98.2%, n=55) and active disease (98.3%, n=60), as compared with mild (86.8%, n=68) or stable disease (85.7%, n=63). CONCLUSIONS The deep learning classifier is a novel approach to identify TED and is a first step in the development of tools to improve diagnostic accuracy and lower barriers to specialist evaluation.
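The abstract above reports ensemble results scored by accuracy, precision, recall, and F1. One common ensembling scheme, assumed here purely for illustration since the abstract does not state the combination rule, is to average per-model probabilities before thresholding, then score the result:

```python
def ensemble_predict(prob_lists, threshold=0.5):
    """Average each model's predicted probabilities per image, then threshold."""
    n_models = len(prob_lists)
    avg = [sum(ps) / n_models for ps in zip(*prob_lists)]
    return [1 if p >= threshold else 0 for p in avg]

def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative: three models' TED probabilities for four photographs
model_probs = [
    [0.9, 0.4, 0.2, 0.6],
    [0.8, 0.6, 0.1, 0.7],
    [0.7, 0.2, 0.3, 0.8],
]
y_pred = ensemble_predict(model_probs)
```

Averaging probabilities (soft voting) generally smooths out individual models' errors better than majority voting on hard labels.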
Affiliation(s)
- Justin Karlin
  - Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Lisa Gai
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Nathan LaPierre
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Kayla Danesh
  - Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Justin Farajzadeh
  - Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Bea Palileo
  - Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
- Kodi Taraszka
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Jie Zheng
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Wei Wang
  - Department of Computer Science, University of California, Los Angeles, CA, USA
- Eleazar Eskin
  - Department of Computer Science, University of California, Los Angeles, CA, USA
  - Department of Human Genetics, University of California, Los Angeles, CA, USA
- Daniel Rootman
  - Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
5. Skuban-Eiseler T, Orzechowski M, Denkinger M, Kocar TD, Leinert C, Steger F. Artificial Intelligence-Based Clinical Decision Support Systems in Geriatrics: An Ethical Analysis. J Am Med Dir Assoc 2023; 24:1271-1276.e4. PMID: 37453451; DOI: 10.1016/j.jamda.2023.06.008.
Abstract
OBJECTIVES To provide an ethical analysis of the implications of using artificial intelligence-supported clinical decision support systems (AI-CDSS) in geriatrics. DESIGN Ethical analysis of the normative arguments regarding the use of AI-CDSS in geriatrics, using a principle-based ethical framework. SETTING AND PARTICIPANTS Normative arguments identified in 29 articles on AI-CDSS in geriatrics. METHODS Our analysis is based on a literature search conducted to identify the ethical arguments currently discussed regarding AI-CDSS. The relevant articles were subjected to a detailed qualitative analysis of the ethical considerations mentioned therein. We then discussed the identified arguments within the frame of the 4 principles of medical ethics according to Beauchamp and Childress and with respect to the needs of frail older adults. RESULTS We found a total of 5089 articles; 29 met the inclusion criteria and were subjected to a detailed qualitative analysis. We could not identify any systematic analysis of the ethical implications of AI-CDSS in geriatrics. The ethical considerations are unsystematic and scattered, and the existing literature has a predominantly technical focus emphasizing the technology's utility. In an extensive ethical analysis, we systematically discuss the ethical implications of the use of AI-CDSS in geriatrics. CONCLUSIONS AND IMPLICATIONS AI-CDSS in geriatrics can be a great asset, especially when dealing with patients with cognitive disorders; however, from an ethical perspective, we see the need for further research. By using AI-CDSS, older patients' values and beliefs might be overlooked, and the quality of the doctor-patient relationship might be altered, endangering compliance with the 4 ethical principles of Beauchamp and Childress.
Affiliation(s)
- Tobias Skuban-Eiseler
  - Institute of the History, Philosophy and Ethics of Medicine, Faculty of Medicine, Ulm University, Ulm, Germany
  - kbo-Isar-Amper-Klinikum Region München, München-Haar, Germany
- Marcin Orzechowski
  - Institute of the History, Philosophy and Ethics of Medicine, Faculty of Medicine, Ulm University, Ulm, Germany
- Michael Denkinger
  - Institute of Geriatric Research, Ulm University Medical Center, Ulm, Germany
  - AGAPLESION Bethesda Clinic Ulm, Ulm, Germany
- Thomas Derya Kocar
  - Institute of Geriatric Research, Ulm University Medical Center, Ulm, Germany
  - AGAPLESION Bethesda Clinic Ulm, Ulm, Germany
- Christoph Leinert
  - Institute of Geriatric Research, Ulm University Medical Center, Ulm, Germany
  - AGAPLESION Bethesda Clinic Ulm, Ulm, Germany
- Florian Steger
  - Institute of the History, Philosophy and Ethics of Medicine, Faculty of Medicine, Ulm University, Ulm, Germany
6. He S, Bulloch G, Zhang L, Meng W, Shi D, He M. Comparing Common Retinal Vessel Caliber Measurement Software with an Automatic Deep Learning System. Curr Eye Res 2023; 48:843-849. PMID: 37246501; DOI: 10.1080/02713683.2023.2212881.
Abstract
PURPOSE To compare the Retina-based Microvascular Health Assessment System (RMHAS) with Integrative Vessel Analysis (IVAN) for retinal vessel caliber measurement. METHODS Eligible fundus photographs from the Lingtou Eye Cohort Study were obtained alongside their corresponding participant data. Vascular diameter was measured automatically using IVAN and RMHAS, and inter-software variation was assessed by intra-class correlation coefficients (ICCs) with 95% confidence intervals (CIs). Scatterplots and Bland-Altman plots assessed agreement between the programs, and Pearson's correlation test assessed the strength of associations between systemic variables and retinal calibers. An algorithm was proposed to convert measurements between the two software packages for interchangeability. RESULTS ICCs between IVAN and RMHAS were moderate for the central retinal arteriolar equivalent (CRAE) and the arteriole-to-venule ratio (AVR) (ICC; 95% CI: 0.62; 0.60 to 0.63 and 0.42; 0.40 to 0.44, respectively) and excellent for the central retinal venular equivalent (CRVE) (0.76; 0.75 to 0.77). Comparing retinal vascular caliber measurements between tools, the mean differences (MD; 95% CI) in CRAE, CRVE, and AVR were 22.34 (-7.29 to 51.97 µm), -7.01 (-37.68 to 23.67 µm), and 0.12 (-0.02 to 0.26), respectively. The correlation of systemic parameters with CRAE/CRVE was poor, and the correlations of CRAE with age, sex, and systolic blood pressure, and of CRVE with age, sex, and serum glucose, differed significantly between IVAN and RMHAS (p < 0.05). CONCLUSIONS CRAE and AVR correlated moderately between the retinal measurement software systems, while CRVE correlated well. Further studies confirming this agreement and interchangeability in large-scale datasets are needed before the software packages can be deemed comparable in clinical practice.
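The Bland-Altman agreement statistics used above (mean difference and the 95% limits of agreement, mean ± 1.96 × SD of the paired differences) are straightforward to compute; a minimal sketch with made-up caliber values, not data from the study:

```python
import math

def bland_altman(a, b):
    """Mean difference between paired measurements and the
    95% limits of agreement (mean ± 1.96 * sample SD of differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical CRAE measurements (µm) of the same eyes from two tools
ivan = [150.0, 142.0, 160.0, 155.0]
rmhas = [128.0, 120.0, 137.0, 134.0]
mean_diff, lower, upper = bland_altman(ivan, rmhas)
```

A wide interval between `lower` and `upper` relative to the clinically tolerable error is what signals that two tools are not interchangeable, even when their ICC looks acceptable.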
Affiliation(s)
- Shuang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Gabriella Bulloch
  - University of Melbourne, Melbourne, Australia
  - Centre for Eye Research Australia, East Melbourne, Victoria, Australia
- Liangxin Zhang
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Wei Meng
  - Eyetelligence Ltd, Melbourne, Australia
- Danli Shi
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  - University of Melbourne, Melbourne, Australia
  - Centre for Eye Research Australia, East Melbourne, Victoria, Australia
  - Eyetelligence Ltd, Melbourne, Australia
7. Wroblewski JJ, Sanchez-Buenfil E, Inciarte M, Berdia J, Blake L, Wroblewski S, Patti A, Suter G, Sanborn GE. Diabetic Retinopathy Screening Using Smartphone-Based Fundus Photography and Deep-Learning Artificial Intelligence in the Yucatan Peninsula: A Field Study. J Diabetes Sci Technol 2023:19322968231194644. PMID: 37641576; DOI: 10.1177/19322968231194644.
Abstract
BACKGROUND To compare the performance of the Medios (offline) and EyeArt (online) artificial intelligence (AI) algorithms for detecting diabetic retinopathy (DR) on images captured using fundus-on-smartphone photography in a remote outreach field setting. METHODS In June 2019, in the Yucatan Peninsula, 248 patients, many of whom had chronic visual impairment, were screened for DR using two portable Remidio fundus-on-phone cameras, and the 2130 images obtained were analyzed retrospectively by Medios and EyeArt. Screening performance metrics were also determined retrospectively, using masked image analysis combined with clinical examination results as the reference standard. RESULTS A total of 129 patients were determined to have some level of DR; 119 patients had no DR. Medios was capable of evaluating every patient, with a sensitivity (95% confidence interval [CI]) of 94% (88%-97%) and a specificity of 94% (88%-98%). Owing primarily to photographer error, EyeArt evaluated 156 patients, with a sensitivity of 94% (86%-98%) and a specificity of 86% (77%-93%). In a head-to-head comparison of 110 patients, the sensitivities of Medios and EyeArt were 99% (93%-100%) and 95% (87%-99%), respectively; the specificity for both was 88% (73%-97%). CONCLUSIONS The Medios and EyeArt AI algorithms demonstrated high sensitivity and specificity for detecting DR when applied in this real-world field setting. Both programs should be considered for remote, large-scale DR screening campaigns where immediate results are desirable and, in the case of EyeArt, where online access is possible.
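Sensitivity and specificity with 95% confidence intervals, as reported above, are proportions over the diseased and healthy groups respectively. A minimal sketch using the normal-approximation (Wald) interval; the study does not state which interval method it used, so that choice, like the counts below, is an assumption for illustration:

```python
import math

def proportion_ci(k, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion k/n, clipped to [0, 1]."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical screening counts (not the study's data):
tp, fn = 121, 8   # among diseased: sensitivity = tp / (tp + fn)
tn, fp = 112, 7   # among healthy:  specificity = tn / (tn + fp)
sensitivity = proportion_ci(tp, tp + fn)
specificity = proportion_ci(tn, tn + fp)
```

For small samples or proportions near 0 or 1, a Wilson or exact (Clopper-Pearson) interval is usually preferred over the Wald interval shown here.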
Affiliation(s)
- John J Wroblewski
  - Retina Care International, Hagerstown, MD, USA
  - Cumberland Valley Retina Consultants, Hagerstown, MD, USA
- Jay Berdia
  - Cumberland Valley Retina Consultants, Hagerstown, MD, USA
- Lewis Blake
  - Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, CO, USA
- Gretchen Suter
  - Cumberland Valley Retina Consultants, Hagerstown, MD, USA
- George E Sanborn
  - Department of Ophthalmology, Virginia Commonwealth University, Richmond, VA, USA
8. Berk A, Ozturan G, Delavari P, Maberley D, Yılmaz Ö, Oruc I. Learning from small data: Classifying sex from retinal images via deep learning. PLoS One 2023; 18:e0289211. PMID: 37535591; PMCID: PMC10399793; DOI: 10.1371/journal.pone.0289211.
Abstract
Deep learning (DL) techniques have seen tremendous interest in medical imaging, particularly the use of convolutional neural networks (CNNs) for the development of automated diagnostic tools. The ease of its non-invasive acquisition makes retinal fundus imaging particularly amenable to such automated approaches. Recent work on the analysis of fundus images using CNNs relies on access to massive datasets for training and validation, composed of hundreds of thousands of images. However, data residency and data privacy restrictions stymie the applicability of this approach in medical settings where patient confidentiality is a mandate. Here, we showcase results for the performance of DL on small datasets to classify patient sex from fundus images, a trait thought not to be present or quantifiable in fundus images until recently. Specifically, we fine-tune a ResNet-152 model whose last layer has been modified to a fully-connected layer for binary classification. We carried out several experiments to assess performance in the small-dataset context using one private (DOVS) and one public (ODIR) data source. Our models, developed using approximately 2500 fundus images, achieved test AUC scores of up to 0.72 (95% CI: [0.67, 0.77]). This corresponds to a mere 25% decrease in performance despite a nearly 1000-fold decrease in dataset size compared to prior results in the literature. Our results show that binary classification, even for a hard task such as sex categorization from retinal fundus images, is possible with very small datasets. Our domain adaptation results show that models trained on one distribution of images may generalize well to an independent external source, as in the case of models trained on DOVS and tested on ODIR. Our results also show that eliminating poor-quality images may hamper training of the CNN by reducing the already small dataset size even further. Nevertheless, using high-quality images may be an important factor, as evidenced by the superior generalizability of results in the domain adaptation experiments. Finally, our work shows that ensembling is an important tool for maximizing the performance of deep CNNs in the context of small development datasets.
Affiliation(s)
- Aaron Berk
  - Department of Mathematics & Statistics, McGill University, Montréal, Canada
- Gulcenur Ozturan
  - Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Parsa Delavari
  - Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- David Maberley
  - Department of Ophthalmology, University of Ottawa, Ottawa, Canada
- Özgür Yılmaz
  - Department of Mathematics, University of British Columbia, Vancouver, Canada
- Ipek Oruc
  - Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
9. Discriminating Healthy Optic Discs and Visible Optic Disc Drusen on Fundus Autofluorescence and Color Fundus Photography Using Deep Learning: A Pilot Study. J Clin Med 2023; 12:jcm12051951. PMID: 36902737; PMCID: PMC10003756; DOI: 10.3390/jcm12051951.
Abstract
The aim of this study was to use deep learning based on a deep convolutional neural network (DCNN) for automated image classification of healthy optic discs (OD) and visible optic disc drusen (ODD) on fundus autofluorescence (FAF) and color fundus photography (CFP). A total of 400 FAF and CFP images of patients with ODD and healthy controls were used. A pre-trained multi-layer DCNN was trained and validated independently on FAF and CFP images, and training and validation accuracy and cross-entropy were recorded. Both generated DCNN classifiers were tested with 40 FAF and CFP images (20 ODD and 20 controls). After 1000 training cycles, the training accuracy was 100%, and the validation accuracy was 92% (CFP) and 96% (FAF), respectively. The cross-entropy was 0.04 (CFP) and 0.15 (FAF). The sensitivity, specificity, and accuracy of the DCNN for classification of FAF images were all 100%. For the DCNN used to identify ODD on color fundus photographs, sensitivity was 85%, specificity 100%, and accuracy 92.5%. Differentiation between healthy controls and ODD on CFP and FAF images was possible with high specificity and sensitivity using a deep learning approach.
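The cross-entropy values reported above measure how much probability the network assigns to the correct class, averaged over images; a minimal sketch of the standard formula (the probabilities below are illustrative, not the study's):

```python
import math

def cross_entropy(probs_for_true_class):
    """Mean negative log-probability assigned to the correct class."""
    return -sum(math.log(p) for p in probs_for_true_class) / len(probs_for_true_class)

# A perfectly confident, correct classifier scores 0;
# assigning 0.5 to the true class contributes ln 2 per image.
ce = cross_entropy([0.96, 0.99, 0.92, 0.98])
```

Lower is better: a validation cross-entropy of 0.04, as reported for CFP, corresponds to assigning roughly 96% probability to the correct class on average.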
10. Amin MS, Ahn H. FabNet: A Features Agglomeration-Based Convolutional Neural Network for Multiscale Breast Cancer Histopathology Images Classification. Cancers (Basel) 2023; 15:cancers15041013. PMID: 36831359; PMCID: PMC9954749; DOI: 10.3390/cancers15041013.
Abstract
The definitive diagnosis of histology specimen images is largely based on the specialist's comprehensive experience; however, due to the fine-to-coarse visual appearance of such images, experts often disagree in their assessments. Sophisticated deep learning approaches can help automate the diagnostic process and reduce analysis time. More efficient and accurate automated systems can also increase diagnostic impartiality by reducing differences between operators. We propose a FabNet model that can learn the fine-to-coarse structural and textural features of multi-scale histopathological images by using an accretive network architecture that agglomerates hierarchical feature maps to achieve significant classification accuracy. We expand on a contemporary design by incorporating deep and close integration to finely combine features across layers. Our deep-layer accretive model structure combines the feature hierarchy in an iterative and hierarchical manner, yielding higher accuracy with fewer parameters. FabNet can identify malignant tumors from whole images and patches of histopathology images. We assessed the efficiency of our proposed model on standard cancer datasets, which included breast cancer as well as colon cancer histopathology images. Our model significantly outperforms existing state-of-the-art models with respect to accuracy, F1 score, precision, and sensitivity, with fewer parameters.
11. Chen D, Ran AR, Tan TF, Ramachandran R, Li F, Cheung CY, Yousefi S, Tham CCY, Ting DSW, Zhang X, Al-Aswad LA. Applications of Artificial Intelligence and Deep Learning in Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:80-93. PMID: 36706335; DOI: 10.1097/apo.0000000000000596.
Abstract
Diagnosis and detection of progression of glaucoma remain challenging. Artificial intelligence-based tools have the potential to improve and standardize the assessment of glaucoma, but developing these algorithms is difficult given the multimodal and variable nature of the diagnosis. Currently, most algorithms focus on a single imaging modality, specifically screening and diagnosis based on fundus photographs or optical coherence tomography images. Use of anterior segment optical coherence tomography and goniophotographs is limited. The majority of algorithms designed to predict disease progression are based on visual fields. No studies in our literature search assessed the use of artificial intelligence for treatment-response prediction, and no studies conducted prospective testing of their algorithms. Additional challenges to the development of artificial intelligence-based tools include the scarcity of data and a lack of consensus in diagnostic criteria. Although research on the use of artificial intelligence for glaucoma is promising, additional work is needed to develop clinically usable tools.
Affiliation(s)
- Dinah Chen
  - Department of Ophthalmology, NYU Langone Health, New York City, NY
  - Genentech Inc, South San Francisco, CA
- An Ran Ran
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Ting Fang Tan
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Center, Singapore
- Fei Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Carol Y Cheung
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Siamak Yousefi
  - Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, TN
- Clement C Y Tham
  - Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
  - Lam Kin Chung, Jet King-Shing Ho Glaucoma Treatment And Research Centre, The Chinese University of Hong Kong, Hong Kong, China
- Daniel S W Ting
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Center, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
- Xiulan Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
Collapse
|
12
|
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558 PMCID: PMC9038999 DOI: 10.1007/s10462-022-10185-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/04/2022] [Indexed: 02/02/2023]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in cases of diabetic retinopathy (DR), one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to grow in the coming years, the number of qualified ophthalmologists needs to increase in tandem to meet the demand for screening the growing number of diabetic patients, which makes it pertinent to automate the DR detection process. A computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review paper aims to summarize, classify, and analyze recent developments in automated DR detection from fundus images, from 2015 to date, particularly studies that deploy machine learning algorithms. Firstly, a comprehensive state-of-the-art review of the methods introduced for the detection of DR is presented, with a focus on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in DR detection. In particular, the models that deploy CNNs are further analyzed and classified according to important properties of their respective architectures.
A total of 150 research articles in these areas, published in the past 5 years, have been used in this review to provide a comprehensive overview of the latest developments in the detection of DR. Supplementary Information The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
Collapse
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019 People’s Republic of China
| | - Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
13
|
Zou X, Liu Y, Ji L. Review: Machine learning in precision pharmacotherapy of type 2 diabetes-A promising future or a glimpse of hope? Digit Health 2023; 9:20552076231203879. [PMID: 37786401 PMCID: PMC10541760 DOI: 10.1177/20552076231203879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 09/08/2023] [Indexed: 10/04/2023] Open
Abstract
Precision pharmacotherapy of diabetes requires judicious selection of the optimal therapeutic agent for individual patients. Artificial intelligence (AI), a swiftly expanding discipline, holds substantial potential to transform current practices in diabetes diagnosis and management. This manuscript provides a comprehensive review of contemporary research investigating drug responses in patient subgroups stratified via either supervised or unsupervised machine learning approaches. The prevalent algorithmic workflow for investigating drug responses using machine learning involves cohort selection, data processing, predictor selection, development and validation of machine learning methods, subgroup allocation, and subsequent analysis of drug response. Despite these promising features, current research does not yet provide sufficient evidence to implement machine learning algorithms in routine clinical practice, owing to a lack of simplicity, validation, or demonstrated efficacy. Nevertheless, we anticipate that the evolving evidence base will increasingly substantiate the role of machine learning in shaping precision pharmacotherapy for diabetes.
Collapse
Affiliation(s)
- Xiantong Zou
- Department of Endocrinology and Metabolism, Peking University People's Hospital, Beijing, 100044, China.
| | | | - Linong Ji
- Department of Endocrinology and Metabolism, Peking University People's Hospital, Beijing, 100044, China.
| |
Collapse
|
14
|
Ji Y, Liu S, Hong X, Lu Y, Wu X, Li K, Li K, Liu Y. Advances in artificial intelligence applications for ocular surface diseases diagnosis. Front Cell Dev Biol 2022; 10:1107689. [PMID: 36605721 PMCID: PMC9808405 DOI: 10.3389/fcell.2022.1107689] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 12/05/2022] [Indexed: 01/07/2023] Open
Abstract
In recent years, with the rapid development of computer technology, the continual optimization of learning algorithms and architectures, and the establishment of numerous large databases, artificial intelligence (AI) has seen unprecedented development and application in the field of ophthalmology. In the past, ophthalmological AI research mainly focused on posterior segment diseases, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, retinal vein occlusion, and glaucomatous optic neuropathy. Meanwhile, an increasing number of studies have employed AI to diagnose ocular surface diseases. In this review, we summarize the research progress of AI in the diagnosis of several ocular surface diseases, namely keratitis, keratoconus, dry eye, and pterygium. We discuss the limitations and challenges of AI in the diagnosis of ocular surface diseases, as well as prospects for the future.
Collapse
Affiliation(s)
- Yuke Ji
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Sha Liu
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Yi Lu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Xingyang Wu
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
| | - Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
| | - Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
| |
Collapse
|
15
|
Nilay A, Thool AR. A Review of Pathogenesis and Risk Factors of Diabetic Retinopathy With Emphasis on Screening Techniques. Cureus 2022; 14:e31062. [DOI: 10.7759/cureus.31062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 11/03/2022] [Indexed: 11/05/2022] Open
|
16
|
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human-like intelligence through computer programs. From healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons. These models can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images to train an AI system with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects in ophthalmology.
Collapse
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
| | - Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
| | - Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
| | - Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
17
|
Zhou X, Wang H, Feng C, Xu R, He Y, Li L, Tu C. Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges. Front Oncol 2022; 12:908873. [PMID: 35928860 PMCID: PMC9345628 DOI: 10.3389/fonc.2022.908873] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 06/15/2022] [Indexed: 12/12/2022] Open
Abstract
Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, identification of cartilage and spinal lesions, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, applications of deep learning have emerged in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors, based on both radiological (X-ray, CT, MRI, SPECT) and pathological images, indicating the potential of deep learning for diagnostic assistance and prognosis prediction in bone tumors. In this review, we first summarize the workflows of deep learning methods on medical images and the current applications of deep learning-based AI for diagnosis and prognosis prediction in bone tumors. We then extensively discuss the current challenges in implementing deep learning methods and future perspectives in this field.
Collapse
Affiliation(s)
- Xiaowen Zhou
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Xiangya School of Medicine, Central South University, Changsha, China
| | - Hua Wang
- Xiangya School of Medicine, Central South University, Changsha, China
| | - Chengyao Feng
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Ruilin Xu
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Yu He
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Lan Li
- Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, China
| | - Chao Tu
- Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
- Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
| |
Collapse
|
18
|
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:life12070973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural-network-based systems the green channel is most commonly used. However, no conclusion can be drawn from previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyze this: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
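The channel-selection step this survey revolves around can be sketched in a few lines. The following is an illustrative pure-Python sketch (the function name `split_channels` and the nested-list image representation are assumptions for illustration; real pipelines would use NumPy or OpenCV):

```python
# Minimal sketch: splitting an RGB fundus image, represented here as a
# nested list of (R, G, B) tuples, into its three per-channel intensity
# maps. Non-neural-network pipelines in the surveyed works would then
# feed only one map (most commonly the green channel) downstream.

def split_channels(image):
    """Return (red, green, blue) 2-D intensity maps from an RGB image."""
    red = [[px[0] for px in row] for row in image]
    green = [[px[1] for px in row] for row in image]
    blue = [[px[2] for px in row] for row in image]
    return red, green, blue
```

The green channel is favored in classical (non-neural) vessel segmentation because it typically shows the highest vessel-to-background contrast, which is exactly the premise the paper's experiments test.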
Collapse
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
| | - Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
| | - Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
| | - Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
| | - Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
| | - Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
| |
Collapse
|
19
|
Necessity of Local Modification for Deep Learning Algorithms to Predict Diabetic Retinopathy. Int J Environ Res Public Health 2022; 19:ijerph19031204. [PMID: 35162226 PMCID: PMC8834743 DOI: 10.3390/ijerph19031204] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 01/12/2022] [Accepted: 01/18/2022] [Indexed: 11/16/2022]
Abstract
Deep learning (DL) algorithms are used to diagnose diabetic retinopathy (DR). However, most of these algorithms have been trained using global data or data from patients of a single region. Using different model architectures (e.g., Inception-v3, ResNet101, and DenseNet121), we assessed whether such algorithms need local modification before being deployed for screening in other populations. We used the open-source dataset from the Kaggle Diabetic Retinopathy Detection competition to develop a model for the detection of DR severity, used a local dataset from Taipei City Hospital to verify the necessity of model localization, and validated the three aforementioned models with local datasets. The experimental results revealed that Inception-v3 outperformed ResNet101 and DenseNet121 on the foreign global dataset, whereas DenseNet121 outperformed Inception-v3 and ResNet101 on the local dataset. The quadratic weighted kappa score (κ) was used to evaluate model performance. All models had a 5-8% higher κ for the local dataset than for the foreign dataset. Confusion matrix analysis revealed that, compared with the local ophthalmologists' diagnoses, the severity predicted by the three models was overestimated. Thus, DL algorithms trained on global data must be locally modified to ensure that a well-trained model remains applicable for diagnosis in local clinical environments.
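The quadratic weighted kappa (κ) used above as the evaluation metric has a standard closed form. A minimal pure-Python sketch of that standard definition (not the paper's own code; libraries such as scikit-learn provide an equivalent via `cohen_kappa_score(..., weights="quadratic")`):

```python
# Quadratic weighted kappa:
#   kappa = 1 - sum(w_ij * O_ij) / sum(w_ij * E_ij),
#   w_ij = (i - j)^2 / (N - 1)^2
# where O is the observed rating matrix and E the matrix expected by
# chance from the marginal histograms of the two raters.

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    total = len(y_true)
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    hist_true = [sum(observed[i]) for i in range(n_classes)]
    hist_pred = [sum(observed[i][j] for i in range(n_classes))
                 for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
            expected = hist_true[i] * hist_pred[j] / total
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den
```

Because the penalty grows quadratically with the distance between grades, κ rewards models that misgrade DR severity by one step far more than models that misgrade by several steps, which is why it is the conventional metric for ordinal DR grading.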
Collapse
|
20
|
Gour N, Tanveer M, Khanna P. Challenges for ocular disease identification in the era of artificial intelligence. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06770-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
21
|
Shao A, Jin K, Li Y, Lou L, Zhou W, Ye J. Overview of global publications on machine learning in diabetic retinopathy from 2011 to 2021: Bibliometric analysis. Front Endocrinol (Lausanne) 2022; 13:1032144. [PMID: 36589855 PMCID: PMC9797582 DOI: 10.3389/fendo.2022.1032144] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 12/05/2022] [Indexed: 12/23/2022] Open
Abstract
PURPOSE To comprehensively analyze and discuss the publications on machine learning (ML) in diabetic retinopathy (DR) following a bibliometric approach. METHODS The global publications on ML in DR from 2011 to 2021 were retrieved from the Web of Science Core Collection (WoSCC) database. We analyzed the publication and citation trends over time and identified highly cited articles, prolific countries, institutions, journals, and the most relevant research domains. VOSviewer and Wordcloud were used to visualize the mainstream research topics and the evolution of subtopics in the form of co-occurrence maps of keywords. RESULTS By analyzing a total of 1147 relevant publications, this study found a rapid increase in the number of annual publications, with an average growth rate of 42.68%. India and China were the most productive countries, and IEEE Access was the most productive journal in this field. In addition, some notable common points were found in the highly cited articles. The keyword analysis showed that "diabetic retinopathy", "classification", and "fundus images" were the most frequent keywords for the entire period, as automatic diagnosis of DR was always the mainstream topic in this field. The evolution of keywords highlighted some breakthroughs, including "deep learning" and "optical coherence tomography", indicating advances in technologies and shifts in research attention. CONCLUSIONS As new research topics have emerged and evolved, studies are becoming increasingly diverse and extensive. Multiple modalities of medical data, new ML techniques and constantly optimized algorithms are the future trends in this multidisciplinary field.
Collapse
Affiliation(s)
- An Shao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
| | - Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
| | - Yunxiang Li
- College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
| | - Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
| | - Wuyuan Zhou
- Zhejiang Academy of Science and Technology Information, Hangzhou, China
| | - Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, China
| |
Collapse
|
22
|
Mao J, Deng X, Ye Y, Liu H, Fang Y, Zhang Z, Chen N, Sun M, Shen L. Morphological characteristics of retinal vessels in eyes with high myopia: Ultra-wide field images analyzed by artificial intelligence using a transfer learning system. Front Med (Lausanne) 2022; 9:956179. [PMID: 36874950 PMCID: PMC9982751 DOI: 10.3389/fmed.2022.956179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 12/27/2022] [Indexed: 02/18/2023] Open
Abstract
Purpose The purpose of this study is to investigate the retinal vascular morphological characteristics of high myopia patients of different severity. Methods In total, 317 eyes of high myopia patients and 104 eyes of healthy control subjects were included in this study. The severity of high myopia was classified into C0-C4 according to the Meta-analysis of Pathologic Myopia (META-PM) classification, and vascular morphological characteristics in ultra-wide-field imaging were analyzed using transfer learning methods and RU-net. Correlation with axial length (AL), best corrected visual acuity (BCVA), and age was analyzed. In addition, the vascular morphological characteristics of myopic choroidal neovascularization (mCNV) patients and their matched high myopia patients were compared. Results The RU-net and transfer learning system for blood vessel segmentation had an accuracy of 98.24%, a sensitivity of 71.42%, a specificity of 99.37%, a precision of 73.68% and an F1 score of 72.29. Compared with the healthy control group, the high myopia group had a smaller vessel angle (31.12 ± 2.27 vs. 32.33 ± 2.14), a smaller fractal dimension (Df) (1.383 ± 0.060 vs. 1.424 ± 0.038), a smaller vessel density (2.57 ± 0.96 vs. 3.92 ± 0.93) and fewer vascular branches (201.87 ± 75.92 vs. 271.31 ± 67.37), all P < 0.001. With increasing severity of myopic maculopathy, vessel angle, Df, vessel density and vascular branches significantly decreased (all P < 0.001). These characteristics correlated significantly with AL, BCVA and age. Patients with mCNV tended to have a larger vessel density (P < 0.001) and more vascular branches (P = 0.045). Conclusion The RU-net and transfer learning technology used in this study achieved an accuracy of 98.24% and thus performs well for quantitative analysis of vascular morphological characteristics in ultra-wide-field images.
As the severity of myopic maculopathy increased and the eyeball elongated, vessel angle, Df, vessel density and vascular branches decreased. Myopic CNV patients have a larger vessel density and more vascular branches.
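The fractal dimension (Df) reported above is conventionally estimated from a binary vessel mask by box counting; the sketch below illustrates that generic estimator (it is an assumption for illustration, not the authors' implementation, and the function name and fixed box sizes are hypothetical):

```python
import math

# Box-counting estimate of fractal dimension for a square binary mask
# (1 = vessel pixel). For each box size s, count boxes containing at
# least one vessel pixel; Df is the least-squares slope of
# log(count) versus log(1/s).

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    n = len(mask)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if any(mask[x][y]
                       for x in range(i, min(i + s, n))
                       for y in range(j, min(j + s, n))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    # Least-squares slope of the log-log relation estimates Df.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A space-filling mask yields Df near 2 and an isolated point yields Df near 0, so a sparser, less branched vascular tree (as in severe myopic maculopathy) produces a lower Df, consistent with the trend reported above.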
Collapse
Affiliation(s)
- Jianbo Mao
- Department of Ophthalmology, Center for Rehabilitation Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Xinyi Deng
- Department of Ophthalmology, Center for Rehabilitation Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Yu Ye
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
| | - Hui Liu
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
| | - Yuyan Fang
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Zhengxi Zhang
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Nuo Chen
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| | - Mingzhai Sun
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei, China
| | - Lijun Shen
- Department of Ophthalmology, Center for Rehabilitation Medicine, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, Zhejiang, China
- Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
| |
Collapse
|
23
|
López-Dorado A, Ortiz M, Satue M, Rodrigo MJ, Barea R, Sánchez-Morla EM, Cavaliere C, Rodríguez-Ascariz JM, Orduna-Hospital E, Boquete L, Garcia-Martin E. Early Diagnosis of Multiple Sclerosis Using Swept-Source Optical Coherence Tomography and Convolutional Neural Networks Trained with Data Augmentation. Sensors (Basel) 2021; 22:167. [PMID: 35009710 PMCID: PMC8747672 DOI: 10.3390/s22010167] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/21/2021] [Accepted: 12/22/2021] [Indexed: 05/07/2023]
Abstract
BACKGROUND The aim of this paper is to implement a system to facilitate the diagnosis of multiple sclerosis (MS) in its initial stages, using a convolutional neural network (CNN) to classify images captured with swept-source optical coherence tomography (SS-OCT). METHODS SS-OCT images from 48 control subjects and 48 recently diagnosed MS patients were used. These images show the thicknesses (45 × 60 points) of the following structures: complete retina, retinal nerve fiber layer, two ganglion cell layers (GCL+, GCL++) and choroid. The Cohen distance is used to identify the structures, and the regions within them, with the greatest discriminant capacity. The original database of OCT images is augmented by a deep convolutional generative adversarial network to expand the CNN's training set. RESULTS The retinal structures with the greatest discriminant capacity are the GCL++ (44.99% of image points), complete retina (26.71%) and GCL+ (22.93%). Thresholding these images and using them as inputs to a CNN comprising two convolution modules and one classification module achieves sensitivity = specificity = 1.0. CONCLUSIONS Feature pre-selection and the use of a convolutional neural network may be a promising, non-harmful, low-cost, easy-to-perform and effective means of assisting the early diagnosis of MS based on SS-OCT thickness data.
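The Cohen distance used above for discriminant-capacity screening is Cohen's d effect size. A generic sketch of scoring one thickness point across the two groups (assumed for illustration; the authors' exact implementation is not published in the abstract, and the function name is hypothetical):

```python
import math

# Cohen's d: standardized difference between two group means,
#   d = (mean_a - mean_b) / pooled_sd,
# with the pooled standard deviation weighted by each group's size.
# Points with large |d| discriminate well between the two groups.

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Scoring each of the 45 × 60 thickness points this way and keeping only points above a chosen |d| threshold is the kind of feature pre-selection the paper pairs with the CNN.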
Collapse
Affiliation(s)
- Almudena López-Dorado
- Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
| | - Miguel Ortiz
- Computer Vision, Imaging and Machine Intelligence Research Group, Interdisciplinary Center for Security, Reliability and Trust (SnT), University of Luxembourg, 4365 Luxembourg, Luxembourg
| | - María Satue
- Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
| | - María J. Rodrigo
- Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
| | - Rafael Barea
- Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
| | - Eva M. Sánchez-Morla
- Department of Psychiatry, Hospital 12 de Octubre Research Institute (i+12), 28041 Madrid, Spain
- Faculty of Medicine, Complutense University of Madrid, 28040 Madrid, Spain
- Biomedical Research Networking Centre in Mental Health (CIBERSAM), 28029 Madrid, Spain
| | - Carlo Cavaliere
- Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
| | - José M. Rodríguez-Ascariz
- Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
| | - Elvira Orduna-Hospital
- Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
| | - Luciano Boquete
- Biomedical Engineering Group, Department of Electronics, University of Alcalá, 28801 Alcalá de Henares, Spain
| | - Elena Garcia-Martin
- Miguel Servet Ophthalmology Innovation and Research Group (GIMSO), Department of Ophthalmology, Aragon Institute for Health Research (IIS Aragon), Miguel Servet University Hospital, University of Zaragoza, 50018 Zaragoza, Spain
| |
24
Zhang YY, Zhao H, Lin JY, Wu SN, Liu XW, Zhang HD, Shao Y, Yang WF. Artificial Intelligence to Detect Meibomian Gland Dysfunction From in-vivo Laser Confocal Microscopy. Front Med (Lausanne) 2021; 8:774344. [PMID: 34901091] [PMCID: PMC8655877] [DOI: 10.3389/fmed.2021.774344]
Abstract
Background: In recent years, deep learning has been widely used in a variety of ophthalmic diseases. As a common ophthalmic disease, meibomian gland dysfunction (MGD) has a unique phenotype on in-vivo laser confocal microscope imaging (VLCMI). The purpose of our study was to investigate a deep learning algorithm to differentiate and classify obstructive MGD (OMGD), atrophic MGD (AMGD) and normal groups. Methods: A multi-layer deep convolutional neural network (CNN) was trained using VLCMI images from OMGD, AMGD and healthy subjects, as verified by medical experts. Its automatic differential diagnosis of OMGD, AMGD and healthy subjects was tested by comparing its image-based identification of each group with the medical experts' diagnoses. The CNN was trained and validated with 4,985 and 1,663 VLCMI images, respectively; a further 1,663 images unseen during training were tested after established enhancement techniques were applied. Results: The study included 2,766 VLCMI images from healthy controls, 2,744 from OMGD and 2,801 from AMGD. Of the three models, the differential diagnostic accuracy of the DenseNet169 CNN was highest, at over 97%. The sensitivity and specificity of the DenseNet169 model were 88.8% and 95.4% for OMGD, and 89.4% and 98.4% for AMGD, respectively. Conclusion: This study described a deep learning algorithm to automatically detect and classify VLCMI images of MGD. By optimizing the algorithm, the classifier model achieved excellent accuracy. With further development, this model may become an effective tool for the differential diagnosis of MGD.
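The per-class sensitivity and specificity figures reported in abstracts like this one are derived from a multi-class confusion matrix by treating each class one-versus-rest. A minimal illustrative sketch (the counts below are invented for the example, not the study's data):

```python
# Per-class sensitivity and specificity for a 3-class classifier
# (e.g. OMGD / AMGD / normal), computed one-vs-rest from a
# confusion matrix. The counts below are illustrative only.

def per_class_metrics(cm):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    metrics = {}
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true class k, predicted other
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, true other
        tn = total - tp - fn - fp
        metrics[k] = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }
    return metrics

cm = [[88, 7, 5],    # true class 0
      [6, 90, 4],    # true class 1
      [3, 2, 95]]    # true class 2
print(round(per_class_metrics(cm)[0]["sensitivity"], 3))  # 88 / 100 = 0.88
```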
Affiliation(s)
- Ye-Ye Zhang
- Department of Electronic Engineering, School of Science, Hainan University, Haikou, China; Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China
- Hui Zhao
- Department of Ophthalmology, Shanghai First People's Hospital, Shanghai Jiao Tong University, National Clinical Research Center for Eye Diseases, Shanghai, China
- Jin-Yan Lin
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China
- Shi-Nan Wu
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Xi-Wang Liu
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China; Department of Mathematics, College of Science, Shantou University, Shantou, China
- Hong-Dan Zhang
- Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China; Department of Mathematics, College of Science, Shantou University, Shantou, China
- Yi Shao
- Jiangxi Centre of National Ophthalmology Clinical Research Center, Department of Ophthalmology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Wei-Feng Yang
- Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, China; Research Center for Advanced Optics and Photoelectronics, Department of Physics, College of Science, Shantou University, Shantou, China; Department of Mathematics, College of Science, Shantou University, Shantou, China
25
Xu W, Jin L, Zhu PZ, He K, Yang WH, Wu MN. Implementation and Application of an Intelligent Pterygium Diagnosis System Based on Deep Learning. Front Psychol 2021; 12:759229. [PMID: 34744935] [PMCID: PMC8569253] [DOI: 10.3389/fpsyg.2021.759229]
Abstract
Objective: This study aims to implement and investigate the application of an intelligent diagnostic system based on deep learning for the diagnosis of pterygium using anterior segment photographs. Methods: A total of 1,220 anterior segment photographs of normal eyes and pterygium patients were collected for training (750 images) and testing (470 images) to develop an intelligent pterygium diagnostic model. The images were classified into three categories by the experts and by the intelligent pterygium diagnosis system: (i) the normal group, (ii) the pterygium observation group, and (iii) the pterygium operation group. The intelligent diagnostic results were compared with those of the expert diagnosis. Indicators including accuracy, sensitivity, specificity, kappa value, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), and F1-score were evaluated. Results: The accuracy of the intelligent diagnosis system on the 470 test photographs was 94.68%, and the diagnostic consistency was high: the kappa values of the three groups were all above 85%. Additionally, the AUC values approached 100% in group 1 and 95% in the other two groups. The best results generated by the proposed system for sensitivity, specificity, and F1-score were 100%, 99.64%, and 99.74% in group 1; 90.06%, 97.32%, and 92.49% in group 2; and 92.73%, 95.56%, and 89.47% in group 3, respectively. Conclusion: The intelligent pterygium diagnosis system based on deep learning can not only judge the presence of pterygium but also classify its severity. This study is expected to provide a new screening tool for pterygium and to benefit patients in areas lacking medical resources.
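The kappa value used above to quantify agreement between the system and the experts is Cohen's kappa: observed agreement corrected for chance agreement. A minimal sketch with made-up labels (not the study's data):

```python
# Cohen's kappa between expert labels and model predictions, as used to
# measure diagnostic consistency. The label lists below are illustrative.

from collections import Counter

def cohens_kappa(y_true, y_pred):
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Chance agreement: probability both raters pick the same class at random.
    expected = sum(true_counts[c] * pred_counts.get(c, 0)
                   for c in true_counts) / (n * n)
    return (observed - expected) / (1 - expected)

y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]  # expert categories
y_pred = [0, 0, 1, 1, 1, 1, 2, 2, 2, 0]  # model categories
print(round(cohens_kappa(y_true, y_pred), 3))  # (0.8 - 0.33) / 0.67 = 0.701
```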
Affiliation(s)
- Wei Xu
- Department of Optometry, Jinling Institute of Technology, Nanjing, China; Nanjing Key Laboratory of Optometric Materials and Application Technology, Nanjing, China
- Ling Jin
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Peng-Zhi Zhu
- Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Kai He
- School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou, China
- Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou, China
26
Conroy D, Ramakrishnan R, Raman R, Rajalakshmi R, Rani PK, Ramasamy K, Mohan V, Das T, Sadanandan R, Netuveli G, Sivaprasad S. The ORNATE India project: Building research capacity and capability to tackle the burden of diabetic retinopathy-related blindness in India. Indian J Ophthalmol 2021; 69:3058-3063. [PMID: 34708742] [PMCID: PMC8725136] [DOI: 10.4103/ijo.ijo_1505_21]
Abstract
The ORNATE India project is an interdisciplinary, multifaceted United Kingdom (UK)–India collaborative study aimed at building research capacity and capability in India and the UK to tackle the burden of diabetes-related visual impairment. Over 51 months (October 2017–December 2021), the project built collaboration between six institutions in the UK and seven in India, including the Government of Kerala. Diabetic retinopathy (DR) screening models were evaluated in the public system in Kerala. An epidemiological study of diabetes and its complications was conducted through 20 centers across India covering 10 states and one union territory; the statistical analysis is not yet complete. In the UK, risk models for diabetes and its complications and artificial intelligence-aided tools are being developed. These were complemented by joint studies on various aspects of diabetes between collaborators in the UK and India. This interdisciplinary team enabled increased capability across several workstreams, resulting in more publications, cost-effective risk models, algorithms for risk-based screening, and a policy for state-wide implementation of sustainable DR screening and treatment programs in primary care in Kerala. The increase in research capacity spanned multiple disciplines, from field workers, administrators, project managers, project leads, screeners, graders, optometrists, nurses, and general practitioners to research associates in various disciplines. Cross-fertilization of these disciplines enabled the development of several collaborations external to this project. This collaborative project has made a significant impact on research capacity development in both India and the UK.
Affiliation(s)
- Dolores Conroy
- Vision Sciences Department, UCL Institute of Ophthalmology, 11-43 Bath St, London, UK
- Radha Ramakrishnan
- Vision Sciences Department, UCL Institute of Ophthalmology, 11-43 Bath St, London, UK
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Medical Research Foundation, Sankara Nethralaya, Chennai, India
- Padmaja Kumari Rani
- Smt. Kanuri Santhamma Centre for Vitreoretinal Diseases, L V Prasad Eye Institute, Hyderabad, India
- Kim Ramasamy
- Department of Vitreoretinal Services, Aravind Eye Hospital, Madurai, India
- Viswanathan Mohan
- Department of Ophthalmology, Madras Diabetes Research Foundation, Chennai, India
- Taraprasad Das
- Smt. Kanuri Santhamma Centre for Vitreoretinal Diseases, L V Prasad Eye Institute, Hyderabad, India
- Rajeev Sadanandan
- Chief Executive Officer, Health Systems Transformation Platform, SID Campus, Qutab Institutional Area, New Delhi, India
- Gopal Netuveli
- Institute of Connected Communities, University of East London, Stratford Campus, London, UK
- Sobha Sivaprasad
- Medical Retina Department, NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London, UK, and University College London, UK
27
Rajalakshmi R, Prathiba V, Rani PK, Mohan V. Various models for diabetic retinopathy screening that can be applied to India. Indian J Ophthalmol 2021; 69:2951-2958. [PMID: 34708729] [PMCID: PMC8725090] [DOI: 10.4103/ijo.ijo_1145_21]
Abstract
The increased burden of diabetes in India has resulted in an increase in the complications of diabetes, including sight-threatening diabetic retinopathy (DR). Visual impairment and blindness due to DR can be prevented by early detection and management of sight-threatening DR. Life-long evaluation through repeated retinal screening of people with diabetes is an essential strategy, as DR has an asymptomatic presentation. Fundus examination by trained ophthalmologists and fundus photography are established modes of screening. Various modes of opportunistic screening have been followed in India; hospital-based screening (diabetes care/eye care) and community-based screening are the common modes. Tele-ophthalmology programs based on retinal imaging, remote interpretation, and grading of DR by trained graders/ophthalmologists have facilitated greater coverage of DR screening and enabled timely referral of those with sight-threatening DR. DR screening programs use nonmydriatic or mydriatic fundus cameras for retinal photography. Hand-held/smartphone-based fundus cameras that are portable, less expensive, and easy to use in remote places are gaining popularity. Good retinal image quality and accurate diagnosis play an important role in reducing unnecessary referrals. Recent advances such as nonmydriatic ultrawide-field fundus photography can also be used for DR screening, though they are likely to be more expensive. The advent of artificial intelligence and deep learning has raised the possibility of automated detection of DR. Efforts to increase awareness of DR are essential to ensure compliance with regular follow-up, and cost-effective, sustainable models will ensure systematic nationwide DR screening in the country.
Affiliation(s)
- Ramachandran Rajalakshmi
- Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
- Vijayaraghavan Prathiba
- Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
- Padmaja Kumari Rani
- Vitreo-Retina Department, Smt Kanuri Santhamma Centre for Vitreoretinal Diseases, LV Prasad Eye Institute, Hyderabad, Telangana, India
- Viswanathan Mohan
- Department of Diabetology, Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
28
Correlation between optical coherence tomography angiography and multifocal electroretinogram findings in patients with diabetes mellitus. Photodiagnosis Photodyn Ther 2021; 36:102558. [PMID: 34597834] [DOI: 10.1016/j.pdpdt.2021.102558]
Abstract
BACKGROUND Diabetic retinopathy is characterized by microvascular, neural and glial cell damage. Optical coherence tomography angiography (OCTA) can detect subclinical microvasculopathy, while multifocal electroretinography (mfERG) can detect subclinical local retinal dysfunction before the onset of clinically observable retinopathy. Here, we investigated the relationship between retinal dysfunction on multifocal electroretinography and vascular changes on optical coherence tomography angiography. METHODS The study included 63 eyes of 63 diabetic patients without retinopathy (DM+DR-) and 68 eyes of 68 patients with non-proliferative diabetic retinopathy (NPDR). In addition, 64 eyes of 64 age- and sex-matched subjects were included as the control group (CG). All subjects were evaluated using OCTA and mfERG. RESULTS Vascular density in the superficial and deep capillary plexus was significantly decreased in the DM+DR- group and the NPDR group compared with the CG (except for the superficial foveal area, NPDR group vs. CG) (p < 0.05). Vascular density in the superficial and deep parafoveal region was significantly decreased in the NPDR group compared with the DM+DR- group (p < 0.05). In circles of 2, 5 and 10°, the amplitudes of the N1 and P1 waves were significantly decreased in both the DM+DR- group and the NPDR group compared with the CG (p < 0.05). When the NPDR group was compared with the DM+DR- group, there was a statistically significant decrease in the amplitudes of the N1 and P1 waves in the circles of 2 and 5° (p < 0.05). According to the correlation analysis, the amplitudes and implicit times of the N1 and P1 waves showed weak-to-moderate correlation with vascular density (p < 0.05). CONCLUSIONS The decreased mfERG wave peaks provide evidence of the neurodegenerative effect of DM-associated hyperglycaemia. The decreased vascular density caused by hyperglycaemia was topographically associated with retinal dysfunction and neurodegeneration.
29
Al-Aswad LA, Elgin CY, Patel V, Popplewell D, Gopal K, Gong D, Thomas Z, Joiner D, Chu CK, Walters S, Ramachandran M, Kapoor R, Rodriguez M, Alcantara-Castillo J, Maestre GE, Lee JH, Moazami G. Real-Time Mobile Teleophthalmology for the Detection of Eye Disease in Minorities and Low Socioeconomics At-Risk Populations. Asia Pac J Ophthalmol (Phila) 2021; 10:461-472. [PMID: 34582428] [PMCID: PMC8794049] [DOI: 10.1097/apo.0000000000000416]
Abstract
PURPOSE To examine the benefits and feasibility of a mobile, real-time, community-based teleophthalmology program for detecting eye diseases in the New York metro area. DESIGN Single-site, nonrandomized, cross-sectional teleophthalmologic study. METHODS Participants underwent a comprehensive evaluation in a Wi-Fi-equipped teleophthalmology mobile unit. The evaluation consisted of a basic anamnesis with a questionnaire form, brief systemic evaluations, and an ophthalmologic evaluation that included visual field, intraocular pressure, pachymetry, anterior segment optical coherence tomography, posterior segment optical coherence tomography, and nonmydriatic fundus photography. The results were evaluated in real time, and follow-up calls were scheduled to complete a secondary questionnaire form. Risk factors were calculated for different types of ophthalmological referrals. RESULTS A total of 957 participants were screened. Of the 458 (48%) participants who were referred, 305 (32%) had glaucoma, 136 (14%) had narrow angle, 124 (13%) had cataract, 29 (3%) had diabetic retinopathy, 9 (1%) had macular degeneration, and 97 (10%) had other eye disease findings. Significant risk factors for ophthalmological referral were older age, history of high blood pressure, diabetes mellitus, hemoglobin A1c ≥6.5, and stage 2 hypertension. As for the ocular parameters, all but central corneal thickness were found to be significant, including intraocular pressure >21 mm Hg, vertical cup-to-disc ratio ≥0.5, visual field abnormalities, and retinal nerve fiber layer thinning. CONCLUSIONS Mobile, real-time teleophthalmology is both workable and effective in increasing access to care and identifying the most common causes of blindness and their risk factors.
Affiliation(s)
- Lama A. Al-Aswad
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Cansu Yuksel Elgin
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Vipul Patel
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Maribel Rodriguez
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
30
Research on an Intelligent Lightweight-Assisted Pterygium Diagnosis Model Based on Anterior Segment Images. Dis Markers 2021; 2021:7651462. [PMID: 34367378] [PMCID: PMC8342163] [DOI: 10.1155/2021/7651462]
Abstract
Aims The lack of primary ophthalmologists in China leaves basic-level hospitals unable to diagnose pterygium patients. To address this problem, an intelligent-assisted lightweight pterygium diagnosis model based on anterior segment images is proposed in this study. Methods Pterygium is a common and frequently occurring disease in ophthalmology, and fibrous tissue hyperplasia is both a diagnostic and a surgical biomarker; the model diagnosed pterygium based on these biomarkers. First, a total of 436 anterior segment images were collected; then, two intelligent-assisted lightweight pterygium diagnosis models (MobileNet1 and MobileNet2), based on raw data and augmented data, respectively, were trained via transfer learning. The results of the lightweight models were compared with the clinical results. The classic models (AlexNet, VGG16 and ResNet18) were also trained and tested, and their results were compared with those of the lightweight models. A total of 188 anterior segment images were used for testing. Sensitivity, specificity, F1-score, accuracy, kappa, area under the receiver operating characteristic curve (AUC), 95% CI, model size, and parameter count were the evaluation indicators in this study. Results On the 188 test images, the overall evaluation indices of the MobileNet2 model were the best. For normal anterior segment images, the sensitivity, specificity, F1-score, and AUC of the MobileNet2 model were 96.72%, 98.43%, 96.72%, and 0.976, respectively; for observation-period pterygium images, they were 83.7%, 90.48%, 82.54%, and 0.872; and for surgery-period pterygium images, they were 84.62%, 93.50%, 85.94%, and 0.891. The kappa value of the MobileNet2 model was 77.64%, the accuracy was 85.11%, the model size was 13.5 M, and the parameter count was 4.2 M. Conclusion This study used deep learning methods to propose a three-category intelligent lightweight-assisted pterygium diagnosis model. The developed model can be used to screen patients for pterygium, provide reasonable suggestions, and enable timely referrals. It can help primary doctors improve pterygium diagnosis, confer social benefits, and lay the foundation for future models to be embedded in mobile devices.
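The per-class AUC values reported above have a simple probabilistic reading: for a one-versus-rest comparison, the AUC is the probability that a randomly chosen positive image receives a higher model score than a randomly chosen negative one. A minimal sketch with invented scores (not the study's outputs):

```python
# AUC computed directly from its probabilistic definition: the fraction
# of (positive, negative) pairs where the positive example is scored
# higher, with ties counting one half. Scores below are illustrative.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.75, 0.6]  # model scores for pterygium images
neg = [0.7, 0.4, 0.3, 0.2]   # model scores for normal images
print(auc(pos, neg))  # 15 of 16 pairs won -> 0.9375
```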
31
Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Front Med (Lausanne) 2021; 8:695217. [PMID: 34368192] [PMCID: PMC8333706] [DOI: 10.3389/fmed.2021.695217]
Abstract
Purpose: The method of diagnosing diabetic retinopathy (DR) through artificial intelligence (AI)-based systems has been commercially available since 2018. This introduces new ethical challenges with regard to obtaining informed consent from patients. The purpose of this work is to develop a checklist of items to be disclosed when diagnosing DR with AI systems in a primary care setting. Methods: Two systematic literature searches were conducted in the PubMed and Web of Science databases: a narrow search focusing on DR and a broad search on general issues of AI-based diagnosis. An ethics content analysis was conducted inductively to extract two features of the included publications: (1) novel information content for AI-aided diagnosis and (2) the ethical justification for its disclosure. Results: The narrow search yielded n = 537 records, of which n = 4 met the inclusion criteria. The information process was scarcely addressed for the primary care setting. The broad search yielded n = 60 records, of which n = 11 were included. In total, eight novel elements were identified that should be included in the information process for ethical reasons, all of which stem from the technical specifics of medical AI. Conclusions: The implications for the general practitioner are two-fold. First, doctors need to be better informed about the ethical implications of novel technologies and must understand them to properly inform patients. Second, patients' overconfidence or fears can be countered by communicating the risks, limitations, and potential benefits of diagnostic AI systems. If patients accept and are aware of the limitations of AI-aided diagnosis, they increase their chances of being diagnosed and treated in time.
32
Deep learning-based automated detection for diabetic retinopathy and diabetic macular oedema in retinal fundus photographs. Eye (Lond) 2021; 36:1433-1441. [PMID: 34211137] [DOI: 10.1038/s41433-021-01552-8]
Abstract
OBJECTIVES To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS A total of 8,739 retinal fundus images were collected from a retrospective cohort of 3,285 patients. For detecting DR and DMO, an ensembling approach based on multiple improved Inception-v4 networks was developed. We measured the algorithm's performance and compared it with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on the model's performance, and analyzed the time budget of training/inference versus model performance. RESULTS On our primary test dataset, the model achieved an AUC of 0.992 (95% CI, 0.989-0.995), corresponding to a sensitivity of 0.925 (95% CI, 0.916-0.936) and a specificity of 0.961 (95% CI, 0.950-0.972) for referable DR, while the sensitivity and specificity of the ophthalmologists ranged from 0.845 to 0.936 and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a sensitivity of 0.930 (95% CI, 0.919-0.941) and a specificity of 0.971 (95% CI, 0.965-0.978), whereas the ophthalmologists obtained sensitivities ranging between 0.852 and 0.946 and specificities ranging between 0.926 and 0.985. CONCLUSION This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, with good robustness and generalization, and could potentially help support and expand DR/DMO screening programs.
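The abstract describes an ensemble of improved Inception-v4 networks. A common way to combine such members is soft voting, i.e. averaging their per-class probabilities; the exact combination rule is an assumption here, since the abstract does not specify it:

```python
# Soft-voting ensemble: average the class-probability vectors produced by
# several member networks, then take the argmax. The probability vectors
# below stand in for real model outputs.

def ensemble_predict(member_probs):
    """member_probs: list of per-model probability vectors for one image."""
    n_models = len(member_probs)
    n_classes = len(member_probs[0])
    avg = [sum(p[c] for p in member_probs) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg

# Three members scoring one fundus image for (no-referable-DR, referable-DR):
probs = [[0.40, 0.60],
         [0.55, 0.45],
         [0.20, 0.80]]
label, avg = ensemble_predict(probs)
print(label, [round(a, 3) for a in avg])  # 1 [0.383, 0.617]
```

Averaging probabilities rather than hard votes lets a confident member outvote two lukewarm disagreements, which is one reason ensembles tend to be better calibrated than their members.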
33
Zheng B, Jiang Q, Lu B, He K, Wu MN, Hao XL, Zhou HX, Zhu SJ, Yang WH. Five-Category Intelligent Auxiliary Diagnosis Model of Common Fundus Diseases Based on Fundus Images. Transl Vis Sci Technol 2021; 10:20. [PMID: 34132760] [PMCID: PMC8212443] [DOI: 10.1167/tvst.10.7.20]
Abstract
Purpose In China, the gap between the number of ophthalmologists and the number of patients is large. Retinal vein occlusion (RVO), high myopia, glaucoma, and diabetic retinopathy (DR) are common fundus diseases. Therefore, in this study, a five-category intelligent auxiliary diagnosis model for common fundus diseases is proposed, and the model's area of focus is marked. Methods A total of 2,000 fundus images were collected, and 3 different five-category intelligent auxiliary diagnosis models for common fundus diseases were trained via different transfer learning and image preprocessing techniques. A total of 1,134 fundus images were used for testing, and the clinical diagnostic results were compared with the models' diagnostic results. The main evaluation indicators included sensitivity, specificity, F1-score, area under the receiver operating characteristic curve (AUC), 95% confidence interval (CI), kappa, and accuracy. Interpretation methods were used to obtain each model's area of focus in the fundus image. Results The accuracy rates of the 3 intelligent auxiliary diagnosis models on the 1,134 fundus images were all above 90%, the kappa values were all above 88%, the diagnostic consistency was good, and the AUC approached 0.90. For the 4 common fundus diseases, the best sensitivity, specificity, and F1-scores of the 3 models were 88.27%, 97.12%, and 84.02%; 89.94%, 99.52%, and 93.90%; 95.24%, 96.43%, and 85.11%; and 88.24%, 98.21%, and 89.55%, respectively. Conclusions This study designed a five-category intelligent auxiliary diagnosis model for common fundus diseases that provides both the diagnostic category of a fundus image and the model's area of focus. Translational Relevance This study will help primary doctors provide effective services to all ophthalmologic patients.
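The "area of focus" mentioned above is typically produced by a class-activation method such as Grad-CAM; the specific method is an assumption here, since the abstract only says "interpretation methods". The core computation weights each convolutional feature map by its spatially pooled gradient and keeps the positive evidence:

```python
# Grad-CAM-style focus map: weight each convolutional feature map by the
# spatially averaged gradient of the class score, sum the weighted maps,
# and clamp negatives to zero (ReLU). Tiny hand-made 2x2 maps stand in
# for real network activations and gradients.

def focus_map(activations, gradients):
    """activations, gradients: lists of equally sized 2D feature maps."""
    h, w = len(activations[0]), len(activations[0][0])
    # Pooled gradient per channel = importance weight of that feature map.
    weights = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for wk, a in zip(weights, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * a[i][j]
    return [[max(v, 0.0) for v in row] for row in cam]  # ReLU

acts = [[[1.0, 0.0], [0.0, 2.0]],
        [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]],      # channel 0: pooled weight 0.4
         [[-0.2, -0.2], [-0.2, -0.2]]]  # channel 1: pooled weight -0.2
print(focus_map(acts, grads))  # [[0.4, 0.0], [0.0, 0.8]]
```

In practice the resulting map is upsampled to the fundus image's resolution and overlaid as a heatmap to mark the region driving the prediction.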
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Qin Jiang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Bing Lu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Kai He
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China; College of Computer and Information, Hehai University, Nanjing, Jiangsu, China
- Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou, Zhejiang, China; Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, Zhejiang, China
- Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
34
Wintergerst MWM, Bejan V, Hartmann V, Schnorrenberg M, Bleckwenn M, Weckbecker K, Finger RP. Telemedical Diabetic Retinopathy Screening in a Primary Care Setting: Quality of Retinal Photographs and Accuracy of Automated Image Analysis. Ophthalmic Epidemiol 2021; 29:286-295. [PMID: 34151725] [DOI: 10.1080/09286586.2021.1939886]
Abstract
Background: Screening for diabetic eye disease (DED) and general diabetes care are often separate, which leads to delays and low adherence to DED screening recommendations. We therefore assessed the feasibility, achieved image quality, and possible barriers of telemedical DED screening in a point-of-care general practice setting, as well as the accuracy of an automated algorithm for detection of DED. Methods: Patients with diabetes were recruited at general practices. Retinal images were acquired by medical assistants using a non-mydriatic camera (CenterVue, Italy). Images were quality assessed and double graded by two graders. All images were also graded automatically using a commercially available artificial intelligence (AI) algorithm (EyeArt version 2.1.0, Eyenuk Inc.). Results: A total of 75 patients (147 eyes; mean age 69 years, 96% type 2 diabetes) were included. Most patients (51; 68%) preferred DED screening at the general practice, but only twenty-four (32%) were willing to pay for this service. Images of 63 patients (84%) were determined to be evaluable, and DED was diagnosed in 6 patients (8.0%). The algorithm's positive/negative predictive values (95% confidence interval) were 0.80 (0.28-0.99)/1.00 (0.92-1.00) for detection of any DED and 0.75 (0.19-0.99)/0.98 (0.88-1.00) for referral-warranted DED. Overall, the number of referrals was 18 (24%) for manual telemedical assessment and 31 (41%) for the AI algorithm, a relative increase of 72% when using AI. Conclusions: Our study shows that the overall image quality achieved in telemedical GP-based DED screening was sufficient and that screening would be accepted by medical assistants and patients in most cases. However, good image quality and integration into existing workflows remain challenging. Based on these findings, a larger-scale implementation study is warranted.
Affiliation(s)
- Veronica Bejan
- Department of Ophthalmology, University Hospital Bonn, Bonn, Germany
- Vera Hartmann
- Department of Ophthalmology, University Hospital Bonn, Bonn, Germany
- Marina Schnorrenberg
- Institute of General Practice and Interprofessional Care, Faculty of Health/Department of Medicine, University Witten/Herdecke, Witten, Germany
- Markus Bleckwenn
- Department of General Practice, Medical Faculty, University of Leipzig, Leipzig, Germany
- Klaus Weckbecker
- Institute of General Practice and Interprofessional Care, Faculty of Health/Department of Medicine, University Witten/Herdecke, Witten, Germany
- Robert P Finger
- Department of Ophthalmology, University Hospital Bonn, Bonn, Germany
35
Diabetic retinopathy and diabetic macular oedema pathways and management: UK Consensus Working Group. Eye (Lond) 2021; 34:1-51. [PMID: 32504038 DOI: 10.1038/s41433-020-0961-6] [Citation(s) in RCA: 82] [Impact Index Per Article: 27.3] [Indexed: 12/12/2022] Open
Abstract
The management of diabetic retinopathy (DR) has evolved considerably over the past decade, with the availability of new diagnostic and therapeutic technologies. As such, the existing Royal College of Ophthalmologists DR Guidelines (2013) are outdated, and to the best of our knowledge are not under revision at present. Furthermore, there are no other UK guidelines covering all available treatments, and there seems to be significant variation around the UK in the management of diabetic macular oedema (DMO). This manuscript reviews the pathogenesis of DR and DMO, including the role of vascular endothelial growth factor (VEGF) and non-VEGF cytokines; the clinical grading/classification of DMO in current terminology (centre-involving [CI-DMO] or non-centre-involving [nCI-DMO]); and systemic risks and their management. The excellent UK DR Screening (DRS) service has continued to evolve and remains world-leading. However, challenges remain, as there are significant variations in the equipment used and in reproducible standards of DMO screening nationally. The interface between the DRS and the hospital eye service can only be strengthened with further improvements. The role of modern technology, including optical coherence tomography (OCT) and wide-field imaging, and of working practices, including virtual clinics, with their potential to increase clinic capacity and improve patient experiences and outcomes, is discussed. Similarly, potential future roles of home monitoring of diabetic eyes are explored. The roles of pharmacological (intravitreal injections [IVT] of anti-VEGF agents and steroids) and laser therapies are summarised. Generally, IVT anti-VEGF agents are offered as first-line pharmacologic therapy. As the requirements of particular patient groups may vary, including pregnant women, children, and persons with learning difficulties, it is important that DR management is personalised in such groups. First-choice therapy needs to be individualised in these cases and may be intravitreal steroids rather than the standard choice of anti-VEGF agents. Some of these, but not all, are discussed in this document.
36
Dutt S, Sivaraman A, Savoy F, Rajalakshmi R. Insights into the growing popularity of artificial intelligence in ophthalmology. Indian J Ophthalmol 2021; 68:1339-1346. [PMID: 32587159 PMCID: PMC7574057 DOI: 10.4103/ijo.ijo_1754_19] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Indexed: 01/04/2023] Open
Abstract
Artificial intelligence (AI) in healthcare is the use of computer algorithms to analyze complex medical data, detect associations, and provide diagnostic support outputs. AI and deep learning (DL) find obvious applications in fields like ophthalmology, wherein huge amounts of image-based data need to be analyzed and the outcomes related to image recognition are reasonably well defined. AI and DL have found important roles in ophthalmology in the early screening and detection of conditions such as diabetic retinopathy (DR), age-related macular degeneration (ARMD), retinopathy of prematurity (ROP), glaucoma, and other ocular disorders; they have made successful inroads in early screening and diagnosis and appear promising, with the advantages of high screening accuracy, consistency, and scalability. AI algorithms nevertheless need equally skilled manpower: trained optometrists/ophthalmologists (annotators) must provide accurate ground truth for the training images. The basis of the diagnoses made by AI algorithms is mechanical, and some amount of human intervention is necessary for further interpretation. This review was conducted after tracing the history of AI in ophthalmology across multiple research databases and aims to summarise the journey of AI in ophthalmology so far, with a close look at most of the crucial studies conducted. It further aims to highlight the potential impact of AI in ophthalmology, its pitfalls, and how to use it optimally for the maximum benefit of ophthalmologists, healthcare systems, and patients alike.
Affiliation(s)
- Sreetama Dutt
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Anand Sivaraman
- Department of Research & Development, Remidio Innovative Solutions, Bengaluru, Karnataka, India
- Florian Savoy
- Department of Artificial Intelligence, Medios Technologies, Singapore
- Ramachandran Rajalakshmi
- Department of Ophthalmology, Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
37
Reguant R, Brunak S, Saha S. Understanding inherent image features in CNN-based assessment of diabetic retinopathy. Sci Rep 2021; 11:9704. [PMID: 33958686 PMCID: PMC8102512 DOI: 10.1038/s41598-021-89225-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Received: 01/27/2021] [Accepted: 04/20/2021] [Indexed: 11/20/2022] Open
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness and affects millions of people throughout the world. Early detection and timely checkups are key to reducing the risk of blindness, and automated grading of DR is a cost-effective way to ensure both. Deep learning, or more specifically convolutional neural network (CNN)-based, methods produce state-of-the-art performance in DR detection. Whilst many CNN-based methods have been proposed, no comparisons have been made between the extracted image features and their clinical relevance. Here we first adopt a CNN visualization strategy to discover the inherent image features involved in the CNN's decision-making process. Then, we critically analyze those features with respect to commonly known pathologies, namely microaneurysms, hemorrhages, and exudates, and other ocular components. We also critically analyze different CNNs by considering what image features they pick up during learning to predict and justify their clinical relevance. The experiments are executed on publicly available fundus datasets (EyePACS and DIARETDB1), achieving an accuracy of 89-95% for disease-level grading of DR, with AUC, sensitivity, and specificity of 95-98%, 74-86%, and 93-97%, respectively. Whilst different CNNs produce consistent classification results, the disagreement between models in the image features they pick up could be as high as 70%.
Affiliation(s)
- Roc Reguant
- Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200 Copenhagen N, Denmark
- Australian E-Health Research Centre, CSIRO, Perth, Australia
- Søren Brunak
- Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200 Copenhagen N, Denmark
- Sajib Saha
- Australian E-Health Research Centre, CSIRO, Perth, Australia
38
Gao Q, Amason J, Cousins S, Pajic M, Hadziahmetovic M. Automated Identification of Referable Retinal Pathology in Teleophthalmology Setting. Transl Vis Sci Technol 2021; 10:30. [PMID: 34036304 PMCID: PMC8161696 DOI: 10.1167/tvst.10.6.30] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 09/25/2020] [Accepted: 01/31/2021] [Indexed: 02/06/2023] Open
Abstract
Purpose This study aims to meet a growing need for a fully automated, learning-based interpretation tool for retinal images obtained remotely (e.g., teleophthalmology) through different imaging modalities that may include imperfect (uninterpretable) images. Methods A retrospective study of 1148 optical coherence tomography (OCT) and color fundus photography (CFP) retinal images obtained using Topcon's Maestro care unit on 647 patients with diabetes. To identify retinal pathology, a convolutional neural network (CNN) with dual-modal inputs (i.e., CFP and OCT images) was developed. We developed a novel alternate gradient descent algorithm to train the CNN, which allows for the use of uninterpretable CFP/OCT images (i.e., ungradable images that do not contain sufficient image biomarkers for the reviewer to conclude the absence or presence of retinal pathology). A 9:1 split of the dataset was used for training and validating the CNN. Paired CFP/OCT inputs (obtained from a single eye of a patient) were grouped as retinal pathology negative (RPN; 924 images) in the absence of retinal pathology in both imaging modalities, or if one imaging modality was uninterpretable and the other showed no retinal pathology; the corresponding CFP/OCT inputs were deemed retinal pathology positive (RPP; 224 images) if either imaging modality exhibited referable retinal pathology. Results Our approach achieved 88.60% (95% confidence interval [CI] = 82.76% to 94.43%) accuracy in identifying pathology, with a false negative rate (FNR) of 12.28% (95% CI = 6.26% to 18.31%), recall (sensitivity) of 87.72% (95% CI = 81.69% to 93.74%), specificity of 89.47% (95% CI = 83.84% to 95.11%), and an area under the receiver operating characteristic curve (AUC-ROC) of 92.74% (95% CI = 87.71% to 97.76%). Conclusions Our model can be successfully deployed in clinical practice to facilitate automated remote retinal pathology identification. Translational Relevance A fully automated tool for early diagnosis of retinal pathology might allow for earlier treatment and improved visual outcomes.
Affiliation(s)
- Qitong Gao
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Joshua Amason
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Scott Cousins
- Department of Ophthalmology, Duke University, Durham, NC, USA
- Miroslav Pajic
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Department of Computer Science, Duke University, Durham, NC, USA
39
Wang Y, Yu M, Hu B, Jin X, Li Y, Zhang X, Zhang Y, Gong D, Wu C, Zhang B, Yang J, Li B, Yuan M, Mo B, Wei Q, Zhao J, Ding D, Yang J, Li X, Yu W, Chen Y. Deep learning-based detection and stage grading for optimising diagnosis of diabetic retinopathy. Diabetes Metab Res Rev 2021; 37:e3445. [PMID: 33713564 DOI: 10.1002/dmrr.3445] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 07/14/2020] [Revised: 02/19/2021] [Accepted: 02/23/2021] [Indexed: 11/07/2022]
Abstract
AIMS To establish an automated method for identifying referable diabetic retinopathy (DR), defined as moderate nonproliferative DR and above, using deep learning-based lesion detection and stage grading. MATERIALS AND METHODS A set of 12,252 eligible fundus images of diabetic patients was manually annotated by 45 licensed ophthalmologists and randomly split into training, validation, and internal test sets (ratio of 7:1:2). Another set of 565 eligible consecutive clinical fundus images was established as an external test set. For automated referable DR identification, four deep learning models were programmed based on whether two factors were included: DR-related lesions and DR stages. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) were reported for referable DR identification, while precision and recall were reported for lesion detection. RESULTS Adding lesion information to the five-stage grading model improved the AUC (0.943 vs. 0.938), sensitivity (90.6% vs. 90.5%), and specificity (80.7% vs. 78.5%) of the model for identifying referable DR in the internal test set. Adding stage information to the lesion-based model increased the AUC (0.943 vs. 0.936) and sensitivity (90.6% vs. 76.7%) for identifying referable DR in the internal test set. Similar trends were seen in the external test set. DR lesion types with high precision were preretinal haemorrhage, hard exudate, vitreous haemorrhage, neovascularisation, cotton wool spots, and fibrous proliferation. CONCLUSIONS The automated model described herein employed DR lesion and stage information to identify referable DR and displayed better diagnostic value than models built without this information.
Affiliation(s)
- Yuelin Wang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Miao Yu
- Department of Endocrinology, Key Laboratory of Endocrinology, National Health Commission, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Bojie Hu
- Department of Ophthalmology, Tianjin Medical University Eye Hospital, Tianjin, China
- Xuemin Jin
- Department of Ophthalmology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yibin Li
- Department of Ophthalmology, Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xiao Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Yongpeng Zhang
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Di Gong
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Chan Wu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bilei Zhang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Jingyuan Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Mingzhen Yuan
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Bin Mo
- Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Qijie Wei
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jianchun Zhao
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Dayong Ding
- Vistel AI Lab, Visionary Intelligence Ltd., Beijing, China
- Jingyun Yang
- Department of Neurological Sciences, Rush Alzheimer's Disease Center, Rush University Medical Center, Chicago, Illinois, USA
- Xirong Li
- Key Lab of Data Engineering and Knowledge Engineering, Renmin University of China, Beijing, China
- Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Lab of Ocular Fundus Disease, Chinese Academy of Medical Sciences, Beijing, China
40
Raman R, Ramasamy K, Rajalakshmi R, Sivaprasad S, Natarajan S. Diabetic retinopathy screening guidelines in India: All India Ophthalmological Society diabetic retinopathy task force and Vitreoretinal Society of India Consensus Statement. Indian J Ophthalmol 2021; 69:678-688. [PMID: 33269742 PMCID: PMC7942107 DOI: 10.4103/ijo.ijo_667_20] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Received: 03/25/2020] [Revised: 05/13/2020] [Accepted: 07/14/2020] [Indexed: 12/15/2022] Open
Abstract
Diabetic retinopathy (DR) is an emerging preventable cause of blindness in India. The All India Ophthalmological Society (AIOS) and the Vitreoretinal Society of India (VRSI) have initiated several measures to improve DR screening in India. This article is a consensus statement of the AIOS DR task force and the VRSI on practical guidelines for DR screening in India. Although there are regional variations in the prevalence of diabetes in India at present, all states in India should screen their populations for diabetes and its complications. The purpose of DR screening is to identify people with sight-threatening DR (STDR) so that they are treated promptly to prevent blindness. This statement provides strategies for the identification of people with diabetes for DR screening, recommends screening intervals for people with diabetes with and without DR, and describes screening models that are feasible in India. The discussion of screening logistics emphasizes the need for dynamic referral pathways with feedback mechanisms. The statement sets out the clinical standards required for DR screening and the treatment of STDR, and addresses the governance and quality assurance (QA) standards for DR screening in Indian settings. Other aspects include education and training, recommendations on information technology (IT) infrastructure, the potential use of artificial intelligence for grading, data capture, and the requirements for maintaining a DR registry. Finally, the recommendations cover public awareness and the need to work with diabetologists to control risk factors so as to have a long-term impact on the prevention of diabetes-related blindness in India.
Affiliation(s)
- Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Chennai, Tamil Nadu, India
- Kim Ramasamy
- Aravind Eye Hospital, Madurai, Tamil Nadu, India
- Ramachandran Rajalakshmi
- Dr. Mohan's Diabetes Specialities Centre and Madras Diabetes Research Foundation, Chennai, Tamil Nadu, India
- S Natarajan
- Aditya Jyot Eye Hospital Pvt. Ltd., Mumbai, Maharashtra, India
41
El-Kenawy ESM, Mirjalili S, Ibrahim A, Alrahmawy M, El-Said M, Zaki RM, Eid MM. Advanced Meta-Heuristics, Convolutional Neural Networks, and Feature Selectors for Efficient COVID-19 X-Ray Chest Image Classification. IEEE Access 2021; 9:36019-36037. [PMID: 34812381 PMCID: PMC8545230 DOI: 10.1109/access.2021.3061058] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Received: 02/02/2021] [Accepted: 02/16/2021] [Indexed: 05/09/2023]
Abstract
The chest X-ray is considered a significant clinical utility for basic examination and diagnosis. The human lung area can be affected by various infections, such as bacteria and viruses, leading to pneumonia. An efficient and reliable classification method facilitates the diagnosis of such infections. Deep transfer learning has been introduced for pneumonia detection from chest X-rays in different models; however, there is still a need for further improvements in the feature extraction and advanced classification stages. This paper proposes a two-stage method to classify different cases from chest X-ray images based on a proposed Advanced Squirrel Search Optimization Algorithm (ASSOA). The first stage comprises feature learning and extraction based on a Convolutional Neural Network (CNN) model, ResNet-50, with image augmentation and dropout. The ASSOA algorithm is then applied to the extracted features for feature selection. Finally, the connection weights of a Multi-layer Perceptron (MLP) neural network are optimized by the proposed ASSOA algorithm (using the selected features) to classify input cases. A Kaggle chest X-ray (pneumonia) dataset consisting of 5,863 X-rays is employed in the experiments. The proposed ASSOA algorithm is compared with the basic Squirrel Search (SS) optimization algorithm, the Grey Wolf Optimizer (GWO), and a Genetic Algorithm (GA) for feature selection to validate its efficiency. The proposed (ASSOA + MLP) method is also compared with other classifiers, based on (SS + MLP), (GWO + MLP), and (GA + MLP), on performance metrics. The proposed (ASSOA + MLP) algorithm achieved a classification mean accuracy of 99.26%, and a mean accuracy of 99.7% on a chest X-ray COVID-19 dataset obtained from GitHub. The results and statistical tests demonstrate the high effectiveness of the proposed method in identifying infected cases.
Affiliation(s)
- El-Sayed M. El-Kenawy
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology (DHIET), Mansoura 35111, Egypt
- Seyedali Mirjalili
- Centre for Artificial Intelligence Research and Optimization, Torrens University Australia, Fortitude Valley, QLD 4006, Australia
- Yonsei Frontier Lab, Yonsei University, Seoul 03722, South Korea
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Mohammed Alrahmawy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- M. El-Said
- Electrical Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Delta Higher Institute of Engineering and Technology (DHIET), Mansoura 35111, Egypt
- Rokaia M. Zaki
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology (DHIET), Mansoura 35111, Egypt
- Department of Electrical Engineering, Shoubra Faculty of Engineering, Benha University, Benha 11629, Egypt
- Marwa Metwally Eid
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology (DHIET), Mansoura 35111, Egypt
42
Tadavarthi Y, Vey B, Krupinski E, Prater A, Gichoya J, Safdar N, Trivedi H. The State of Radiology AI: Considerations for Purchase Decisions and Current Market Offerings. Radiol Artif Intell 2020; 2:e200004. [PMID: 33937846 DOI: 10.1148/ryai.2020200004] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Received: 01/13/2020] [Revised: 06/16/2020] [Accepted: 06/25/2020] [Indexed: 01/02/2023]
Abstract
Purpose To provide an overview of important factors to consider when purchasing radiology artificial intelligence (AI) software and of current software offerings by type, subspecialty, and modality. Materials and Methods Important factors for consideration when purchasing AI software, including key decision makers, data ownership and privacy, cost structures, performance indicators, and potential return on investment, are described. For the market overview, a list of radiology AI companies was aggregated from the Radiological Society of North America and the Society for Imaging Informatics in Medicine conferences (November 2016-June 2019), then narrowed to companies using deep learning for imaging analysis and diagnosis. Software created for image enhancement, reporting, or workflow management was excluded. Software was categorized by task (repetitive, quantitative, explorative, and diagnostic), modality, and subspecialty. Results A total of 119 software offerings from 55 companies were identified. Of these, 46 algorithms currently have Food and Drug Administration and/or Conformité Européenne approval (as of November 2019). The distribution of software targets was 34 of 70 (49%), 21 of 70 (30%), 14 of 70 (20%), and one of 70 (1%) for diagnostic, quantitative, repetitive, and explorative tasks, respectively. A plurality of companies are focused on nodule detection at chest CT and two-dimensional mammography. There is very little activity in certain subspecialties, including pediatrics and nuclear medicine. A comprehensive table is available on the website hitilab.org/pages/ai-companies. Conclusion The radiology AI marketplace is rapidly maturing, with an increase in product offerings. Radiologists and practice administrators should educate themselves on current product offerings and on important factors to consider before purchase and implementation. © RSNA, 2020. See also the invited commentary by Sala and Ursprung in this issue.
Affiliation(s)
- Yasasvi Tadavarthi
- Brianna Vey
- Elizabeth Krupinski
- Adam Prater
- Judy Gichoya
- Nabile Safdar
- Hari Trivedi
- Department of Radiology, Medical College of Georgia at Augusta University, 1120 15th St, Augusta, GA 30912 (Y.T.); and Department of Radiology, Emory University, Atlanta, Ga (B.V., E.K., A.P., J.G., N.S., H.T.)
43
Zéboulon P, Debellemanière G, Bouvet M, Gatinel D. Corneal Topography Raw Data Classification Using a Convolutional Neural Network. Am J Ophthalmol 2020; 219:33-39. [PMID: 32533948 DOI: 10.1016/j.ajo.2020.06.005] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Received: 04/24/2020] [Revised: 05/18/2020] [Accepted: 06/03/2020] [Indexed: 02/07/2023]
Abstract
PURPOSE We investigated the efficiency of a convolutional neural network applied to corneal topography raw data to classify examinations into 3 categories: normal, keratoconus (KC), and history of refractive surgery (RS). DESIGN Retrospective machine-learning experimental study. METHODS A total of 3,000 Orbscan examinations (1,000 of each class) from different patients of our institution were selected for model training and validation. One hundred examinations of each class were randomly assigned to the test set. For each examination, the raw numerical data from the "elevation against the anterior best fit sphere (BFS)," "elevation against the posterior BFS," "axial anterior curvature," and "pachymetry" maps were used. Each map was a square matrix of 2,500 values. The 4 maps were stacked and used as if they were 4 channels of a single image. A convolutional neural network was built and trained on the training set. Classification accuracy and class-wise sensitivity and specificity were calculated for the validation set. RESULTS Overall classification accuracy on the validation set (n = 300) was 99.3% (98.3%-100%). Sensitivity and specificity were, respectively, 100% and 100% for KC, 100% and 99% (94.9%-100%) for normal examinations, and 98% (97.4%-100%) and 100% for RS examinations. CONCLUSION Using combined corneal topography raw data with a convolutional neural network is an effective way to classify examinations and probably the most thorough way to automatically analyze corneal topography. It should be considered for other routine tasks performed on corneal topography, such as refractive surgery screening.
44
Rêgo S, Dutra-Medeiros M, Soares F, Monteiro-Soares M. Screening for Diabetic Retinopathy Using an Automated Diagnostic System Based on Deep Learning: Diagnostic Accuracy Assessment. Ophthalmologica 2020; 244:250-257. [PMID: 33120397 DOI: 10.1159/000512638] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 07/15/2020] [Accepted: 10/12/2020] [Indexed: 11/19/2022]
Abstract
PURPOSE To evaluate the diagnostic accuracy of a diagnostic software system for the automated screening of diabetic retinopathy (DR) on digital colour fundus photographs, a 2019 Convolutional Neural Network (CNN) model with Inception-V3. METHODS In this cross-sectional study, 295 fundus images were analysed by the CNN model and compared to a panel of ophthalmologists. Images were obtained from a dataset acquired within a screening programme. Diagnostic accuracy measures and their respective 95% CIs were calculated. RESULTS The sensitivity and specificity of the CNN model in diagnosing referable DR were 81% (95% CI 66-90%) and 97% (95% CI 95-99%), respectively. The positive predictive value was 86% (95% CI 72-94%) and the negative predictive value 96% (95% CI 93-98%). The positive likelihood ratio was 33 (95% CI 15-75) and the negative 0.20 (95% CI 0.11-0.35). The clinical impact is demonstrated by the change from a pre-test probability of referable DR of 16% (the assumed prevalence) to a post-test probability of 86% for a positive test result and 4% for a negative test result. CONCLUSION A negative CNN model result safely excludes DR, and its use may significantly reduce the burden on ophthalmologists at reading centres.
Collapse
Affiliation(s)
- Sílvia Rêgo
- R&D Department, Fraunhofer Portugal AICOS, Porto, Portugal; Faculty of Medicine of the University of Porto, Porto, Portugal
- Marco Dutra-Medeiros
- Department of Surgical Retina, Lisbon Central Hospital, Lisbon, Portugal; Portuguese Retina Institute, Lisbon, Portugal; Chronic Diseases Research Center, NOVA Medical School, NOVA University of Lisbon, Lisbon, Portugal; Protective Association of Diabetics of Portugal, Lisbon, Portugal
- Filipe Soares
- R&D Department, Fraunhofer Portugal AICOS, Porto, Portugal
- Matilde Monteiro-Soares
- MEDCIDS: Department of Community Medicine, Health Information and Decision, Faculty of Medicine of the University of Porto, Porto, Portugal; CINTESIS: Center for Health Technology and Services Research, Faculty of Medicine of the University of Porto, Porto, Portugal
45
Hao Z, Cui S, Zhu Y, Shao H, Huang X, Jiang X, Xu R, Chang B, Li H. Application of non-mydriatic fundus examination and artificial intelligence to promote the screening of diabetic retinopathy in the endocrine clinic: an observational study of T2DM patients in Tianjin, China. Ther Adv Chronic Dis 2020; 11:2040622320942415. [PMID: 32973990] [PMCID: PMC7491217] [DOI: 10.1177/2040622320942415] [Received: 01/01/2020] [Accepted: 06/19/2020] [Indexed: 01/19/2023]
Abstract
Background We aimed to determine the role of non-mydriatic fundus examination and artificial intelligence (AI) in screening for diabetic retinopathy (DR) in patients with diabetes in the Metabolic Disease Management Center (MMC) in Tianjin, China. Methods Adult patients with type 2 diabetes mellitus who were first treated by the MMC in Tianjin First Central Hospital and Tianjin 4th Center Hospital were divided into two groups according to whether they were enrolled before or after the MMC was equipped with a non-mydriatic ophthalmoscope and an AI system and could complete fundus examinations independently (the former constituted the control group, the latter the observation group). The observed indices were the incidence of DR, the fundus screening rate in the two groups, and the fundus screening rate in patients with different diabetes durations. Results A total of 5039 patients were enrolled in this study. The incidence rate of DR was 18.6%, 29.8%, and 49.6% in patients with diabetes duration of ⩽1 year, 1-5 years, and >5 years, respectively. The fundus screening rate in the observation group was significantly higher than in the control group (81.3% versus 28.4%, χ2 = 1430.918, p < 0.001). The DR screening rate of the observation group was also significantly higher than that of the control group in patients with diabetes duration of ⩽1 year (77.3% versus 20.6%; χ2 = 797.534, p < 0.001), 1-5 years (82.5% versus 31.0%; χ2 = 197.124, p < 0.001), and >5 years (86.9% versus 37.1%; χ2 = 475.609, p < 0.001). Conclusions Where medical resources are limited, the MMC can provide one-stop examination, treatment, and management of DR through non-mydriatic fundus examination with AI assistance, incorporating DR screening into the endocrine clinic and thereby facilitating early diagnosis.
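The group comparisons above are Pearson's χ² tests on 2×2 tables (screened versus not screened, per group). A self-contained sketch with hypothetical counts — the abstract reports only percentages, so the 1,000-per-group counts below are our assumption:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic, without continuity correction,
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 1,000 patients per group, screening rates 81.3% vs 28.4%.
stat = chi_square_2x2(813, 187, 284, 716)
print(round(stat, 1))  # 565.0 -- far beyond the 1-df critical value of 10.83 for p < 0.001
```

The magnitudes reported in the abstract (e.g. χ2 = 1430.918) differ because they depend on the actual group sizes, which were much larger than this toy example.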
Affiliation(s)
- Zhaohu Hao
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Shanshan Cui
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Yanjuan Zhu
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Hailin Shao
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, Tianjin, China
- Xiao Huang
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin, China
- Xia Jiang
- Department of Endocrinology, Tianjin First Central Hospital, The First Center Clinical College of Tianjin Medical University, Tianjin, China
- Rong Xu
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, Tianjin, China
- Baocheng Chang
- NHC Key Laboratory of Hormones and Development (Tianjin Medical University), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Medical University Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin 300134, China
- Huanming Li
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, The 4th Central Hospital Affiliated to Nankai University, The 4th Center Clinical College of Tianjin Medical University, No. 1 Zhongshan Road, Tianjin 300140, China
46
Tseng VS, Chen CL, Liang CM, Tai MC, Liu JT, Wu PY, Deng MS, Lee YW, Huang TY, Chen YH. Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy. Transl Vis Sci Technol 2020; 9:41. [PMID: 32855845] [PMCID: PMC7424907] [DOI: 10.1167/tvst.9.2.41] [Received: 12/09/2019] [Accepted: 05/28/2020] [Indexed: 01/27/2023]
Abstract
Purpose To improve disease severity classification from fundus images using a hybrid architecture with symptom awareness for diabetic retinopathy (DR). Methods We used 26,699 fundus images of 17,834 diabetic patients from three Taiwanese hospitals, collected from 2007 to 2018, for DR severity classification. Thirty-seven ophthalmologists verified the images using lesion annotation and severity classification as the ground truth. Two deep learning fusion architectures were proposed: late fusion, which combines lesion and severity classification models in parallel using a postprocessing procedure, and two-stage early fusion, which combines lesion detection and classification models sequentially and mimics the decision-making process of ophthalmologists. Messidor-2, with 1748 images, was used to evaluate and benchmark the performance of the architecture. The primary evaluation metrics were classification accuracy, weighted κ statistic, and area under the receiver operating characteristic curve (AUC). Results For hospital data, a hybrid architecture achieved a good detection rate, with accuracy and weighted κ of 84.29% and 84.01%, respectively, for five-class DR grading. It also classified images of early-stage DR more accurately than conventional algorithms. The Messidor-2 model achieved an AUC of 97.09% in referable DR detection, compared with AUCs of 85% to 99% for state-of-the-art algorithms that learned from larger databases. Conclusions Our hybrid architectures strengthened and extracted characteristics from DR images while improving the performance of DR grading, thereby increasing the robustness and confidence of the architectures for general use. Translational Relevance The proposed fusion architectures can enable faster and more accurate diagnosis of various DR pathologies than current manual clinical practice.
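The "late fusion" idea above — a lesion detector and a severity grader run in parallel, reconciled in postprocessing — can be illustrated with a toy rule. The rule and names below are our assumptions for illustration, not the paper's exact procedure:

```python
def late_fusion_grade(severity_probs, lesions_present):
    """Toy postprocessing rule: take the grader's argmax severity (0-4),
    but never report 'no DR' (grade 0) when the lesion model found lesions."""
    grade = max(range(len(severity_probs)), key=lambda g: severity_probs[g])
    if grade == 0 and lesions_present:
        grade = 1  # visible lesions imply at least mild DR
    return grade

probs = [0.6, 0.3, 0.05, 0.03, 0.02]  # hypothetical grader output
print(late_fusion_grade(probs, lesions_present=True))   # 1
print(late_fusion_grade(probs, lesions_present=False))  # 0
```

The appeal of such rules is that, like an ophthalmologist, the combined system cannot call an image lesion-positive yet disease-free.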
Affiliation(s)
- Vincent S Tseng
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan; Institute of Data Science and Engineering, National Chiao Tung University, Hsinchu, Taiwan
- Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Chang-Min Liang
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Ming-Cheng Tai
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jung-Tzu Liu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Po-Yi Wu
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ming-Shan Deng
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ya-Wen Lee
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Teng-Yi Huang
- Computational Intelligence Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
47
Accelerating ophthalmic artificial intelligence research: the role of an open access data repository. Curr Opin Ophthalmol 2020; 31:337-350. [PMID: 32740059] [DOI: 10.1097/icu.0000000000000678] [Indexed: 12/30/2022]
Abstract
PURPOSE OF REVIEW Artificial intelligence has already provided multiple clinically relevant applications in ophthalmology. Yet, without robust and streamlined implementation guidelines, the explosion of nonstandardized reporting renders even high-performing algorithms useless. The development of protocols and checklists will accelerate the translation of research publications into impact on patient care. RECENT FINDINGS Beyond technological scepticism, we lack uniformity in analysing algorithmic performance, generalizability, and benchmarking impacts across clinical settings. No regulatory guardrails have been set to minimize bias or optimize interpretability; no consensus clinical acceptability thresholds or systematized postdeployment monitoring have been established. Moreover, stakeholders with misaligned incentives deepen the complexity of the landscape, especially when it comes to the data integration and harmonization required to advance the field. Therefore, despite increasing algorithmic accuracy and commoditization, the infamous 'implementation gap' persists. Open clinical data repositories have been shown to rapidly accelerate research, minimize redundancies, and disseminate the expertise and knowledge required to overcome existing barriers. Drawing upon the longstanding success of existing governance frameworks and robust data use and sharing agreements, the ophthalmic community has a tremendous opportunity to usher artificial intelligence into medicine. By collaboratively building a powerful resource of open, anonymized multimodal ophthalmic data, the next generation of clinicians can advance data-driven eye care in unprecedented ways. SUMMARY This piece demonstrates that, with readily accessible data, immense progress can be achieved both clinically and methodologically to realize artificial intelligence's impact on clinical care. Exponentially progressive network effects can be achieved by consolidating, curating, and distributing data amongst both clinicians and data scientists.
48
Artificial Neural Networks Model for Predicting Type 2 Diabetes Mellitus Based on VDR Gene FokI Polymorphism, Lipid Profile and Demographic Data. Biology 2020; 9:biology9080222. [PMID: 32823649] [PMCID: PMC7465516] [DOI: 10.3390/biology9080222] [Received: 07/12/2020] [Revised: 08/04/2020] [Accepted: 08/10/2020] [Indexed: 01/06/2023]
Abstract
Type 2 diabetes mellitus (T2DM) is a multifactorial disease associated with many genetic polymorphisms; among them is the FokI polymorphism in the vitamin D receptor (VDR) gene. In this case-control study, samples from 82 T2DM patients and 82 healthy controls were examined to investigate the association of the FokI polymorphism and lipid profile with T2DM in the Jordanian population. DNA was extracted from blood and genotyped for the FokI polymorphism by polymerase chain reaction (PCR) and DNA sequencing. Lipid profile and fasting blood sugar were also measured. There were significant differences in high-density lipoprotein (HDL) cholesterol and triglyceride levels between T2DM and control samples. Frequencies of the FokI polymorphism (CC, CT and TT) were determined in T2DM and control samples and were not significantly different. Furthermore, there was no significant association between the FokI polymorphism and T2DM or lipid profile. A feed-forward neural network (FNN) was used as a computational platform to predict diabetes status based on the FokI polymorphism, lipid profile, gender and age. The accuracy of prediction reached 88% when all parameters were included, 81% when the FokI polymorphism was excluded, and 72% when only lipids were included. This is the first study investigating the association of the VDR gene FokI polymorphism with T2DM in the Jordanian population, and it found no significant association. Diabetes was predicted with high accuracy from medical data using an FNN. This highlights the great value of incorporating neural network tools into large medical databases and the ability to predict patient susceptibility to diabetes.
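A feed-forward network of the kind described reduces to a few matrix operations. A minimal forward-pass sketch; the layer sizes, feature encoding, and random weights are illustrative assumptions, not the study's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: FokI genotype (0/1/2), four lipid values, sex, age -> 7 features.
X = rng.normal(size=(5, 7))                      # 5 hypothetical patients

W1, b1 = rng.normal(size=(7, 10)), np.zeros(10)  # hidden-layer weights and biases
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)   # output-layer weights and biases

hidden = np.tanh(X @ W1 + b1)                    # hidden activations
p_t2dm = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid output: predicted P(T2DM)

print(p_t2dm.shape)  # (5, 1); each entry is a probability in (0, 1)
```

In practice the weights would be fitted to the case-control data (e.g. by gradient descent on cross-entropy loss), and accuracy would be reported on held-out samples, as in the abstract's 88%/81%/72% comparison of feature sets.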
49
Musacchio N, Giancaterini A, Guaita G, Ozzello A, Pellegrini MA, Ponzani P, Russo GT, Zilich R, de Micheli A. Artificial Intelligence and Big Data in Diabetes Care: A Position Statement of the Italian Association of Medical Diabetologists. J Med Internet Res 2020; 22:e16922. [PMID: 32568088] [PMCID: PMC7338925] [DOI: 10.2196/16922] [Received: 11/05/2019] [Revised: 03/09/2020] [Accepted: 04/12/2020] [Indexed: 12/24/2022]
Abstract
Since the last decade, most of our daily activities have become digital. Digital health takes into account the ever-increasing synergy between advanced medical technologies, innovation, and digital communication. Thanks to machine learning, we are not limited anymore to a descriptive analysis of the data, as we can obtain greater value by identifying and predicting patterns resulting from inductive reasoning. Machine learning software programs that disclose the reasoning behind a prediction allow for “what-if” models by which it is possible to understand if and how, by changing certain factors, one may improve the outcomes, thereby identifying the optimal behavior. Currently, diabetes care is facing several challenges: the decreasing number of diabetologists, the increasing number of patients, the reduced time allowed for medical visits, the growing complexity of the disease both from the standpoints of clinical and patient care, the difficulty of achieving the relevant clinical targets, the growing burden of disease management for both the health care professional and the patient, and the health care accessibility and sustainability. In this context, new digital technologies and the use of artificial intelligence are certainly a great opportunity. Herein, we report the results of a careful analysis of the current literature and represent the vision of the Italian Association of Medical Diabetologists (AMD) on this controversial topic that, if well used, may be the key for a great scientific innovation. AMD believes that the use of artificial intelligence will enable the conversion of data (descriptive) into knowledge of the factors that “affect” the behavior and correlations (predictive), thereby identifying the key aspects that may establish an improvement of the expected results (prescriptive). 
Artificial intelligence can therefore become a tool of great technical support to help diabetologists take full responsibility for the individual patient, thereby assuring customized and precise medicine. This, in turn, will allow comprehensive therapies to be built in accordance with the evidence criteria that should always be the basis for any therapeutic choice.
Affiliation(s)
- Annalisa Giancaterini
- Diabetology Service, Muggiò Polyambulatory, Azienda Socio Sanitaria Territoriale, Monza, Italy
- Giacomo Guaita
- Diabetology, Endocrinology and Metabolic Diseases Service, Azienda Tutela Salute Sardegna-Azienda Socio Sanitaria Locale, Carbonia, Italy
- Alessandro Ozzello
- Departmental Structure of Endocrine Diseases and Diabetology, Azienda Sanitaria Locale TO3, Pinerolo, Italy
- Maria A Pellegrini
- Italian Association of Diabetologists, Rome, Italy; New Coram Limited Liability Company, Udine, Italy
- Paola Ponzani
- Operative Unit of Diabetology, La Colletta Hospital, Azienda Sanitaria Locale 3, Genova, Italy
- Giuseppina T Russo
- Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Alberto de Micheli
- Associazione dei Cavalieri Italiani del Sovrano Militare Ordine di Malta, Genova, Italy
50
González‐Gonzalo C, Sánchez‐Gutiérrez V, Hernández‐Martínez P, Contreras I, Lechanteur YT, Domanian A, van Ginneken B, Sánchez CI. Evaluation of a deep learning system for the joint automated detection of diabetic retinopathy and age-related macular degeneration. Acta Ophthalmol 2020; 98:368-377. [PMID: 31773912] [PMCID: PMC7318689] [DOI: 10.1111/aos.14306] [Received: 04/09/2019] [Revised: 07/29/2019] [Accepted: 10/31/2019] [Indexed: 01/14/2023]
Abstract
PURPOSE To validate the performance of a commercially available, CE-certified deep learning (DL) system, RetCAD v.1.3.0 (Thirona, Nijmegen, The Netherlands), for the joint automatic detection of diabetic retinopathy (DR) and age-related macular degeneration (AMD) in colour fundus (CF) images on a dataset with mixed presence of eye diseases. METHODS Evaluation of joint detection of referable DR and AMD was performed on a DR-AMD dataset with 600 images acquired during routine clinical practice, containing referable and non-referable cases of both diseases. Each image was graded for DR and AMD by an experienced ophthalmologist to establish the reference standard (RS), and by four independent observers for comparison with human performance. Validation was further assessed on Messidor (1200 images) for individual identification of referable DR, and on the Age-Related Eye Disease Study (AREDS) dataset (133 821 images) for referable AMD, against the corresponding RS. RESULTS Regarding joint validation on the DR-AMD dataset, the system achieved an area under the ROC curve (AUC) of 95.1% for detection of referable DR (SE = 90.1%, SP = 90.6%). For referable AMD, the AUC was 94.9% (SE = 91.8%, SP = 87.5%). Average human performance for DR was SE = 61.5% and SP = 97.8%; for AMD, SE = 76.5% and SP = 96.1%. Regarding detection of referable DR in Messidor, AUC was 97.5% (SE = 92.0%, SP = 92.1%); for referable AMD in AREDS, AUC was 92.7% (SE = 85.8%, SP = 86.0%). CONCLUSION The validated system performs comparably to human experts at simultaneous detection of DR and AMD. This shows that DL systems can facilitate access to joint screening of eye diseases and become a quick and reliable support for ophthalmological experts.
Affiliation(s)
- Cristina González‐Gonzalo
- A‐eye Research Group, Radboud University Medical Center, Nijmegen, The Netherlands; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Verónica Sánchez‐Gutiérrez
- Department of Ophthalmology, University Hospital Ramón y Cajal, Ramón y Cajal Health Research Institute (IRYCIS), Madrid, Spain
- Paula Hernández‐Martínez
- Department of Ophthalmology, University Hospital Ramón y Cajal, Ramón y Cajal Health Research Institute (IRYCIS), Madrid, Spain
- Inés Contreras
- Department of Ophthalmology, University Hospital Ramón y Cajal, Ramón y Cajal Health Research Institute (IRYCIS), Madrid, Spain; Clínica Rementería, Madrid, Spain
- Yara T. Lechanteur
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Artin Domanian
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I. Sánchez
- A‐eye Research Group, Radboud University Medical Center, Nijmegen, The Netherlands; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands