1. Ying JN, Li H, Zhang YY, Li WD, Yi QY. Application and progress of artificial intelligence technology in the segmentation of hyperreflective foci in OCT images for ophthalmic disease research. Int J Ophthalmol 2024; 17:1138-1143. PMID: 38895690; PMCID: PMC11144766; DOI: 10.18240/ijo.2024.06.20
Abstract
With the advancement of retinal imaging, hyperreflective foci (HRF) on optical coherence tomography (OCT) images have gained significant attention as potential biomarkers of retinal neuroinflammation. However, HRF pose challenges in localization and quantification, and their analysis requires substantial time and resources. In recent years, progress in artificial intelligence (AI) has provided powerful tools for the analysis of such biomarkers. AI systems use machine learning (ML), deep learning (DL), and related techniques to precisely characterize changes in biomarkers during disease progression and to facilitate quantitative assessment. Applied to ophthalmic images, AI has significant implications for early screening, diagnostic grading, treatment efficacy evaluation, treatment recommendations, and prognosis estimation in common ophthalmic diseases. Moreover, it can reduce the healthcare system's reliance on human labor, which has the potential to simplify and expedite clinical trials, enhance the reliability and professionalism of disease management, and improve the prediction of adverse events. This article offers a comprehensive review of the application of AI, in combination with HRF on OCT images, to ophthalmic diseases including age-related macular degeneration (AMD), diabetic macular edema (DME), retinal vein occlusion (RVO), and other retinal diseases, and presents prospects for its utilization.
Affiliation(s)
- Jia-Ning Ying
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315042, Zhejiang Province, China
- Health Science Center, Ningbo University, Ningbo 315211, Zhejiang Province, China
- Hu Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315042, Zhejiang Province, China
- Health Science Center, Ningbo University, Ningbo 315211, Zhejiang Province, China
- Yan-Yan Zhang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315042, Zhejiang Province, China
- Wen-Die Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315042, Zhejiang Province, China
- Quan-Yong Yi
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315042, Zhejiang Province, China
- Health Science Center, Ningbo University, Ningbo 315211, Zhejiang Province, China
2. Chen T, Bai Y, Mao H, Liu S, Xu K, Xiong Z, Ma S, Yang F, Zhao Y. Cross-modality transfer learning with knowledge infusion for diabetic retinopathy grading. Front Med (Lausanne) 2024; 11:1400137. PMID: 38808141; PMCID: PMC11130363; DOI: 10.3389/fmed.2024.1400137
Abstract
Background Ultra-wide-field (UWF) fundus photography is an emerging retinal imaging technique offering a broader field of view, enhancing its utility in screening and diagnosing various eye diseases, notably diabetic retinopathy (DR). However, computer-aided diagnosis of DR from UWF images faces two major challenges. First, labeled UWF data are scarce, making diagnostic models difficult to train given the high cost of manual annotation of medical images. Second, the performance of existing models suffers from the absence of prior knowledge to guide the learning process. Purpose By leveraging extensively annotated datasets in the field, namely large-scale, high-quality color fundus image datasets annotated at the image or pixel level, our objective is to transfer knowledge from these datasets to the target domain through unsupervised domain adaptation. Methods Our approach presents a robust model for assessing the severity of DR by leveraging unsupervised lesion-aware domain adaptation in UWF images. Furthermore, to harness the wealth of detailed annotations in publicly available color fundus image datasets, we integrate an adversarial lesion map generator. This generator supplements the grading model with auxiliary lesion information, drawing inspiration from the clinical practice of evaluating DR severity by identifying and quantifying associated lesions. Results We conducted both quantitative and qualitative evaluations of the proposed method. Compared with six representative DR grading methods, our approach achieved an accuracy (ACC) of 68.18% and a precision of 67.43%. Additionally, extensive ablation studies validated the effectiveness of each component of the proposed method.
Conclusion Our method not only improves the accuracy of DR grading but also enhances the interpretability of the results, providing clinicians with a reliable DR grading scheme.
Affiliation(s)
- Tao Chen
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yanmiao Bai
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Haiting Mao
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shouyue Liu
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Keyi Xu
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Zhouwei Xiong
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shaodong Ma
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Fang Yang
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao
- Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
3. Chobisa D, Muniyandi A, Sishtla K, Corson TW, Yeo Y. Long-Acting Microparticle Formulation of Griseofulvin for Ocular Neovascularization Therapy. Small 2024; 20:e2306479. PMID: 37940612; PMCID: PMC10939919; DOI: 10.1002/smll.202306479
Abstract
Neovascular age-related macular degeneration (nAMD) is a leading cause of vision loss in older adults. nAMD is treated with biologics targeting vascular endothelial growth factor; however, many patients do not respond to the current therapy. Here, a small molecule drug, griseofulvin (GRF), is used due to its inhibitory effect on ferrochelatase, an enzyme important for choroidal neovascularization (CNV). For local and sustained delivery to the eyes, GRF is encapsulated in microparticles based on poly(lactide-co-glycolide) (PLGA), a biodegradable polymer with a track record in long-acting formulations. The GRF-loaded PLGA microparticles (GRF MPs) are designed for intravitreal application, considering constraints in size, drug loading content, and drug release kinetics. Magnesium hydroxide is co-encapsulated to enable sustained GRF release over >30 days in phosphate-buffered saline with Tween 80. Incubated in cell culture medium over 30 days, the GRF MPs and the released drug show antiangiogenic effects in retinal endothelial cells. A single intravitreal injection of MPs containing 0.18 µg GRF releases the drug over 6 weeks in vivo to inhibit the progression of laser-induced CNV in mice with no abnormality in the fundus and retina. Intravitreally administered GRF MPs prove effective in preventing CNV, providing proof-of-concept toward a novel, cost-effective nAMD therapy.
Affiliation(s)
- Dhawal Chobisa
- Department of Industrial and Molecular Pharmaceutics, Purdue University, 575 West Stadium Avenue, West Lafayette, IN, 47907, USA
- Integrated Product Development Organization, Innovation Plaza Dr. Reddy's Laboratories, Hyderabad, 500050, India
- Anbukkarasi Muniyandi
- Departments of Pharmacology & Toxicology and Ophthalmology, Indiana University School of Medicine, 1160 West Michigan Street, Indianapolis, IN, 46202, USA
- Kamakshi Sishtla
- Departments of Pharmacology & Toxicology and Ophthalmology, Indiana University School of Medicine, 1160 West Michigan Street, Indianapolis, IN, 46202, USA
- Timothy W Corson
- Departments of Pharmacology & Toxicology and Ophthalmology, Indiana University School of Medicine, 1160 West Michigan Street, Indianapolis, IN, 46202, USA
- Yoon Yeo
- Department of Industrial and Molecular Pharmaceutics, Purdue University, 575 West Stadium Avenue, West Lafayette, IN, 47907, USA
- Weldon School of Biomedical Engineering, Purdue University, 206 S Martin Jischke Dr., West Lafayette, IN, 47907, USA
4. Chen Y, Zhao T, Han M, Chen Y. Gigantol protects retinal pigment epithelial cells against high glucose-induced apoptosis, oxidative stress and inflammation by inhibiting MTDH-mediated NF-κB signaling pathway. Immunopharmacol Immunotoxicol 2024; 46:33-39. PMID: 37681978; DOI: 10.1080/08923973.2023.2247545
Abstract
OBJECTIVE As a frequent complication of diabetes mellitus (DM), diabetic retinopathy (DR) is now one of the major causes of blindness. Recent reports have shown that retinal pigment epithelial cell (RPEC) damage plays an essential part in DR development and progression. This work explored the potential effects of Gigantol on high glucose (HG)-stimulated RPEC damage and the underlying mechanisms. METHODS Cell viability, cell damage, and apoptosis were evaluated by CCK-8, lactate dehydrogenase (LDH), and flow cytometry assays. The levels of oxidative stress biomarkers and pro-inflammatory cytokines were assessed using corresponding commercial kits and ELISA. Additionally, the levels of MTDH and NF-κB signaling pathway-related proteins were detected by western blotting. RESULTS Gigantol dose-dependently enhanced cell viability and decreased apoptosis in HG-challenged ARPE-19 cells. Gigantol also notably relieved oxidative stress and inflammatory responses in ARPE-19 cells under HG conditions, and dose-dependently suppressed MTDH expression. In addition, MTDH restoration partially counteracted the protective effects of Gigantol on ARPE-19 cells subjected to HG treatment. Mechanistically, Gigantol inactivated the NF-κB signaling pathway, an effect partly reversed by MTDH overexpression. CONCLUSION Our findings suggest that Gigantol protects against HG-induced RPEC damage by inactivating NF-κB signaling via MTDH inhibition, making it a potential therapeutic drug for DR treatment.
Affiliation(s)
- You Chen
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Tong Zhao
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Mengyu Han
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
- Yi Chen
- Department of Ophthalmology, China-Japan Friendship Hospital, Beijing, China
5. Rajesh AE, Davidson OQ, Lee CS, Lee AY. Artificial Intelligence and Diabetic Retinopathy: AI Framework, Prospective Studies, Head-to-head Validation, and Cost-effectiveness. Diabetes Care 2023; 46:1728-1739. PMID: 37729502; PMCID: PMC10516248; DOI: 10.2337/dci23-0032
Abstract
Current guidelines recommend that individuals with diabetes receive yearly eye exams for detection of referable diabetic retinopathy (DR), one of the leading causes of new-onset blindness. To address the immense screening burden, artificial intelligence (AI) algorithms have been developed to autonomously screen for DR from fundus photography without human input. Over the last 10 years, many AI algorithms have achieved good sensitivity and specificity (>85%) for detection of referable DR compared with human graders; however, many questions remain. In this narrative review of AI in DR screening, we discuss key concepts in AI algorithm development as background for understanding the algorithms. We present the AI algorithms that have been prospectively validated against human graders and demonstrate the variability of reference standards and cohort demographics. We review the limited head-to-head validation studies in which investigators directly compare the available algorithms. Next, we discuss the literature on cost-effectiveness, equity and bias, and medicolegal considerations, all of which play a role in the implementation of these AI algorithms in clinical practice. Lastly, we highlight ongoing efforts to bridge gaps in AI model data sets to pursue equitable development and delivery.
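Sensitivity and specificity are properties of an algorithm, but the predictive values a screening program actually observes also depend on disease prevalence. A minimal sketch via Bayes' rule, using illustrative numbers only (the 90% sensitivity, 85% specificity, and 10% prevalence below are assumptions for the example, not figures from this review):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) of a screening test via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives per screened person
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical algorithm at the ">85%" threshold, 10% referable-DR prevalence.
ppv, npv = predictive_values(0.90, 0.85, 0.10)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # → PPV = 40.0%, NPV = 98.7%
```

The asymmetry (modest PPV, very high NPV) is typical for screening at low prevalence, which is one reason referral-rate and cost-effectiveness analyses matter alongside raw test characteristics.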
Affiliation(s)
- Anand E. Rajesh
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Oliver Q. Davidson
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Cecilia S. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
- Aaron Y. Lee
- Department of Ophthalmology, University of Washington, Seattle, WA
- Roger H. and Angie Karalis Johnson Retina Center, Seattle, WA
6. Fernández-Carneado J, Almazán-Moga A, Ramírez-Lamelas DT, Cuscó C, Alonso de la Fuente JI, Pastor JC, López Gálvez MI, Ponsati B. Quantification of Microvascular Lesions in the Central Retinal Field: Could It Predict the Severity of Diabetic Retinopathy? J Clin Med 2023; 12:3948. PMID: 37373641; DOI: 10.3390/jcm12123948
Abstract
Diabetic retinopathy (DR) is a neurodegenerative disease characterized by microcirculatory lesions. Among them, microaneurysms (MAs) are the first observable hallmark of early ophthalmological change. The present work studies whether quantification of MAs, hemorrhages (Hmas), and hard exudates (HEs) in the central retinal field has predictive value for DR severity. These retinal lesions were quantified in the single central field (NM-1) of 160 retinographies of diabetic patients from the IOBA reading center. Samples covered different severity levels and excluded proliferative forms: no DR (n = 30), mild non-proliferative (n = 30), moderate (n = 50), and severe (n = 50). Quantification of MAs, Hmas, and HEs revealed an increasing trend as DR severity progresses. Differences between severity levels were statistically significant, suggesting that analysis of the central field provides valuable information on severity level and could be used as a clinical tool for DR grading in routine eye care. Although further validation is needed, counting microvascular lesions in a single retinal field can be proposed as a rapid screening system to classify DR patients into the severity stages of the international classification.
Affiliation(s)
- Jimena Fernández-Carneado
- BCN Peptides, S.A., Polígon Industrial Els Vinyets-Els Fogars II, 08777 Sant Quintí de Mediona, Barcelona, Spain
- Ana Almazán-Moga
- BCN Peptides, S.A., Polígon Industrial Els Vinyets-Els Fogars II, 08777 Sant Quintí de Mediona, Barcelona, Spain
- Dolores T Ramírez-Lamelas
- BCN Peptides, S.A., Polígon Industrial Els Vinyets-Els Fogars II, 08777 Sant Quintí de Mediona, Barcelona, Spain
- Cristina Cuscó
- BCN Peptides, S.A., Polígon Industrial Els Vinyets-Els Fogars II, 08777 Sant Quintí de Mediona, Barcelona, Spain
- J Carlos Pastor
- IOBA Reading Center, University of Valladolid, Paseo de Belén, 17, 47011 Valladolid, Spain
- Berta Ponsati
- BCN Peptides, S.A., Polígon Industrial Els Vinyets-Els Fogars II, 08777 Sant Quintí de Mediona, Barcelona, Spain
7. de Oliveira JAE, Nakayama LF, Zago Ribeiro L, de Oliveira TVF, Choi SNJH, Neto EM, Cardoso VS, Dib SA, Melo GB, Regatieri CVS, Malerbi FK. Clinical validation of a smartphone-based retinal camera for diabetic retinopathy screening. Acta Diabetol 2023. PMID: 37149834; DOI: 10.1007/s00592-023-02105-z
Abstract
AIMS This study compares the performance of a handheld fundus camera (Eyer) and standard tabletop fundus cameras (Visucam 500, Visucam 540, and Canon CR-2) for diabetic retinopathy and diabetic macular edema screening. METHODS This was a multicenter, cross-sectional study that included images from 327 individuals with diabetes. The participants underwent pharmacological mydriasis and two-field fundus photography (macula- and optic-disk-centered) with both strategies. All images were acquired by trained healthcare professionals, de-identified, and graded independently by two masked ophthalmologists, with a third senior ophthalmologist adjudicating discordant cases. The International Classification of Diabetic Retinopathy was used for grading, and demographic data, diabetic retinopathy classification, artifacts, and image quality were compared between devices. The senior ophthalmologist's adjudicated tabletop label was used as the ground truth for comparative analysis. Univariate and stepwise multivariate logistic regression was performed to determine the relationship of each independent factor with referable diabetic retinopathy. RESULTS The mean age of participants was 57.03 years (SD 16.82, range 9-90), and the mean duration of diabetes was 16.35 years (SD 9.69, range 1-60). Age (P = .005), diabetes duration (P = .004), body mass index (P = .005), and hypertension (P < .001) differed significantly between referable and non-referable patients. Multivariate logistic regression analysis revealed a positive association of male sex (OR 1.687) and hypertension (OR 3.603) with referable diabetic retinopathy. The agreement between devices for diabetic retinopathy classification was 73.18%, with a weighted kappa of 0.808 (almost perfect). The agreement for macular edema was 88.48%, with a kappa of 0.809 (almost perfect). For referable diabetic retinopathy, the agreement was 85.88%, with a kappa of 0.716 (substantial), sensitivity of 0.906, and specificity of 0.808. As for image quality, 84.02% of tabletop fundus camera images and 85.31% of Eyer images were gradable. CONCLUSIONS Our study shows that the handheld Eyer retinal camera performed comparably to standard tabletop fundus cameras for diabetic retinopathy and macular edema screening. The high agreement with tabletop devices, portability, and low cost make the handheld retinal camera a promising tool for increasing the coverage of diabetic retinopathy screening programs, particularly in low-income countries. Early diagnosis and treatment have the potential to prevent avoidable blindness, and the present validation study provides evidence supporting its contribution to early diagnosis and treatment of diabetic retinopathy.
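Agreement figures like the kappas above are standard inter-rater statistics. As a sketch of how an unweighted Cohen's kappa falls out of a confusion matrix (the 2x2 referable/non-referable counts below are made up for illustration, not the study's data):

```python
def cohens_kappa(confusion):
    """Unweighted Cohen's kappa for a square inter-rater confusion matrix."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement from the row and column marginals.
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical counts: rater A rows, rater B columns.
table = [[80, 10],
         [5, 105]]
print(round(cohens_kappa(table), 3))
```

Kappa discounts the agreement expected by chance, which is why a 73.18% raw agreement can still correspond to a high weighted kappa when most disagreements are between adjacent grades.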
Affiliation(s)
- Luis Filipe Nakayama
- Department of Ophthalmology, São Paulo Federal University, São Paulo, SP, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Lucas Zago Ribeiro
- Department of Ophthalmology, São Paulo Federal University, São Paulo, SP, Brazil
- Sergio Atala Dib
- Division of Endocrinology and Metabolism, Sao Paulo Federal University, São Paulo, SP, Brazil
8. Yang Y, Xu F, Chen J, Tao C, Li Y, Chen Q, Tang S, Lee HK, Shen W. Artificial intelligence-assisted smartphone-based sensing for bioanalytical applications: A review. Biosens Bioelectron 2023; 229:115233. PMID: 36965381; DOI: 10.1016/j.bios.2023.115233
Abstract
Artificial intelligence (AI) has received great attention since the concept was proposed and has developed rapidly in recent years, with applications in many fields. Meanwhile, newer iterations of smartphone hardware, with excellent data processing capabilities, have leveraged AI. Given the desirability of portable detection, researchers have been investigating intelligent analysis that combines smartphones with AI algorithms, and various applications of AI-algorithm-based smartphone detection and analysis have been developed. In this review, we give an overview of this field, with a particular focus on bioanalytical detection applications. The applications are presented in terms of hardware design, software algorithms, and specific application areas. We also discuss the existing limitations of AI-based smartphone detection and analysis approaches, and their future prospects. The take-home message of our review is that the application of AI to detection analysis is restricted by the limitations of smartphone hardware and by AI model building for detection targets with insufficient data. Nevertheless, while bioanalytical diagnostics and health monitoring have so far set the pace for AI-based smartphone applications, the technology should make greater inroads into other fields, where the ordinary person is likely to play a greater participatory role.
Affiliation(s)
- Yizhuo Yang
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Fang Xu
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Jisen Chen
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Chunxu Tao
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Yunxin Li
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Quansheng Chen
- College of Ocean Food and Biological Engineering, Jimei University, Xiamen, 361021, Fujian Province, China
- Sheng Tang
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Hian Kee Lee
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
- Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore, 117543, Singapore
- Wei Shen
- School of Environmental and Chemical Engineering, Jiangsu University of Science and Technology, Zhenjiang, 212003, Jiangsu Province, China
9. Ohta Y, Tateishi E, Morita Y, Nishii T, Kotoku A, Horinouchi H, Fukuyama M, Fukuda T. Optimization of null point in Look-Locker images for myocardial late gadolinium enhancement imaging using deep learning and a smartphone. Eur Radiol 2023. PMID: 36809433; DOI: 10.1007/s00330-023-09465-8
Abstract
OBJECTIVES To determine the optimal inversion time (TI) from Look-Locker scout images using a convolutional neural network (CNN) and to investigate the feasibility of correcting TI using a smartphone. METHODS In this retrospective study, TI-scout images were extracted using a Look-Locker approach from 1113 consecutive cardiac MR examinations performed between 2017 and 2020 with myocardial late gadolinium enhancement (LGE). Reference TI null points were independently determined visually by an experienced radiologist and an experienced cardiologist, and quantitatively measured. A CNN was developed to evaluate the deviation of TI from the null point and then implemented in PC and smartphone applications. Images on 4K or 3-megapixel monitors were captured with a smartphone, and CNN performance on each monitor was determined. Optimal, undercorrection, and overcorrection rates using deep learning on the PC and smartphone were calculated. For patient-level analysis, TI category differences pre- and post-correction were evaluated using the TI null point used in LGE imaging. RESULTS For the PC, 96.4% (722/749) of images were classified as optimal, with undercorrection and overcorrection rates of 1.2% (9/749) and 2.4% (18/749), respectively. For 4K images, 93.5% (700/749) were classified as optimal, with undercorrection and overcorrection rates of 3.9% (29/749) and 2.7% (20/749). For 3-megapixel images, 89.6% (671/749) were classified as optimal, with undercorrection and overcorrection rates of 3.3% (25/749) and 7.0% (53/749). On patient-based evaluation, the proportion of subjects within the optimal range increased from 72.0% (77/107) to 91.6% (98/107) using the CNN. CONCLUSIONS Optimizing TI on Look-Locker images was feasible using deep learning and a smartphone. KEY POINTS • A deep learning model corrected TI-scout images to within the optimal null point for LGE imaging. • By capturing the TI-scout image on the monitor with a smartphone, the deviation of TI from the null point can be determined immediately. • Using this model, TI null points can be set to the same degree as by an experienced radiological technologist.
Affiliation(s)
- Yasutoshi Ohta
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Emi Tateishi
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Yoshiaki Morita
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Tatsuya Nishii
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Akiyuki Kotoku
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Hiroki Horinouchi
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Midori Fukuyama
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
- Tetsuya Fukuda
- Department of Radiology, National Cerebral and Cardiovascular Center, Suita City, Osaka, 564-8565, Japan
10. Tantawy NM, Sherif EM, Matter RM, Salah NY, Abozeid NEH, Atif HM. Assessment of fibroblast growth factor 21 in children with type 1 diabetes mellitus in relation to microvascular complications. Pediatr Endocrinol Diabetes Metab 2023; 29:64-74. PMID: 37728457; PMCID: PMC10411091; DOI: 10.5114/pedm.2022.121372
Abstract
INTRODUCTION Type 1 diabetes mellitus (DM1) represents a growing global health problem with significant morbidity. Fibroblast growth factor 21 (FGF21) is an adipokine, expressed predominantly in the liver, that plays an important role in metabolic regulation. AIM OF THE STUDY This study assesses FGF21 levels in children with DM1 in comparison to controls and correlates them with diabetes duration, glycated haemoglobin (HbA1c), and diabetic microvascular complications. MATERIAL AND METHODS Fifty children with DM1, aged between 5 and 16 years, were studied regarding diabetes duration, HbA1c, urinary albumin-creatinine ratio (UACR), fundus examination, and FGF21 level, and were compared to 50 healthy controls. RESULTS The median FGF21 of the studied children with DM1 was 150 pg/ml (range 50-350 pg/ml), versus 35 pg/ml (range 20-50 pg/ml) in controls. The FGF21 level was significantly higher in children with DM1 than in controls (p < 0.001). Moreover, it was significantly and positively correlated with diabetes duration, mean blood glucose level, and HbA1c (p < 0.001, p = 0.015, and p = 0.018, respectively). Interestingly, the FGF21 level was not significantly elevated in children with DM1 who had diabetic nephropathy or retinopathy (p = 0.122 and p = 0.298, respectively). CONCLUSIONS FGF21 is significantly higher among children with DM1 than in controls. However, its role in diabetic microvascular complications needs further assessment.
Affiliation(s)
- Nermien M. Tantawy
- Paediatrics Department, Paediatric and Adolescent Diabetes Unit, Faculty of Medicine, Ain Shams University, Cairo, Egypt
- Eman M. Sherif
- Paediatrics Department, Paediatric and Adolescent Diabetes Unit, Faculty of Medicine, Ain Shams University, Cairo, Egypt
- Randa M. Matter
- Paediatrics Department, Paediatric and Adolescent Diabetes Unit, Faculty of Medicine, Ain Shams University, Cairo, Egypt
- Nouran Y. Salah
- Paediatrics Department, Paediatric and Adolescent Diabetes Unit, Faculty of Medicine, Ain Shams University, Cairo, Egypt
- Heba M. Atif
- Clinical Pathology Department, Faculty of Medicine, Ain Shams University, Cairo, Egypt
Collapse
|
11
|
Mokhashi N, Grachevskaya J, Cheng L, Yu D, Lu X, Zhang Y, Henderer JD. A Comparison of Artificial Intelligence and Human Diabetic Retinal Image Interpretation in an Urban Health System. J Diabetes Sci Technol 2022; 16:1003-1007. [PMID: 33719599 PMCID: PMC9264425 DOI: 10.1177/1932296821999370] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
INTRODUCTION Artificial intelligence (AI) diabetic retinopathy (DR) software has the potential to decrease time spent by clinicians on image interpretation and expand the scope of DR screening. We performed a retrospective review to compare Eyenuk's EyeArt software (Woodland Hills, CA) to Temple Ophthalmology optometry grading using the International Classification of Diabetic Retinopathy scale. METHODS Two hundred and sixty consecutive diabetic patients from the Temple Faculty Practice Internal Medicine clinic underwent 2-field retinal imaging. Classifications of the images by the software and the optometrist were analyzed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and McNemar's test. Ungradable images were analyzed to identify relationships with HbA1c, age, and ethnicity. Disagreements and a sample of 20% of agreements were adjudicated by a retina specialist. RESULTS On patient-level comparison, sensitivity for the software was 100%, while specificity was 77.78%. PPV was 19.15%, and NPV was 100%. The 38 disagreements between the software and the optometrist occurred when the optometrist classified a patient's images as non-referable while the software classified them as referable. Of these disagreements, the retina specialist agreed with the optometrist 57.9% of the time (22/38). Of the agreements, the retina specialist agreed with both the program and the optometrist 96.7% of the time (28/29). There were significantly more ungradable photos in older patients (≥60) than in younger patients (<60) (p=0.003). CONCLUSIONS The AI program showed high sensitivity with acceptable specificity for a screening algorithm. The high NPV indicates that the software is unlikely to miss DR but may refer patients unnecessarily.
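The four screening metrics reported above follow directly from a 2x2 confusion matrix. The sketch below is illustrative only: the counts are hypothetical reconstructions chosen to mirror the reported pattern, not the study's raw data. It shows why a screen with zero false negatives reports 100% sensitivity and NPV alongside a modest PPV.

```python
# Hypothetical sketch: deriving the four screening metrics reported in the
# abstract from a 2x2 confusion matrix. The counts below are illustrative
# reconstructions, not the study's raw data.

def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV for a binary referral screen."""
    sensitivity = tp / (tp + fn)  # referable patients correctly flagged
    specificity = tn / (tn + fp)  # non-referable patients correctly passed
    ppv = tp / (tp + fp)          # flagged patients who truly need referral
    npv = tn / (tn + fn)          # passed patients who truly do not
    return sensitivity, specificity, ppv, npv

# With zero false negatives, sensitivity and NPV are both 100%, while a large
# false-positive count keeps PPV low -- the "safe but over-referring" pattern.
sens, spec, ppv, npv = screening_metrics(tp=9, fp=38, fn=0, tn=133)
print(f"sens={sens:.4f} spec={spec:.4f} ppv={ppv:.4f} npv={npv:.4f}")
# → sens=1.0000 spec=0.7778 ppv=0.1915 npv=1.0000
```

For a screening tool, this trade-off is usually acceptable: unnecessary referrals cost clinic time, whereas a missed case (a false negative) costs vision.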
Affiliation(s)
- Nikita Mokhashi
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Nikita Mokhashi, BA, Department of Ophthalmology, Temple Ophthalmology, Lewis Katz School of Medicine, 3401 North Broad Street, Philadelphia, PA 19140, USA.
- Julia Grachevskaya
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Lorrie Cheng
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Daohai Yu
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Xiaoning Lu
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Yi Zhang
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA
- Jeffrey D. Henderer
- Department of Ophthalmology, Lewis Katz School of Medicine, Philadelphia, PA, USA

12
Shenkut D, Bhagavatula V. Fundus GAN - GAN-based Fundus Image Synthesis for Training Retinal Image Classifiers. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:2185-2189. [PMID: 36086632 DOI: 10.1109/embc48229.2022.9871771] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Two major challenges in applying deep learning to develop a computer-aided diagnosis of fundus images are the lack of enough labeled data and legal issues with patient privacy. Various efforts are being made to increase the amount of data, either by augmenting training images or by synthesizing realistic-looking fundus images. However, augmentation is limited by the amount of available data, and it does not address the patient privacy concern. In this paper, we propose a Generative Adversarial Network-based (GAN-based) fundus image synthesis method (Fundus GAN) that generates synthetic training images to solve the above problems. Fundus GAN is an improved way of generating retinal images that follows a two-step generation process: first, a segmentation network is trained to extract the vessel tree; then, vessel-tree-to-fundus image-to-image translation is performed using unsupervised generative attention networks. Our results show that the proposed Fundus GAN outperforms state-of-the-art methods on different evaluation metrics. Our results also validate that the generated retinal images can be used to train retinal image classifiers for eye disease diagnosis. Clinical Relevance - Our proposed method, Fundus GAN, helps address the shortage of privacy-preserving training data in developing algorithms for automating image-based eye disease diagnosis. The proposed two-step GAN-based image synthesis can be used to improve the classification accuracy of retinal image classifiers without compromising patient privacy.
13
A computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid deep learning technique. Med Biol Eng Comput 2022; 60:2015-2038. [PMID: 35545738 PMCID: PMC9225981 DOI: 10.1007/s11517-022-02564-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Accepted: 03/25/2022] [Indexed: 12/23/2022]
Abstract
Diabetic retinopathy (DR) is a serious disease that can cause vision loss without warning, so it is essential to monitor DR progression continuously. Deep learning techniques have achieved great success in medical image analysis, and deep convolutional neural network (CNN) architectures are widely used in multi-label (ML) classification. They help in diagnosing normal retinas and the various DR grades: mild, moderate, and severe non-proliferative DR (NPDR) and proliferative DR (PDR). DR grades are defined by multiple DR lesions appearing simultaneously on color retinal fundus images. Many lesion types have features that are difficult to segment and distinguish using conventional, hand-crafted methods, so the practical solution is an effective CNN model. In this paper, we present a novel hybrid deep learning technique called E-DenseNet. We integrated the EyeNet and DenseNet models based on transfer learning, customizing the traditional EyeNet by inserting dense blocks and optimizing the resulting hybrid E-DenseNet model's hyperparameters. The proposed system based on the E-DenseNet model can accurately diagnose healthy retinas and different DR grades from various small and large ML color fundus images. We trained and tested our model on four different datasets published from 2006 to 2019. The proposed system achieved an average accuracy (ACC), sensitivity (SEN), specificity (SPE), Dice similarity coefficient (DSC), quadratic Kappa score (QKS), and calculation time (T) in minutes (m) of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], 0.883, and 3.5 m, respectively. The experiments show promising results compared with other systems.
14
Multiple Ocular Disease Diagnosis Using Fundus Images Based on Multi-Label Deep Learning Classification. ELECTRONICS 2022. [DOI: 10.3390/electronics11131966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Designing computer-aided diagnosis (CAD) systems that can automatically detect ocular diseases (ODs) has become an active research field in the health domain. Although the human eye might have more than one OD simultaneously, most existing systems are designed to detect specific eye diseases. Therefore, it is crucial to develop new CAD systems that can detect multiple ODs simultaneously. This paper presents a novel multi-label convolutional neural network (ML-CNN) system based on ML classification (MLC) to diagnose various ODs from color fundus images. The proposed ML-CNN-based system consists of three main phases: a preprocessing phase, which includes normalization and augmentation using several transformation processes; a modeling phase; and a prediction phase. The proposed ML-CNN consists of three convolution (CONV) layers and one max pooling (MP) layer, followed by two more CONV layers, then one MP and dropout (DO) layer. After that, a flatten layer is applied, followed by one fully connected (FC) layer, another DO layer, and finally one FC layer with 45 nodes. The system outputs the probabilities of all 45 diseases for each image. We validated the model using cross-validation (CV) and measured performance with five metrics: accuracy (ACC), recall, precision, Dice similarity coefficient (DSC), and area under the curve (AUC). The results are 94.3%, 80%, 91.5%, 99%, and 96.7%, respectively. Comparisons with existing built-in models, such as MobileNetV2, DenseNet201, SeResNext50, InceptionV3, and InceptionResNetV2, demonstrate the superiority of the proposed ML-CNN model.
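In multi-label classification, the final layer emits one independent probability per disease (typically via a sigmoid) rather than a single softmax class, so one fundus image can be flagged for several conditions at once. A minimal sketch of the prediction step; the label names and probabilities below are hypothetical, not from the paper:

```python
# Minimal sketch of multi-label prediction: each of the network's output
# nodes (45 in the paper) carries an independent probability, and every
# label at or above the threshold is reported. Values here are hypothetical.

def multilabel_predict(probs, threshold=0.5):
    """Return all disease labels whose predicted probability meets the threshold."""
    return [label for label, p in probs.items() if p >= threshold]

# Three of the 45 output nodes, for brevity: one image can carry two diagnoses.
probs = {"diabetic_retinopathy": 0.91, "glaucoma": 0.62, "cataract": 0.08}
print(multilabel_predict(probs))  # → ['diabetic_retinopathy', 'glaucoma']
```

This is what distinguishes the multi-label setting from ordinary single-class grading: the per-label decisions are independent, so the output is a set rather than a single category.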
15
Hao Z, Xu R, Huang X, Ren X, Li H, Shao H. Application and observation of artificial intelligence in clinical practice of fundus screening for diabetic retinopathy with non-mydriatic fundus photography: a retrospective observational study of T2DM patients in Tianjin, China. Ther Adv Chronic Dis 2022; 13:20406223221097335. [PMID: 35620186 PMCID: PMC9127849 DOI: 10.1177/20406223221097335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Accepted: 04/12/2022] [Indexed: 11/18/2022] Open
Abstract
Objective: To assess the consistency of a preliminary artificial intelligence (AI) report in the clinical practice of fundus screening for diabetic retinopathy (DR) using non-mydriatic fundus photography. Methods: Patients who underwent DR screening in the Metabolic Disease Management Center (MMC) of our hospital were selected as research participants. The degree of agreement between the AI preliminary report and the ophthalmic diagnosis was compared and analyzed, and the kappa value was calculated. Fundus fluorescein angiography (FFA) was performed in patients referred to the out-of-hospital ophthalmology department, and the consistency between fluorescein angiography and AI diagnosis was evaluated. Results: In total, 6146 patients (12,263 eyes) completed the non-mydriatic fundus examination. The positive DR screening rate was 24.3%. With moderate nonproliferative retinopathy as the cut-off point, the kappa coefficient was 0.75 (p < 0.001), the sensitivity was 0.973, and the precision was 0.642, as shown in the precision-recall curve. Fifty-nine patients referred for FFA were compared with the non-mydriatic AI diagnoses; the kappa coefficient was 0.53, and the coincidence rate was 66.9%. Conclusion: Non-mydriatic fundus examination combined with AI shows medium-high consistency with ophthalmologists in DR diagnosis, which is conducive to early DR screening. Combining diagnosis and treatment modes with the Internet can promote the development of telemedicine, alleviate the shortage of ophthalmology resources, and advance blindness prevention and treatment programs.
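The kappa coefficients above measure agreement between the AI report and the ophthalmologist beyond what chance alone would produce. A minimal, self-contained sketch of unweighted Cohen's kappa; the gradings shown are invented for illustration, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: observed agreement corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random,
    # given each rater's own label frequencies.
    expected = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented gradings: the two raters disagree on one of five eyes.
ai     = ["DR", "no DR", "no DR", "DR", "no DR"]
doctor = ["DR", "no DR", "DR",    "DR", "no DR"]
print(round(cohens_kappa(ai, doctor), 2))  # → 0.62
```

Kappa of 1.0 means perfect agreement and 0 means agreement no better than chance, which is why a value of 0.75 is commonly read as substantial and 0.53 as moderate.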
Affiliation(s)
- Zhaohu Hao
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, Tianjin, China
- Rong Xu
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, Tianjin, China
- Xiao Huang
- NHC Key Laboratory of Hormones and Development, Tianjin Key Laboratory of Metabolic Diseases, Chu Hsien-I Memorial Hospital & Tianjin Institute of Endocrinology, Tianjin Medical University, Tianjin, China
- Xinjun Ren
- Tianjin Key Laboratory of Retinal Functions and Diseases, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
- Huanming Li
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, Tianjin 300140, China
- Hailin Shao
- Department of Metabolic Disease Management Center, Tianjin 4th Central Hospital, Tianjin 300140, China

16
Handheld Fundus Camera for Diabetic Retinopathy Screening: A Comparison Study with Table-Top Fundus Camera in Real-Life Setting. J Clin Med 2022; 11:jcm11092352. [PMID: 35566478 PMCID: PMC9103652 DOI: 10.3390/jcm11092352] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 04/19/2022] [Accepted: 04/21/2022] [Indexed: 02/05/2023] Open
Abstract
The aim of the study was to validate the performance of the Optomed Aurora® handheld fundus camera in diabetic retinopathy (DR) screening. Patients who were affected by diabetes mellitus and referred to the local DR screening service underwent fundus photography using a standard table-top fundus camera and the Optomed Aurora® handheld fundus camera. All photos were taken by a single, previously inexperienced operator. Among 423 enrolled eyes, we found a prevalence of 3.55% and 3.31% referable cases with the Aurora® and with the standard table-top fundus camera, respectively. The Aurora® obtained a sensitivity of 96.9% and a specificity of 94.8% in recognizing the presence of any degree of DR, a sensitivity of 100% and a specificity of 99.8% for any degree of diabetic maculopathy (DM), and a sensitivity of 100% and a specificity of 99.8% for referable cases. The overall concordance coefficient k (95% CI) was 0.889 (0.828–0.949) and 0.831 (0.658–1.004) with linear weighting for DR and DM, respectively. The presence of hypertensive retinopathy (HR) was recognized by the Aurora® with a sensitivity and specificity of 100%. The Optomed Aurora® handheld fundus camera proved effective in recognizing referable cases in a real-life DR screening setting. It showed results comparable to a standard table-top fundus camera in DR, DM and HR detection and grading. The Aurora® can be integrated into telemedicine solutions and artificial intelligence services, which, together with its portability and ease of use, makes it particularly suitable for DR screening.
17
Gajiwala UR, Pachchigar S, Patel D, Mistry I, Oza Y, Kundaria D, B R S. Non-mydriatic fundus photography as an alternative to indirect ophthalmoscopy for screening of diabetic retinopathy in community settings: a comparative pilot study in rural and tribal India. BMJ Open 2022; 12:e058485. [PMID: 35396308 PMCID: PMC8995946 DOI: 10.1136/bmjopen-2021-058485] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVES The rising prevalence of diabetic retinopathy (DR) in India has created a need for an affordable and valid community outreach screening programme for DR, especially in rural and hard-to-reach indigenous communities. This pilot study compared non-mydriatic fundus photography with indirect ophthalmoscopy for its utility as a feasible and logistically convenient screening modality for DR in an older, rural, tribal population in Western India. DESIGN AND SETTING This community-based, cross-sectional, prospective population study was part of a module using the Rapid Assessment of Avoidable Blindness and DR methodology in 8340 sampled participants aged ≥50 years. The diabetics identified were screened for DR using two methods: non-mydriatic fundus photography in the field by trained professionals, graded by a retina specialist at the base hospital, and indirect ophthalmoscopy by expert ophthalmologists in the field, with each method masked to the other's findings. RESULTS The prevalence of DR, sight-threatening DR, and maculopathy using indirect ophthalmoscopy was 12.1%, 2.1%, and 6.6%, respectively. A fair agreement (κ=0.48 for DR and 0.59 for maculopathy) was observed between the two detection methods. The sensitivity and specificity of fundus photographic evaluation compared with indirect ophthalmoscopy were 54.8% and 92.1% (for DR), 60.7% and 90.8% (for any DR), and 84.2% and 94.8% (for maculopathy only), respectively. CONCLUSION Non-mydriatic fundus photography has the potential to identify DR (any retinopathy or maculopathy) in community settings in the Indian population, and it is demonstrably affordable and logistically convenient. The sensitivity of this screening modality can be further increased by investing in better-resolution cameras, capturing quality images, and training and validating imagers. TRIAL REGISTRATION NUMBER CTRI/2020/01/023025; Clinical Trial Registry, India (CTRI).
Affiliation(s)
- Dhaval Patel
- Retina Department, Divyajyoti Trust, Surat, Gujarat, India
- Ishwar Mistry
- General Ophthalmology Department, Divyajyoti Trust, Surat, Gujarat, India
- Yash Oza
- General Ophthalmology Department, Divyajyoti Trust, Surat, Gujarat, India
- Dhaval Kundaria
- General Ophthalmology Department, Divyajyoti Trust, Surat, Gujarat, India
- Shamanna B R
- School of Medical Science, University of Hyderabad, Hyderabad, Telangana, India

18
Hervella ÁS, Rouco J, Novo J, Ortega M. Multimodal image encoding pre-training for diabetic retinopathy grading. Comput Biol Med 2022; 143:105302. [PMID: 35219187 DOI: 10.1016/j.compbiomed.2022.105302] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 01/11/2022] [Accepted: 01/26/2022] [Indexed: 11/18/2022]
Abstract
Diabetic retinopathy is an increasingly prevalent eye disorder that can lead to severe vision impairment. The severity grading of the disease using retinal images is key to provide an adequate treatment. However, in order to learn the diverse patterns and complex relations that are required for the grading, deep neural networks require very large annotated datasets that are not always available. This has been typically addressed by reusing networks that were pre-trained for natural image classification, hence relying on additional annotated data from a different domain. In contrast, we propose a novel pre-training approach that takes advantage of unlabeled multimodal visual data commonly available in ophthalmology. The use of multimodal visual data for pre-training purposes has been previously explored by training a network in the prediction of one image modality from another. However, that approach does not ensure a broad understanding of the retinal images, given that the network may exclusively focus on the similarities between modalities while ignoring the differences. Thus, we propose a novel self-supervised pre-training that explicitly teaches the networks to learn the common characteristics between modalities as well as the characteristics that are exclusive to the input modality. This provides a complete comprehension of the input domain and facilitates the training of downstream tasks that require a broad understanding of the retinal images, such as the grading of diabetic retinopathy. To validate and analyze the proposed approach, we performed an exhaustive experimentation on different public datasets. The transfer learning performance for the grading of diabetic retinopathy is evaluated under different settings while also comparing against previous state-of-the-art pre-training approaches. Additionally, a comparison against relevant state-of-the-art works for the detection and grading of diabetic retinopathy is also provided. 
The results show a satisfactory performance of the proposed approach, which outperforms previous pre-training alternatives in the grading of diabetic retinopathy.
Affiliation(s)
- Álvaro S Hervella
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- José Rouco
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.

19
Gutfleisch M, Ester O, Aydin S, Quassowski M, Spital G, Lommatzsch A, Rothaus K, Dubis AM, Pauleikhoff D. Clinically applicable deep learning-based decision aids for treatment of neovascular AMD. Graefes Arch Clin Exp Ophthalmol 2022; 260:2217-2230. [DOI: 10.1007/s00417-022-05565-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 01/06/2022] [Accepted: 01/11/2022] [Indexed: 01/22/2023] Open
20
Shafieibavani E, Goudey B, Kiral I, Zhong P, Jimeno-Yepes A, Swan A, Gambhir M, Buechner A, Kludt E, Eikelboom RH, Sucher C, Gifford RH, Rottier R, Plant K, Anjomshoa H. Predictive models for cochlear implant outcomes: Performance, generalizability, and the impact of cohort size. Trends Hear 2021; 25:23312165211066174. [PMID: 34903103 PMCID: PMC8764462 DOI: 10.1177/23312165211066174] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
While cochlear implants have helped hundreds of thousands of individuals, it remains difficult to predict the extent to which an individual's hearing will benefit from implantation. Several publications indicate that machine learning may improve the predictive accuracy of cochlear implant outcomes compared to classical statistical methods. However, existing studies are limited in terms of model validation and of evaluating factors like sample size on predictive performance. We conduct a thorough examination of machine learning approaches to predict word recognition scores (WRS) measured approximately 12 months after implantation in adults with post-lingual hearing loss. This is the largest retrospective study of cochlear implant outcomes to date, evaluating 2,489 cochlear implant recipients from three clinics. We demonstrate that while machine learning models significantly outperform linear models in prediction of WRS, their overall accuracy remains limited (mean absolute error: 17.9-21.8). The models are robust across clinical cohorts, with predictive error increasing by at most 16% when evaluated on a clinic excluded from the training set. We show that predictive performance is unlikely to be improved by increasing sample size alone: doubling the sample size is estimated to increase performance by only 3% on the combined dataset. Finally, we demonstrate how the current models could support clinical decision making, highlighting that subsets of individuals can be identified that have a 94% chance of improving WRS by at least 10 percentage points after implantation, which is likely to be clinically meaningful. We discuss several implications of this analysis, focusing on the need to improve and standardize data collection.
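The headline error figure above is a mean absolute error (MAE) in WRS points. As a quick illustration of what an MAE in the reported 17.9-21.8 range means, the sketch below computes the metric on invented scores (none of these values come from the study):

```python
# Sketch of the mean absolute error (MAE) metric used above; the word
# recognition scores (WRS, in percentage points) below are invented.

def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation between observed and predicted scores."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

observed  = [80, 45, 60, 30]   # WRS measured ~12 months post-implantation
predicted = [60, 65, 75, 50]   # hypothetical model predictions
print(mean_absolute_error(observed, predicted))  # → 18.75
```

An MAE near 19 points on a 0-100 WRS scale makes concrete why the authors describe overall accuracy as limited despite the models outperforming linear baselines.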
Affiliation(s)
- Benjamin Goudey
- IBM Research Australia, Southbank, Victoria, Australia; School of Computing and Information Systems, University of Melbourne, Parkville, Victoria, Australia
- Isabell Kiral
- IBM Research Australia, Southbank, Victoria, Australia
- Peter Zhong
- IBM Research Australia, Southbank, Victoria, Australia
- Annalisa Swan
- IBM Research Australia, Southbank, Victoria, Australia
- Manoj Gambhir
- IBM Research Australia, Southbank, Victoria, Australia
- Andreas Buechner
- Medizinische Hochschule Hannover, Hannover, Niedersachsen, Germany
- Eugen Kludt
- Medizinische Hochschule Hannover, Hannover, Niedersachsen, Germany
- Robert H Eikelboom
- Ear Science Institute Australia, Subiaco, Western Australia, Australia; Ear Sciences Centre, The University of Western Australia, Nedlands, Western Australia, Australia; Department of Speech Language Pathology and Audiology, University of Pretoria, South Africa
- Cathy Sucher
- Ear Science Institute Australia, Subiaco, Western Australia, Australia; Ear Sciences Centre, The University of Western Australia, Nedlands, Western Australia, Australia
- Rene H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States of America
- Kerrie Plant
- Cochlear Limited, New South Wales, Australia
- Hamideh Anjomshoa
- IBM Research Australia, Southbank, Victoria, Australia; School of Mathematics and Statistics, University of Melbourne, Parkville, Victoria, Australia

21
Kubin A, Wirkkala J, Keskitalo A, Ohtonen P, Hautala N. Handheld fundus camera performance, image quality and outcomes of diabetic retinopathy grading in a pilot screening study. Acta Ophthalmol 2021; 99:e1415-e1420. [PMID: 33724706 DOI: 10.1111/aos.14850] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 02/23/2021] [Indexed: 12/19/2022]
Abstract
PURPOSE To compare the performance and image quality of a handheld fundus camera with standard table-top fundus cameras in diabetic retinopathy (DR) screening, and to evaluate the reliability and diagnostic accuracy of DR grading performed by an ophthalmologist and a photographer reader. MATERIALS AND METHODS 157 patients with diabetes, attending screening or follow-up of DR, were evaluated from fundus photographs taken in mydriasis with the Optomed Aurora and with Canon or Zeiss Visucam fundus cameras. Image quality and DR severity were evaluated independently by an ophthalmologist and an experienced photographer, and the sensitivity, specificity, and reliability of the assessments were determined. RESULTS 1884 fundus images from 314 eyes were analysed. In 53% of all eyes, DR was not present; 10% had mild non-proliferative diabetic retinopathy (NPDR), 16% moderate NPDR, 6% severe NPDR, and 16% proliferative diabetic retinopathy (PDR). The DR grading outcomes from the Aurora closely matched those from the Canon or Zeiss cameras (κ = 0.93, 95% CI 0.91 to 0.94), and there was almost perfect agreement in grading between the ophthalmologist and the photographer (κ = 0.96, 95% CI 0.95 to 0.97). The image quality of the Aurora was sufficient for reliable assessment according to both graders in 84-88% of cases. CONCLUSION The Optomed Aurora fundus camera appears appropriate for DR screening. Its sufficient image quality and high diagnostic accuracy for DR grading support a less expensive and easily transportable screening system for DR. Immediate image grading by a photographer would further improve and speed up the screening process in all settings.
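Grading agreement in studies like this is often reported as kappa with linear weighting (as in entry 16 above), which penalizes near-miss disagreements on the ordinal DR scale less than distant ones. A minimal sketch; the grade sequences are invented, with DR grades coded 0 = no DR through 4 = PDR:

```python
# Sketch of linearly weighted Cohen's kappa for ordinal DR grades.
# The grade sequences below are invented for illustration.

def linear_weighted_kappa(a, b, n_grades):
    """Cohen's kappa with linear weights for ordinal grades 0..n_grades-1."""
    n = len(a)
    w = lambda i, j: abs(i - j) / (n_grades - 1)  # disagreement weight
    observed = sum(w(x, y) for x, y in zip(a, b)) / n
    pa = [a.count(g) / n for g in range(n_grades)]
    pb = [b.count(g) / n for g in range(n_grades)]
    expected = sum(pa[i] * pb[j] * w(i, j)
                   for i in range(n_grades) for j in range(n_grades))
    return 1 - observed / expected

# Grades: 0 = no DR, 1 = mild NPDR, 2 = moderate NPDR, 3 = severe NPDR, 4 = PDR.
handheld = [0, 1, 2, 3, 4, 0]
tabletop = [0, 1, 2, 4, 4, 0]  # one near-miss disagreement (severe vs PDR)
print(round(linear_weighted_kappa(handheld, tabletop, 5), 2))  # → 0.91
```

Because the single disagreement is only one grade apart, the weighted kappa stays high; an unweighted kappa would treat that near miss the same as confusing no DR with PDR.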
Affiliation(s)
- Anna‐Maria Kubin
- Department of Ophthalmology, PEDEGO Research Unit and Medical Research Center, Oulu University, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Division of Operative Care, Oulu University Hospital, Oulu, Finland
- Joonas Wirkkala
- Department of Ophthalmology, PEDEGO Research Unit and Medical Research Center, Oulu University, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Division of Operative Care, Oulu University Hospital, Oulu, Finland
- Antti Keskitalo
- Oulu University Hospital, Oulu, Finland
- Division of Operative Care, Oulu University Hospital, Oulu, Finland
- Pasi Ohtonen
- Division of Operative Care, Oulu University Hospital, Oulu, Finland
- Nina Hautala
- Department of Ophthalmology, PEDEGO Research Unit and Medical Research Center, Oulu University, Oulu, Finland
- Oulu University Hospital, Oulu, Finland
- Division of Operative Care, Oulu University Hospital, Oulu, Finland

22
Al-Aswad LA, Elgin CY, Patel V, Popplewell D, Gopal K, Gong D, Thomas Z, Joiner D, Chu CK, Walters S, Ramachandran M, Kapoor R, Rodriguez M, Alcantara-Castillo J, Maestre GE, Lee JH, Moazami G. Real-Time Mobile Teleophthalmology for the Detection of Eye Disease in Minorities and Low Socioeconomics At-Risk Populations. Asia Pac J Ophthalmol (Phila) 2021; 10:461-472. [PMID: 34582428 PMCID: PMC8794049 DOI: 10.1097/apo.0000000000000416] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
PURPOSE To examine the benefits and feasibility of a mobile, real-time, community-based teleophthalmology program for detecting eye diseases in the New York metro area. DESIGN Single-site, nonrandomized, cross-sectional teleophthalmologic study. METHODS Participants underwent a comprehensive evaluation in a Wi-Fi-equipped teleophthalmology mobile unit. The evaluation consisted of a basic anamnesis with a questionnaire, brief systemic evaluations, and an ophthalmologic evaluation that included visual field, intraocular pressure, pachymetry, anterior segment optical coherence tomography, posterior segment optical coherence tomography, and nonmydriatic fundus photography. The results were evaluated in real time, and follow-up calls were scheduled to complete a secondary questionnaire. Risk factors were calculated for different types of ophthalmological referrals. RESULTS A total of 957 participants were screened. Of the 458 (48%) participants who were referred, 305 (32%) had glaucoma, 136 (14%) had narrow angles, 124 (13%) had cataract, 29 (3%) had diabetic retinopathy, 9 (1%) had macular degeneration, and 97 (10%) had other eye disease findings. Significant risk factors for ophthalmological referral were older age, history of high blood pressure, diabetes mellitus, hemoglobin A1c ≥6.5, and stage 2 hypertension. Among the ocular parameters, all but central corneal thickness were significant, including intraocular pressure >21 mm Hg, vertical cup-to-disc ratio ≥0.5, visual field abnormalities, and retinal nerve fiber layer thinning. CONCLUSIONS Mobile, real-time teleophthalmology is both workable and effective in increasing access to care and identifying the most common causes of blindness and their risk factors.
Affiliation(s)
- Lama A. Al-Aswad
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Cansu Yuksel Elgin
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Vipul Patel
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
- Maribel Rodriguez
- New York University (NYU) Grossman School of Medicine, NYU Langone Health, NY, US
23
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454 PMCID: PMC10276657 DOI: 10.1097/icu.0000000000000794]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information, and the low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with yet unclear relevance for ophthalmology.
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Wei Yan Ng
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Pearse A. Keane
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- J. Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Leopold Schmetterer
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE)
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tien Yin Wong
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yong Liu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
24
Abstract
Diabetic retinopathy (DR) is a vision-threatening microvascular complication of diabetes and the leading cause of blindness in working-age people. At the onset of the metabolic disorder and in early stages of DR, the patient's eyesight is often not affected. Depending on the duration of diabetes, and in more advanced stages of DR, vision is compromised through the presence of diabetic macular edema (DME) and/or proliferative retinal complications. The management of DR comprises regular ophthalmic examinations according to clinical guidelines, the targeted application of multimodal imaging, and the specific treatment of DME and proliferative DR, including secondary disorders such as neovascular glaucoma or persistent vitreous haemorrhage. Innovative ocular imaging techniques like optical coherence tomography (OCT), OCT angiography (OCT-A), and ultrawide field imaging play an important role in the assessment of diabetic patients. Various non-invasive imaging modalities have become part of the routine clinical work-up and help to identify new biomarkers for early diagnosis and long-term prognosis. In early stages of DR, multifactorial intervention, including glucose level and blood pressure control as well as optimization of the patient's cardiovascular risk profile, is essential. A specific ophthalmic therapy is available for DME and proliferative DR (PDR). In patients with PDR, the treatment regimen includes panretinal laser photocoagulation or, alternatively, intravitreal anti-VEGF (vascular endothelial growth factor) injections accompanied by close clinical monitoring. In patients with both DME and PDR, it is suggested to start with anti-VEGF drugs. In severe PDR with persistent vitreous haemorrhage, tractional maculopathy, or tractional retinal detachment, vitreoretinal surgery is recommended.
25
Barth T, Helbig H. Diabetische Retinopathie. Augenheilkunde up2date 2021. [DOI: 10.1055/a-1262-3160]
Abstract
Summary: Diabetic retinopathy (DR) is the most common cause of severe visual impairment and blindness in people of working age. Subjective impairment of vision often occurs only in advanced stages of DR. Screening of asymptomatic patients and stage-appropriate treatment are therefore essential. This article provides a practice-oriented overview of the diagnostic and therapeutic principles of the non-proliferative and proliferative forms.
26
Neovascularization Detection and Localization in Fundus Images Using Deep Learning. Sensors 2021; 21:5327. [PMID: 34450766 PMCID: PMC8399593 DOI: 10.3390/s21165327]
Abstract
Proliferative Diabetic Retinopathy (PDR) is a severe retinal disease that threatens diabetic patients. It is characterized by neovascularization in the retina and the optic disk. PDR clinical features include highly intense retinal neovascularization and fibrous spreads, leading to visual distortion if not controlled. Different image processing techniques have been proposed to detect and diagnose neovascularization from fundus images. Recently, deep learning methods have become popular for neovascularization detection due to advances in artificial intelligence for biomedical image processing. This paper presents a semantic segmentation convolutional neural network architecture for neovascularization detection. First, image pre-processing steps were applied to enhance the fundus images. Then, the images were divided into small patches, forming a training set, a validation set, and a testing set. A semantic segmentation convolutional neural network was designed and trained to detect the neovascularization regions on the images. Finally, the network was tested using the testing set for performance evaluation. The proposed model is entirely automated in detecting and localizing neovascularization lesions, which is not possible with previously published methods. Evaluation results showed that the model achieved accuracy, sensitivity, specificity, precision, Jaccard similarity, and Dice similarity of 0.9948, 0.8772, 0.9976, 0.8696, 0.7643, and 0.8466, respectively. We demonstrated that this model could outperform other convolutional neural network models in neovascularization detection.
27
Pujari A, Saluja G, Agarwal D, Sinha A, P R A, Kumar A, Sharma N. Clinical Role of Smartphone Fundus Imaging in Diabetic Retinopathy and Other Neuro-retinal Diseases. Curr Eye Res 2021; 46:1605-1613. [PMID: 34325587 DOI: 10.1080/02713683.2021.1958347]
Abstract
Purpose: Many electronic gadgets, including smartphones, smartwatches, and others, have the potential to become invaluable health care devices in the future. The role of smartphones has been highlighted on many occasions in different areas, and they continue to play an immense role in clinical documentation, clinical consultation, and the digitalization of ocular care. In the last decade, many treatable conditions, including diabetic retinopathy, glaucoma, and other pediatric retinal diseases, have been imaged using smartphones. Methods: To comprehend this cumulative knowledge, a detailed medical literature search was conducted on PubMed/Medline, Scopus, and Web of Science up to February 2021. Results: The included literature revealed definitive progress in posterior segment imaging. From simple torch-light examination with a smartphone to present-day compact handy devices with artificial intelligence-integrated software, these tools have changed the very perspectives of ocular imaging in ophthalmology. Consistently reproducible results, constantly improving imaging techniques, and, most importantly, affordable costs have renegotiated their role as effective screening devices in ophthalmology. Moreover, the obtained field of view, ocular safety, and their key utility in non-ophthalmic specialties are also growing. Conclusions: Smartphone imaging can now be considered a quick, cost-effective, and digitalized tool for posterior segment screening; however, its definite role in routine ophthalmic clinics is yet to be established.
Affiliation(s)
- Amar Pujari
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Gunjan Saluja
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Divya Agarwal
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ayushi Sinha
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ananya P R
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Atul Kumar
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Namrata Sharma
- Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
28
Tseng RMWW, Gunasekeran DV, Tan SSH, Rim TH, Lum E, Tan GSW, Wong TY, Tham YC. Considerations for Artificial Intelligence Real-World Implementation in Ophthalmology: Providers' and Patients' Perspectives. Asia Pac J Ophthalmol (Phila) 2021; 10:299-306. [PMID: 34383721 DOI: 10.1097/apo.0000000000000400]
Abstract
Artificial intelligence (AI), in particular deep learning, has made waves in the health care industry, with several prominent examples in ophthalmology. Despite the burgeoning reports on the development of new AI algorithms for the detection and management of various eye diseases, few have reached the stage of regulatory approval for real-world implementation. To better enable real-world translation of AI systems, it is important to understand the demands, needs, and concerns of both health care professionals and patients, as providers and recipients of clinical care are impacted by these solutions. This review outlines the advantages and concerns of incorporating AI in ophthalmology care delivery, from both the providers' and patients' perspectives, and the key enablers for a seamless transition to real-world implementation.
Affiliation(s)
- Dinesh Visva Gunasekeran
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore (NUS), Singapore
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore
- Duke-NUS Medical School, Singapore
29
Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning. J Clin Med 2021; 10:844. [PMID: 33670732 PMCID: PMC7923054 DOI: 10.3390/jcm10040844]
Abstract
We aimed to predict keratoconus progression and the need for corneal crosslinking (CXL) using deep learning (DL). Two hundred and seventy-four corneal tomography images taken with a Pentacam HR® (Oculus, Wetzlar, Germany) from 158 keratoconus patients were examined. All patients were examined two or more times and divided into two groups: the progression group and the non-progression group. An axial map of the frontal corneal plane, a pachymetry map, and a combination of these two maps at the initial examination were assessed together with the patients' age. A convolutional neural network was trained on these learning data. Ninety eyes showed progression and 184 eyes showed no progression. The axial map, the pachymetry map, and their combination, each combined with patients' age, showed mean AUC values of 0.783, 0.784, and 0.814 (95% confidence intervals 0.721–0.845, 0.722–0.846, and 0.755–0.872, respectively), with sensitivities of 87.8%, 77.8%, and 77.8% (79.2–93.7, 67.8–85.9, and 67.8–85.9) and specificities of 59.8%, 65.8%, and 69.6% (52.3–66.9, 58.4–72.6, and 62.4–76.1), respectively. Using the proposed DL neural network model, keratoconus progression can be predicted from corneal tomography maps combined with patients' age.
30
Murthy NS, Arunadevi B. An effective technique for diabetic retinopathy using hybrid machine learning technique. Stat Methods Med Res 2021; 30:1042-1056. [PMID: 33499772 DOI: 10.1177/0962280220983541]
Abstract
Diabetic retinopathy (DR) is an eye disease that develops progressively in individuals with diabetes. The complications of diabetes cause damage to the blood vessels at the back of the retina. In extreme cases, DR can rapidly lead to visual impairment or blindness. These serious consequences can be averted through timely treatment and early detection. In recent years, this disease has been spreading quickly, particularly in the working-age population, which has ultimately driven the demand for diagnosis at the earliest possible stage. To track the progression of this disorder, detection of the retinal blood vessels (RBVs) plays a foremost role. The growth of abnormal vessels marks the developing stages of DR, and these can be recognized by extracting the RBVs. Developing an automatic approach to detect blood vessels for DR is the main aim of our research study. The proposed method has two major steps: segmentation, and classification of the affected retinal blood vessels. It uses Kinetic Gas Molecule Optimization based on centroid initialization for Fuzzy C-means clustering. In the classification step, the segmented images are given as input to a hybrid technique, a convolutional neural network with bidirectional long short-term memory (CNN with Bi-LSTM). The learning of the Bi-LSTM is refined using a self-attention mechanism to improve the classification accuracy. The experimental results showed that the hybrid algorithm achieved higher accuracy, specificity, and sensitivity than existing techniques.
Affiliation(s)
- B Arunadevi
- Department of Electronics and Communication Engineering, Dr.N.G.P Institute of Technology, Coimbatore, India
31
Deep multispectral image registration network. Comput Med Imaging Graph 2021; 87:101815. [PMID: 33418174 DOI: 10.1016/j.compmedimag.2020.101815]
Abstract
Multispectral imaging (MSI) of the ocular fundus provides a sequence of narrow-band images to show the different depths in the retina and choroid. One challenge in analyzing MSI images comes from image-to-image spatial misalignment, which occurs because the acquisition time of eye MSI images is commonly longer than the natural time scale of the eye's saccadic movement. Because ophthalmologists usually overlay two of the images to analyze specific features, it is necessary to align the images. In this paper, we propose a weakly supervised MSI image registration network, called MSI-R-NET, for multispectral fundus image registration. Compared to other deep-learning-based registration methods, MSI-R-NET utilizes the blood vessel segmentation label to provide spatial correspondence. In addition, we employ a feature equilibrium module to better connect the aggregating layers, and propose a multiresolution auto-context structure adapted to the registration task. In the testing stage, given a new pair of MSI images, the trained model can predict the pixelwise spatial correspondence without labeled blood vessel information. The experimental results demonstrate that the proposed segmentation-driven registration method is highly accurate.
32
Jiang Y, Pan J, Yuan M, Shen Y, Zhu J, Wang Y, Li Y, Zhang K, Yu Q, Xie H, Li H, Wang X, Luo Y. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U-Net. J Diabetes Res 2021; 2021:8766517. [PMID: 34712739 PMCID: PMC8548126 DOI: 10.1155/2021/8766517]
Abstract
Diabetic retinopathy (DR) is a prevalent vision-threatening disease worldwide. Laser marks are the scars left after panretinal photocoagulation, a treatment to prevent patients with severe DR from losing vision. In this study, we developed a deep learning algorithm based on a lightweight U-Net to segment laser marks from color fundus photos, which could help indicate the disease stage or provide valuable auxiliary information for the care of DR patients. We prepared our training and testing data, manually annotated by trained and experienced graders from the Image Reading Center, Zhongshan Ophthalmic Center, and made them publicly available to fill the vacancy of public image datasets dedicated to the segmentation of laser marks. The lightweight U-Net, along with two postprocessing procedures, achieved an AUC of 0.9824, an optimal sensitivity of 94.16%, and an optimal specificity of 92.82% for the segmentation of laser marks in fundus photographs. With accurate segmentation and high numeric metrics, the lightweight U-Net method showed reliable performance in automatically segmenting laser marks in fundus photographs, which could help AI assist in the diagnosis of DR at the severe stage.
Affiliation(s)
- Yukang Jiang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jianying Pan
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Ming Yuan
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yanhe Shen
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Jin Zhu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Yishen Wang
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Yewei Li
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Ke Zhang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Qingyun Yu
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Huirui Xie
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Huiting Li
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
- Xueqin Wang
- Department of Statistical Science, School of Mathematics, Southern China Research Center of Statistical Science, Sun Yat-Sen University, Guangzhou 510275, China
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, Anhui 230026, China
- Xinhua College, Sun Yat-Sen University, Guangzhou 510520, China
- Yan Luo
- State Key Laboratory of Ophthalmology, Image Reading Center, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou 510060, China
33
Sher I, Moverman D, Ketter-Katz H, Moisseiev E, Rotenstreich Y. In vivo retinal imaging in translational regenerative research. Ann Transl Med 2020; 8:1096. [PMID: 33145315 PMCID: PMC7575995 DOI: 10.21037/atm-20-4355]
Abstract
Regenerative translational studies must include a longitudinal assessment of the changes in retinal structure and function that occur as part of the natural history of the disease and those that result from the studied intervention. Traditionally, retinal structural changes have been evaluated by histological analysis which necessitates sacrificing the animals. In this review, we describe key imaging approaches such as fundus imaging, optical coherence tomography (OCT), OCT-angiography, adaptive optics (AO), and confocal scanning laser ophthalmoscopy (cSLO) that enable noninvasive, non-contact, and fast in vivo imaging of the posterior segment. These imaging technologies substantially reduce the number of animals needed and enable progression analysis and longitudinal follow-up in individual animals for accurate assessment of disease natural history, effects of interventions and acute changes. We also describe the benefits and limitations of each technology, as well as outline possible future directions that can be taken in translational retinal imaging studies.
Affiliation(s)
- Ifat Sher
- Goldschleger Eye Institute, Sheba Medical Center, Tel-Hashomer, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Daniel Moverman
- Goldschleger Eye Institute, Sheba Medical Center, Tel-Hashomer, Israel
- Hadas Ketter-Katz
- Goldschleger Eye Institute, Sheba Medical Center, Tel-Hashomer, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Elad Moisseiev
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of Ophthalmology, Meir Medical Center, Kfar Saba, Israel
- Ygal Rotenstreich
- Goldschleger Eye Institute, Sheba Medical Center, Tel-Hashomer, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
34
Gupta V, Rajendran A, Narayanan R, Chawla S, Kumar A, Palanivelu MS, Muralidhar NS, Jayadev C, Pappuru R, Khatri M, Agarwal M, Aurora A, Bhende P, Bhende M, Bawankule P, Rishi P, Vinekar A, Trehan HS, Biswas J, Agarwal R, Natarajan S, Verma L, Ramasamy K, Giridhar A, Rishi E, Talwar D, Pathangey A, Azad R, Honavar SG. Evolving consensus on managing vitreo-retina and uvea practice in post-COVID-19 pandemic era. Indian J Ophthalmol 2020; 68:962-973. [PMID: 32461407 PMCID: PMC7508071 DOI: 10.4103/ijo.ijo_1404_20]
Abstract
The COVID-19 pandemic has brought new challenges to the health care community. Many super-speciality practices are planning to re-open after the lockdown is lifted. However, there is a lot of apprehension about adopting practices that would safeguard patients, ophthalmologists, and healthcare workers, as well as taking adequate care of equipment to minimize damage. The aim of this article is to develop preferred practice patterns, through consensus among lead experts, that would help institutes as well as individual vitreo-retina and uveitis experts restart their practices with confidence. As the situation remains volatile, we would like to mention that these suggestions are evolving and likely to change as our understanding and experience improve. Further, the suggestions are for routine patients, as COVID-19-positive patients may be managed in designated hospitals as per local protocols. These suggestions also have to be implemented in compliance with local rules and regulations.
Affiliation(s)
- Vishali Gupta
- Advanced Eye Centre, Post Graduate Institute of Medical Education and Research, Chandigarh, India
- Atul Kumar
- Dr. R.P. Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Rupesh Agarwal
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore
- Rajvardhan Azad
- Regional Institute of Ophthalmology, Indira Gandhi Institute of Medical Sciences, Patna, India