1
Mikhail D, Milad D, Antaki F, Hammamji K, Qian CX, Rezende FA, Duval R. The Role of Artificial Intelligence in Epiretinal Membrane Care: A Scoping Review. Ophthalmol Sci 2025; 5:100689. [PMID: 40182981; PMCID: PMC11964620; DOI: 10.1016/j.xops.2024.100689]
Abstract
Topic: In ophthalmology, artificial intelligence (AI) demonstrates potential in using ophthalmic imaging across diverse diseases, often matching ophthalmologists' performance. However, the range of machine learning models for epiretinal membrane (ERM) management, which differ in methodology, application, and performance, remains largely unsynthesized. Clinical Relevance: Epiretinal membrane management relies on clinical evaluation and imaging, with surgical intervention considered in cases of significant impairment. AI analysis of ophthalmic images and clinical features could enhance ERM detection, characterization, and prognostication, potentially improving clinical decision-making. This scoping review aims to evaluate the methodologies, applications, and reported performance of AI models in ERM diagnosis, characterization, and prognostication. Methods: A comprehensive literature search was conducted across 5 electronic databases (Ovid MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science Core Collection) from inception to November 14, 2024. Studies pertaining to AI algorithms in the context of ERM were included. The primary outcomes were the reported design, application in ERM management, and performance of each AI model. Results: Three hundred ninety articles were retrieved, with 33 studies meeting inclusion criteria. Thirty studies (91%) reported their training and validation methods. Altogether, 61 distinct AI models were included. OCT scans and fundus photographs were used in 26 (79%) and 7 (21%) papers, respectively. Supervised learning alone was used in 32 studies (97%) and combined supervised and unsupervised learning in 1 (3%). Twenty-seven studies (82%) developed or adapted AI models using images, whereas 5 (15%) used both images and clinical features, and 1 (3%) used preoperative and postoperative clinical features without ophthalmic images. Study objectives were categorized into 3 stages of ERM care. Twenty-three studies (70%) implemented AI for diagnosis (stage 1), 1 (3%) identified ERM characteristics (stage 2), and 6 (18%) predicted vision impairment after diagnosis or postoperative vision outcomes (stage 3). No articles studied treatment planning. Three studies (9%) used AI in stages 1 and 2. Of the 16 studies comparing AI performance to human graders (i.e., retinal specialists, general ophthalmologists, and trainees), 10 (63%) reported equivalent or higher performance. Conclusion: Artificial intelligence-driven assessments of ophthalmic images and clinical features demonstrated high performance in detecting ERM, identifying its morphological properties, and predicting visual outcomes following ERM surgery. Future research might consider validating algorithms for clinical application in personalized treatment planning, ideally to identify patients who might benefit most from surgery. Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
Affiliation(s)
- David Mikhail
  - Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
- Daniel Milad
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Fares Antaki
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Karim Hammamji
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Cynthia X. Qian
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Flavio A. Rezende
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Renaud Duval
  - Department of Ophthalmology, University of Montreal, Montreal, Canada
  - Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
2
Fan Y, Jiang Y, Mu Z, Xu Y, Xie P, Liu Q, Pu L, Hu Z. Optical Coherence Tomography Characteristics Between Idiopathic Epiretinal Membranes and Secondary Epiretinal Membranes due to Peripheral Retinal Hole. J Ophthalmol 2025; 2025:9299651. [PMID: 40371012; PMCID: PMC12077978; DOI: 10.1155/joph/9299651]
Abstract
Purpose: In clinical practice, the diagnosis in some eyes preoperatively labeled as idiopathic epiretinal membrane (iERM) is amended to secondary epiretinal membrane (sERM) once a peripheral retinal hole is detected. This study used optical coherence tomography (OCT) images to compare the characteristics of iERM and sERM due to peripheral retinal hole (PRH). Methods: In this retrospective, cross-sectional study, 635 eyes that had undergone pars plana vitrectomy with membrane peeling were enrolled. A total of 115 eyes (18.1%) with peripheral retinal holes were allocated to the sERM-PRH group, and the other 520 eyes to the iERM group. Demographic data and OCT characteristics were compared between the two groups. In addition, all eyes were evaluated with a double-grading scheme: severity grading of ERM progression into four stages plus anatomical classification into three kinds of part-thickness macular holes associated with ERMs. Results: No significant difference was found in age, gender, symptom duration, axial length, or best-corrected visual acuity between the two groups. There was also no difference in OCT-based features, including central macular thickness, the rates of photoreceptor inner/outer segment junction line defect, intraretinal fluid, cotton ball sign, and epiretinal proliferation. However, the native difference in parafoveal thickness between the temporal and nasal quadrants was preserved in the iERM group but absent in the sERM-PRH group. Moreover, eyes in the two groups were similarly distributed on both grading scales. Conclusion: Our results demonstrate that even OCT images can hardly provide effective clues for differentiating sERM from iERM early, which highlights the necessity of a thorough pre- and intraoperative fundus examination of the peripheral retina.
Affiliation(s)
- Yuanyuan Fan
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Yingying Jiang
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
  - Department of Ophthalmology, Zhangjiagang Hospital Affiliated to Soochow University, Suzhou, Jiangsu 215600, China
- Zhaoxia Mu
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Yulian Xu
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Ping Xie
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Qinghuai Liu
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
- Lijun Pu
  - Department of Ophthalmology, Zhangjiagang Hospital Affiliated to Soochow University, Suzhou, Jiangsu 215600, China
- Zizhong Hu
  - Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu 210029, China
3
Mariotti C, Mangoni L, Muzi A, Fella M, Mogetta V, Bongiovanni G, Rizzo C, Chhablani J, Midena E, Lupidi M. Artificial intelligence-based assessment of imaging biomarkers in epiretinal membrane surgery. Eur J Ophthalmol 2025. [PMID: 40289523; DOI: 10.1177/11206721251337139]
Abstract
Purpose: This study investigated the applicability of a validated AI algorithm for analyzing different retinal biomarkers in eyes affected by epiretinal membranes (ERMs) before and after surgery. Methods: A retrospective study included 40 patients surgically treated for ERM removal between November 2022 and January 2024. Pars plana vitrectomy with ERM/ILM peeling was performed by a single experienced surgeon. A validated AI algorithm was used to analyze OCT scans, focusing on intraretinal fluid (IRF) and subretinal fluid (SRF) volumes, external limiting membrane (ELM) and ellipsoid zone (EZ) interruption percentages, and hyper-reflective foci (HRF) counts. Results: Postoperative best-corrected visual acuity (BCVA) significantly improved (p < 0.01), and central macular thickness (CMT) decreased from 483.61 ± 96.32 to 386.82 ± 94.86 µm (p = 0.001). IRF volume decreased from 0.283 ± 0.39 mm3 to 0.142 ± 0.27 mm3 (p = 0.036), particularly in the central 1-mm circle. SRF, HRF, and EZ/ELM interruption percentages exhibited no significant differences (p > 0.05). Significant correlations (p < 0.05) were found between preoperative BCVA and postoperative BCVA (r = 0.45), CMT reduction and postoperative BCVA (r = 0.42), preoperative IRF and visual recovery (r = -0.48), and ELM and EZ interruption and visual recovery (r = -0.43 and r = -0.47, respectively). Multivariate analysis demonstrated that fluid distribution, especially in the central subfield, correlated with BCVA recovery (R2 = 0.38; p < 0.05; Pillai's Trace = 0.79). Conclusion: The study highlights AI's potential in quantifying OCT biomarkers in ERM surgery. The findings suggest that improved BCVA is associated with reduced CMT and IRF and with redistribution of IRF towards the periphery. EZ and ELM integrity remains a crucial prognostic factor, emphasizing the importance of preoperative analysis.
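As a rough illustration of the correlation analysis summarized above, the sketch below computes Pearson correlations between preoperative OCT biomarkers and visual recovery with SciPy; the arrays are synthetic placeholders, not study data, and the variable names (preop_irf, ez_interruption, visual_recovery) are assumptions for illustration only.

```python
# Minimal sketch: Pearson correlations between preoperative biomarkers and
# visual recovery. All values below are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 40                                            # cohort size reported above
preop_irf = rng.normal(0.28, 0.39, n).clip(0)     # IRF volume, mm^3 (placeholder)
ez_interruption = rng.uniform(0, 30, n)           # EZ interruption, % (placeholder)
visual_recovery = rng.normal(0.3, 0.2, n)         # BCVA change (placeholder)

for name, x in [("preoperative IRF", preop_irf), ("EZ interruption", ez_interruption)]:
    r, p = pearsonr(x, visual_recovery)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```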
Affiliation(s)
- Cesare Mariotti
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Lorenzo Mangoni
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Alessio Muzi
  - Department of Ophthalmology, Humanitas Gradenigo, Turin, Italy
- Michele Fella
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Veronica Mogetta
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Giacomo Bongiovanni
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
- Clara Rizzo
  - Ophthalmic Unit, Department of Neurosciences, Biomedicine, and Movement Sciences, University of Verona, Verona, Italy
- Jay Chhablani
  - Department of Ophthalmology, UPMC Eye Center, University of Pittsburgh, Pittsburgh, USA
- Edoardo Midena
  - Department of Ophthalmology, University of Padova, Padova, Italy
  - IRCCS - Fondazione Bietti, Rome, Italy
- Marco Lupidi
  - Eye Clinic, Department of Experimental and Clinical Medicine, Polytechnic University of Marche, Ancona, Italy
  - Fondazione per la Macula Onlus, Dipartimento di Neuroscienze, Riabilitazione, Oftalmologia, Genetica e Scienze Materno-Infantili (DINOGMI), University Eye Clinic, Genova, Italy
4
Akpinar MH, Sengur A, Faust O, Tong L, Molinari F, Acharya UR. Artificial intelligence in retinal screening using OCT images: A review of the last decade (2013-2023). Comput Methods Programs Biomed 2024; 254:108253. [PMID: 38861878; DOI: 10.1016/j.cmpb.2024.108253]
Abstract
BACKGROUND AND OBJECTIVES Optical coherence tomography (OCT) has ushered in a transformative era in ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. OCT is frequently used to diagnose fundamental ocular pathologies such as glaucoma and age-related macular degeneration (AMD), which has driven the widespread adoption of the technology. Apart from glaucoma and AMD, we also investigate pertinent pathologies such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD), and choroidal neovascularization (CNV). This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review. RESULTS During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach. Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT; consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.
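A minimal sketch of the "pre-trained network" pattern the review highlights, assuming a frozen ImageNet ResNet-50 backbone with a new classification head fine-tuned on OCT images; the class count, optimizer, and hyperparameters are illustrative assumptions rather than any setup from the reviewed studies.

```python
# Minimal sketch: transfer learning with a frozen pre-trained backbone.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet50(weights="DEFAULT")
for p in model.parameters():
    p.requires_grad = False                       # keep pre-trained features fixed
model.fc = nn.Linear(model.fc.in_features, 4)     # e.g. CNV / DME / drusen / normal (assumed classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# one illustrative optimisation step on a dummy batch
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
optimizer.zero_grad()
criterion(model(images), labels).backward()
optimizer.step()
```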
Affiliation(s)
- Muhammed Halil Akpinar
  - Department of Electronics and Automation, Vocational School of Technical Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Abdulkadir Sengur
  - Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Oliver Faust
  - School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Louis Tong
  - Singapore Eye Research Institute, Singapore, Singapore
- Filippo Molinari
  - Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya
  - School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
5
Chatzimichail E, Feltgen N, Motta L, Empeslidis T, Konstas AG, Gatzioufas Z, Panos GD. Transforming the future of ophthalmology: artificial intelligence and robotics' breakthrough role in surgical and medical retina advances: a mini review. Front Med (Lausanne) 2024; 11:1434241. [PMID: 39076760; PMCID: PMC11284058; DOI: 10.3389/fmed.2024.1434241]
Abstract
Over the past decade, artificial intelligence (AI) and its subfields, deep learning and machine learning, have become integral parts of ophthalmology, particularly in the field of ophthalmic imaging. A diverse array of algorithms has emerged to facilitate the automated diagnosis of numerous medical and surgical retinal conditions. The development of these algorithms necessitates extensive training using large datasets of retinal images. This approach has demonstrated a promising impact, especially in increasing diagnostic accuracy for non-specialist clinicians across various diseases and in telemedicine, where access to ophthalmological care is restricted. In parallel, robotic technology has made significant inroads into the medical field, including ophthalmology. The vast majority of research in robotic ocular surgery has focused on anterior segment and vitreoretinal surgery. These systems offer potential improvements in accuracy and address issues such as hand tremor. However, widespread adoption faces hurdles, including the substantial costs associated with these systems and the steep learning curve for surgeons. These challenges currently constrain the broader implementation of robotic surgical systems in ophthalmology. This mini review discusses the current research and challenges, underscoring the limited yet growing implementation of AI and robotic systems in the management of retinal conditions.
Affiliation(s)
- Nicolas Feltgen
  - Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
- Lorenzo Motta
  - Department of Ophthalmology, School of Medicine, University of Padova, Padua, Italy
- Anastasios G. Konstas
  - Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Zisis Gatzioufas
  - Department of Ophthalmology, University Hospital of Basel, Basel, Switzerland
- Georgios D. Panos
  - Department of Ophthalmology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
  - Department of Ophthalmology, Queen’s Medical Centre, Nottingham University Hospitals, Nottingham, United Kingdom
  - Division of Ophthalmology and Visual Sciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom
6
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024; 8:633-645. [PMID: 38280425; DOI: 10.1016/j.oret.2024.01.018]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield fundus photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays use 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape the management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
  - Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
  - Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
  - Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
  - Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
  - Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
  - Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
  - Byers Eye Institute, Stanford University, Palo Alto, California
7
Ayhan MS, Neubauer J, Uzel MM, Gelisken F, Berens P. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Sci Rep 2024; 14:8484. [PMID: 38605115; PMCID: PMC11009346; DOI: 10.1038/s41598-024-57798-1]
Abstract
This study aimed to automatically detect epiretinal membranes (ERM) in various OCT scans of the central and paracentral macular region and classify them by size using deep neural networks (DNNs). To this end, 11,061 OCT images were included and graded according to the presence of an ERM and its size (small 100-1000 µm, large > 1000 µm). The dataset was divided into training, validation, and test sets (75%, 10%, and 15% of the data, respectively). An ensemble of DNNs was trained, and saliency maps were generated using Guided Backprop. OCT scans were also transformed into a one-dimensional value using t-SNE analysis. The DNNs' receiver operating characteristics on the test set showed high performance for no-ERM, small-ERM, and large-ERM cases (AUC: 0.99, 0.92, and 0.99, respectively; 3-way accuracy: 89%), with small ERMs being the most difficult to detect. t-SNE analysis sorted cases by size and, in particular, revealed increased classification uncertainty at the transitions between groups. Saliency maps reliably highlighted ERM, regardless of the presence of other OCT features (i.e., retinal thickening, intraretinal pseudocysts, epiretinal proliferation) and entities such as ERM retinoschisis, macular pseudohole, and lamellar macular hole. This study therefore showed that DNNs can reliably detect and grade ERMs according to their size not only in the fovea but also in the paracentral region, even in cases of hard-to-detect, small ERMs. In addition, the generated saliency maps can be used to highlight small ERMs that might otherwise be missed. The proposed model could be used for screening programs or decision-support systems in the future.
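A minimal sketch of the t-SNE step described above: penultimate-layer CNN features of OCT B-scans are projected onto a single t-SNE dimension. The ResNet-18 backbone and random input tensors are stand-ins for the authors' DNN ensemble and dataset, which are not reproduced here.

```python
# Minimal sketch: reduce CNN features of OCT scans to a single t-SNE coordinate.
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

backbone = models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()               # expose the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def embed(batch):                               # batch: (N, 3, 224, 224) preprocessed B-scans
    return backbone(batch).numpy()

features = embed(torch.randn(20, 3, 224, 224))  # stand-in for real OCT scans
one_d = TSNE(n_components=1, perplexity=min(30, len(features) - 1),
             init="pca", random_state=0).fit_transform(features).ravel()
print(one_d.shape)                              # one scalar coordinate per scan
```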
Affiliation(s)
- Murat Seçkin Ayhan
  - Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany
- Jonas Neubauer
  - University Eye Clinic, University of Tübingen, Tübingen, Germany
- Mehmet Murat Uzel
  - University Eye Clinic, University of Tübingen, Tübingen, Germany
  - Department of Ophthalmology, Balıkesir University School of Medicine, Balıkesir, Turkey
- Faik Gelisken
  - University Eye Clinic, University of Tübingen, Tübingen, Germany
- Philipp Berens
  - Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076, Tübingen, Germany
  - Tübingen AI Center, Tübingen, Germany
8
Yan Y, Huang X, Jiang X, Gao Z, Liu X, Jin K, Ye J. Clinical evaluation of deep learning systems for assisting in the diagnosis of the epiretinal membrane grade in general ophthalmologists. Eye (Lond) 2024; 38:730-736. [PMID: 37848677; PMCID: PMC10920879; DOI: 10.1038/s41433-023-02765-9]
Abstract
BACKGROUND Epiretinal membrane (ERM) is a common age-related retinal disease detected by optical coherence tomography (OCT), with a prevalence of 34.1% among people over 60 years old. This study aims to develop artificial intelligence (AI) systems to assist in diagnosing ERM grade using OCT images and to clinically evaluate the potential benefits and risks of our AI systems in a comparative experiment. METHODS A segmentation deep learning (DL) model that segments retinal features associated with ERM severity and a classification DL model that grades ERM severity were developed based on an OCT dataset obtained from three hospitals. A comparative experiment was conducted to compare the performance of four general ophthalmologists with and without AI assistance in diagnosing ERM severity. RESULTS The segmentation network had a pixel accuracy (PA) of 0.980 and a mean intersection over union (MIoU) of 0.873, while the six-class classification network had a total accuracy of 81.3%. The diagnostic accuracy scores of the four ophthalmologists increased with AI assistance from 81.7%, 80.7%, 78.0%, and 80.7% to 87.7%, 86.7%, 89.0%, and 91.3%, respectively, while the corresponding time expenditures were reduced. The specific results of the study, as well as the misinterpretations made by the AI systems, were analysed. CONCLUSION In our comparative experiment, the AI systems proved to be valuable references for medical diagnosis and demonstrated the potential to accelerate clinical workflows. Systematic efforts are needed to ensure the safe and rapid integration of AI systems into ophthalmic practice.
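A small sketch of the two segmentation metrics reported above, pixel accuracy and mean intersection over union, computed from integer label maps with NumPy; the class count and toy arrays are assumptions for illustration.

```python
# Minimal sketch: pixel accuracy and mean IoU for semantic segmentation maps.
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == target).mean())

def mean_iou(pred, target, num_classes):
    """Average IoU over classes present in either prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 4-class label maps (placeholders for predicted and reference OCT masks)
pred = np.random.randint(0, 4, (256, 256))
target = np.random.randint(0, 4, (256, 256))
print(pixel_accuracy(pred, target), mean_iou(pred, target, num_classes=4))
```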
Affiliation(s)
- Yan Yan
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoling Huang
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xiaoyu Jiang
  - College of Control Science and Engineering, Zhejiang University, Hangzhou, 310027, China
- Zhiyuan Gao
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Xindi Liu
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Kai Jin
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Juan Ye
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, 310009, China
9
Tang QQ, Yang XG, Wang HQ, Wu DW, Zhang MX. Applications of deep learning for detecting ophthalmic diseases with ultrawide-field fundus images. Int J Ophthalmol 2024; 17:188-200. [PMID: 38239939; PMCID: PMC10754665; DOI: 10.18240/ijo.2024.01.24]
Abstract
AIM To summarize the application of deep learning in detecting ophthalmic diseases with ultrawide-field fundus images and to analyze the advantages, limitations, and possible solutions common to all tasks. METHODS We searched three academic databases, PubMed, Web of Science, and Ovid, up to August 2022. We matched and screened studies according to the target keywords and publication year, retrieving a total of 4358 research papers, of which 23 studies applied deep learning to diagnosing ophthalmic disease with ultrawide-field images. RESULTS Deep learning on ultrawide-field images can detect various ophthalmic diseases with strong performance, including diabetic retinopathy, glaucoma, age-related macular degeneration, retinal vein occlusions, retinal detachment, and other peripheral retinal diseases. Compared with conventional fundus images, ultrawide-field scanning laser ophthalmoscopy captures up to 200° of the ocular fundus in a single exposure, allowing more of the retina to be observed. CONCLUSION The combination of ultrawide-field fundus images and artificial intelligence will achieve strong performance in diagnosing multiple ophthalmic diseases in the future.
Affiliation(s)
- Qing-Qing Tang
  - Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Xiang-Gang Yang
  - Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hong-Qiu Wang
  - Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511400, Guangdong Province, China
- Da-Wen Wu
  - Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Mei-Xia Zhang
  - Department of Ophthalmology and Research Laboratory of Macular Disease, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
10
Leandro I, Lorenzo B, Aleksandar M, Dario M, Rosa G, Agostino A, Daniele T. OCT-based deep-learning models for the identification of retinal key signs. Sci Rep 2023; 13:14628. [PMID: 37670066; PMCID: PMC10480174; DOI: 10.1038/s41598-023-41362-4]
Abstract
A new system based on binary deep learning (DL) convolutional neural networks has been developed to recognize specific retinal abnormality signs on optical coherence tomography (OCT) images useful for clinical practice. Images were retrospectively selected from the local hospital database from 2017 to 2022, labeled by two retinal specialists, and included central fovea cross-section OCTs. Nine models were developed using the Visual Geometry Group 16 (VGG16) architecture to distinguish healthy versus abnormal retinas and to identify eight different retinal abnormality signs. A total of 21,500 OCT images were screened, and 10,770 central fovea cross-section OCTs were included in the study. The system achieved high accuracy in identifying healthy retinas and specific pathological signs, ranging from 93 to 99%. Accurately detecting abnormal retinal signs from OCT images is crucial for patient care. This study aimed to identify specific signs related to retinal pathologies, aiding ophthalmologists in diagnosis. The high-accuracy system identified healthy retinas and pathological signs, making it a useful diagnostic aid. Obtaining labelled OCT images remains a challenge, but our approach reduces dataset creation time and shows DL models' potential to improve ocular pathology diagnosis and clinical decision-making.
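A hedged sketch of one binary VGG16 classifier of the kind described above: the torchvision VGG16 backbone with its final layer replaced by a two-class head. Input size, optimizer, and the dummy batch are assumptions, not the authors' training setup.

```python
# Minimal sketch: binary VGG16 classifier for one retinal sign (present/absent).
import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)  # healthy vs. sign present

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of fovea-centred OCT B-scans
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```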
Affiliation(s)
- Inferrera Leandro
  - Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Borsatti Lorenzo
  - Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Marangoni Dario
  - Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Giglio Rosa
  - Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Accardo Agostino
  - Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Tognetto Daniele
  - Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
11
Jeong JH, Kang KT, Lee YH, Kim YC. Correlation between Severity of Idiopathic Epiretinal Membrane and Irvine-Gass Syndrome. J Pers Med 2023; 13:1341. [PMID: 37763108; PMCID: PMC10532645; DOI: 10.3390/jpm13091341]
Abstract
A higher risk of pseudophakic cystoid macular edema (PCME) has been reported in patients with preoperative idiopathic epiretinal membrane (ERM); however, whether the development of PCME depends on the grade of ERM has not been well established. We conducted a retrospective case-control study of 87 eyes of 78 patients who were preoperatively diagnosed with idiopathic ERM and had undergone cataract surgery. Patients were divided into PCME and non-PCME groups. After cataract surgery, the ERM status was graded using the Gass and Govetto classifications. Both central macular thickness (CMT) and ERM grade increased after surgery, and higher preoperative CMT and ERM grades were found in the PCME group. The association between higher-grade ERM and the development of PCME was significant in the Govetto classification (grade 2, odds ratio (OR): 3.13; grade 3, OR: 3.93; and grade 4, OR: 16.07). The results indicate that close attention should be given to patients with ERM showing an ectopic inner foveal layer before cataract surgery.
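As a generic illustration of deriving odds ratios by ERM grade with logistic regression (the kind of estimate quoted above), the sketch below fits a statsmodels Logit on a synthetic data frame; the data and effect sizes are synthetic placeholders and do not reproduce the study's analysis.

```python
# Minimal sketch: odds ratios for PCME by ERM grade from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"grade": rng.integers(1, 5, 200)})          # Govetto grade 1-4 (synthetic)
df["pcme"] = rng.binomial(1, 0.1 + 0.1 * (df["grade"] - 1))    # synthetic outcome

# Dummy-code grades 2-4 against grade 1 as the reference category
X = sm.add_constant(pd.get_dummies(df["grade"], prefix="grade", drop_first=True).astype(float))
result = sm.Logit(df["pcme"], X).fit(disp=0)
print(np.exp(result.params))                                   # odds ratios vs. grade 1
```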
Affiliation(s)
- Yu Cheol Kim
  - Department of Ophthalmology, School of Medicine, Keimyung University, Daegu 42601, Republic of Korea; (J.H.J.); (K.T.K.); (Y.H.L.)
12
Chun JW, Kim HS. The Present and Future of Artificial Intelligence-Based Medical Image in Diabetes Mellitus: Focus on Analytical Methods and Limitations of Clinical Use. J Korean Med Sci 2023; 38:e253. [PMID: 37550811; PMCID: PMC10412032; DOI: 10.3346/jkms.2023.38.e253]
Abstract
Artificial intelligence (AI)-based diagnostic technology using medical images can be used to increase examination accessibility and support clinical decision-making for screening and diagnosis. To characterize machine learning algorithms for diabetes complications, a literature review of studies using medical image-based AI technology was conducted using the National Library of Medicine PubMed and the Excerpta Medica databases, combining keyword lists covering diabetes, diagnostic imaging, and AI. In total, 227 appropriate studies were selected. Studies of diabetic retinopathy using AI models were the most frequent (85.0%, 193/227), followed by diabetic foot (7.9%, 18/227) and diabetic neuropathy (2.7%, 6/227). The studies used open datasets (42.3%, 96/227) or data constructed directly from fundoscopy or optical coherence tomography (57.7%, 131/227). The major limitations in AI-based detection of diabetes complications using medical images were the lack of datasets (36.1%, 82/227) and severity misclassification (26.4%, 60/227). Although it remains difficult to use and fully trust AI-based imaging analysis technology clinically, it reduces clinicians' time and labor, and expectations for its decision-support role are high. Further development of data collection and data synthesis technologies across disease severities is required to address data imbalance.
Affiliation(s)
- Ji-Won Chun
  - Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hun-Sung Kim
  - Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
  - Division of Endocrinology and Metabolism, Department of Internal Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
13
Hung CL, Lin KH, Lee YK, Mrozek D, Tsai YT, Lin CH. The classification of stages of epiretinal membrane using convolutional neural network on optical coherence tomography image. Methods 2023; 214:28-34. [PMID: 37116670; DOI: 10.1016/j.ymeth.2023.04.006]
Abstract
BACKGROUND AND OBJECTIVE The gold standard for diagnosing epiretinal membranes is to observe the surface of the internal limiting membrane on optical coherence tomography images. The stage of the epiretinal membrane is used to assess the severity of the condition, but some stages are difficult to distinguish because they appear similar. Deep learning technology can be used to improve classification accuracy. METHODS A combinatorial fusion of multiple convolutional neural network (CNN) algorithms is proposed to enhance the accuracy of a single image-classification model. The proposed method was trained using a dataset of 1947 optical coherence tomography images diagnosed with epiretinal membrane at the Taichung Veterans General Hospital in Taiwan. The images spanned four stages (stages 1, 2, 3, and 4). RESULTS The overall classification accuracy was 84%. Combinations of five and six CNN models achieved the highest testing accuracy (85%) among all combinations. Any combination of multiple CNN models outperformed any single CNN algorithm working alone. Moreover, the accuracy of the proposed method was better than that of ophthalmologists with years of clinical experience. CONCLUSIONS We have developed an efficient epiretinal membrane classification method using combinatorial fusion of CNN models on optical coherence tomography images. The proposed method can be used for screening purposes to help ophthalmologists make correct diagnoses in general medical practice.
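A minimal sketch of score-level fusion across several CNNs: average the softmax probabilities and take the argmax over the four ERM stages. The ResNet backbones and this simple averaging rule are assumptions for illustration; the paper's combinatorial fusion scheme may differ.

```python
# Minimal sketch: fuse several CNN classifiers by averaging their softmax outputs.
import torch
import torch.nn.functional as F
import torchvision.models as models

def build(arch):
    m = arch(weights="DEFAULT")
    m.fc = torch.nn.Linear(m.fc.in_features, 4)    # four ERM stages
    return m.eval()

ensemble = [build(a) for a in (models.resnet18, models.resnet34, models.resnet50)]

@torch.no_grad()
def fused_prediction(batch):                        # batch: (N, 3, 224, 224) OCT images
    probs = torch.stack([F.softmax(m(batch), dim=1) for m in ensemble])
    return probs.mean(dim=0).argmax(dim=1)

print(fused_prediction(torch.randn(2, 3, 224, 224)))
```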
Affiliation(s)
- Che-Lun Hung
  - Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taiwan, ROC
  - Computer Science and Communication Engineering, Providence University, Taiwan, ROC
- Keng-Hung Lin
  - Department of Ophthalmology, Taichung Veterans General Hospital, Taiwan, ROC
- Yu-Kai Lee
  - Department of Computer Science and Information Engineering, Providence University, Taiwan, ROC
- Dariusz Mrozek
  - Department of Applied Informatics, Silesian University of Technology, Poland
- Yin-Te Tsai
  - Computer Science and Communication Engineering, Providence University, Taiwan, ROC
- Chun-Hsien Lin
  - Department of Ophthalmology, Taichung Veterans General Hospital, Taiwan, ROC
14
Yeh TC, Chen SJ, Chou YB, Luo AC, Deng YS, Lee YH, Chang PH, Lin CJ, Tai MC, Chen YC, Ko YC. Predicting Visual Outcome after Surgery in Patients with Idiopathic Epiretinal Membrane Using a Novel Convolutional Neural Network. Retina 2023; 43:767-774. [PMID: 36727822; DOI: 10.1097/iae.0000000000003714]
Abstract
PURPOSE To develop a deep convolutional neural network that predicts postoperative visual outcomes after epiretinal membrane surgery based on preoperative optical coherence tomography images and clinical parameters, to refine surgical decision making. METHODS A total of 529 patients with idiopathic epiretinal membrane who underwent standard vitrectomy with epiretinal membrane peeling surgery by two surgeons between January 1, 2014, and June 1, 2020, were enrolled. The newly developed Heterogeneous Data Fusion Net was introduced to predict postoperative visual acuity outcomes (improvement of ≥2 lines on the Snellen chart) 12 months after surgery based on preoperative cross-sectional optical coherence tomography images and clinical factors, including age, sex, and preoperative visual acuity. The predictive accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve of the convolutional neural network model were evaluated. RESULTS The developed model demonstrated an overall accuracy for visual outcome prediction of 88.68% (95% CI, 79.0%-95.7%), with an area under the receiver operating characteristic curve of 97.8% (95% CI, 86.8%-98.0%), sensitivity of 87.0% (95% CI, 67.9%-95.5%), specificity of 92.9% (95% CI, 77.4%-98.0%), precision of 0.909, recall of 0.870, and F1 score of 0.889. The heatmaps identified the critical areas for prediction as the ellipsoid zone of the photoreceptors and the superficial retina, which was subjected to tangential traction by the proliferative membrane. CONCLUSION The novel Heterogeneous Data Fusion Net demonstrated high accuracy in the automated prediction of visual outcomes by weighing and leveraging optical coherence tomography images together with multiple clinical parameters. This approach may be helpful in establishing personalized therapeutic strategies for epiretinal membrane management.
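A generic late-fusion sketch of combining an OCT image branch with tabular clinical inputs (age, sex, preoperative visual acuity) for a binary "improved ≥2 lines" output; the layer sizes, ResNet-18 backbone, and class head are illustrative assumptions, not the published Heterogeneous Data Fusion Net.

```python
# Minimal sketch: late fusion of an OCT image embedding with clinical features.
import torch
import torch.nn as nn
import torchvision.models as models

class ImageClinicalFusion(nn.Module):
    def __init__(self, n_clinical=3):
        super().__init__()
        cnn = models.resnet18(weights="DEFAULT")
        cnn.fc = nn.Identity()                      # 512-d image embedding
        self.cnn = cnn
        self.tabular = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), self.tabular(clinical)], dim=1)
        return self.head(fused)

model = ImageClinicalFusion()
logits = model(torch.randn(4, 3, 224, 224),         # preoperative OCT B-scans (dummy)
               torch.randn(4, 3))                    # [age, sex, preop VA], standardized (dummy)
print(logits.shape)                                  # torch.Size([4, 2])
```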
Affiliation(s)
- Tsai-Chu Yeh
  - Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
  - Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Shih-Jen Chen
  - Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
  - Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Yu-Bai Chou
  - Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
  - Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- An-Chun Luo
  - Industrial Technology Research Institute, Taipei City, Taiwan
- Yu-Shan Deng
  - Industrial Technology Research Institute, Taipei City, Taiwan
- Yu-Hsien Lee
  - Industrial Technology Research Institute, Taipei City, Taiwan
- Po-Han Chang
  - Industrial Technology Research Institute, Taipei City, Taiwan
- Chun-Ju Lin
  - Industrial Technology Research Institute, Taipei City, Taiwan
- Ming-Chi Tai
  - Industrial Technology Research Institute, Taipei City, Taiwan
  - Department of Materials Science and Engineering, National Tsing-Hua University, Taipei City, Taiwan
- Ying-Chi Chen
  - Division of Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan
- Yu-Chieh Ko
  - Department of Ophthalmology, Taipei Veterans General Hospital, Taipei City, Taiwan
  - Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
15
Kayadibi İ, Güraksın GE. An Explainable Fully Dense Fusion Neural Network with Deep Support Vector Machine for Retinal Disease Determination. Int J Comput Intell Syst 2023. [DOI: 10.1007/s44196-023-00210-z]
Abstract
Retinal diseases are crucial because they can result in visual loss, and early diagnosis can aid physicians in initiating treatment and preventing that loss. Optical coherence tomography (OCT), which portrays retinal morphology cross-sectionally and noninvasively, is used to identify retinal abnormalities; the process of analyzing OCT images, however, takes time. This study proposes a hybrid approach based on a fully dense fusion neural network (FD-CNN) and dual preprocessing to identify retinal diseases, such as choroidal neovascularization, diabetic macular edema, and drusen, from OCT images. A dual preprocessing methodology, namely a hybrid speckle reduction filter, was initially used to diminish the speckle noise present in OCT images. Secondly, the FD-CNN architecture was trained, and the features obtained from this architecture were extracted. Then, Deep Support Vector Machine (D-SVM) and Deep K-Nearest Neighbor (D-KNN) classifiers were proposed to reclassify those features and were tested on the University of California San Diego (UCSD) and Duke OCT datasets. D-SVM demonstrated the best performance on both datasets, achieving 99.60% accuracy, 99.60% sensitivity, 99.87% specificity, 99.60% precision, and 99.60% F1 score on the UCSD dataset, and 97.50% accuracy, 97.64% sensitivity, 98.91% specificity, 96.61% precision, and 97.03% F1 score on the Duke dataset. Additionally, the results were compared to state-of-the-art works on both datasets. D-SVM was demonstrated to be an efficient and productive strategy for improving the robustness of automatic retinal disease classification. This study also shows how the black-box choices of AI systems can be unboxed by generating heat maps using the local interpretable model-agnostic explanation method, an explainable artificial intelligence (XAI) technique. Heat maps, in particular, may contribute to the development of more stable deep learning-based systems and may enhance ophthalmologists' confidence when diagnosing retinal disease from OCT images.
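A minimal sketch of the "deep features reclassified by an SVM" idea: features are extracted with a pretrained CNN (a stand-in for the FD-CNN, which is not available here) and passed to a scikit-learn SVM; the toy data and class labels are assumptions.

```python
# Minimal sketch: CNN feature extraction followed by an SVM classifier.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet18(weights="DEFAULT")     # stand-in for the FD-CNN feature extractor
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def deep_features(batch):                          # batch: (N, 3, 224, 224) OCT tensors
    return backbone(batch).numpy()

# Toy stand-in data: 40 OCT images across 4 classes (e.g. CNV, DME, drusen, normal)
X = deep_features(torch.randn(40, 3, 224, 224))
y = np.repeat(np.arange(4), 10)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X, y)
print(svm.score(X, y))
```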
16
New Artificial Intelligence Analysis for Prediction of Long-Term Visual Improvement after Epiretinal Membrane Surgery. Retina 2023; 43:173-181. [PMID: 36228144; DOI: 10.1097/iae.0000000000003646]
Abstract
PURPOSE To predict improvement of best-corrected visual acuity (BCVA) 1 year after pars plana vitrectomy for epiretinal membrane (ERM) using artificial intelligence methods on optical coherence tomography B-scan images. METHODS Four hundred eleven (411) patients with Stage II ERM were divided into an improvement group (IM; ≥15 ETDRS letters of VA recovery) and a no-improvement group (N-IM; <15 letters) according to 1-year VA improvement after 25-gauge pars plana vitrectomy with internal limiting membrane peeling. The primary outcome was the creation of a deep learning classifier (DLC) based on optical coherence tomography B-scan images for prediction; the secondary outcome was assessment of the influence of various clinical and imaging predictors on BCVA improvement. Inception-ResNet-V2 was trained using standard augmentation techniques, and testing was performed on an external dataset. For the secondary outcome, B-scan acquisitions were analyzed by graders both before and after fibrillary-change processing enhancement. RESULTS The overall performance of the DLC showed a sensitivity of 87.3% and a specificity of 86.2%. Regression analysis showed differences between groups in the preoperative prevalence of ectopic inner foveal layer, foveal detachment, ellipsoid zone interruption, cotton wool sign, unprocessed fibrillary changes (odds ratio = 2.75 [confidence interval: 2.49-2.96]), and processed fibrillary changes (odds ratio = 5.42 [confidence interval: 4.81-6.08]), whereas preoperative BCVA and central macular thickness did not differ between groups. CONCLUSION The DLC showed high performance in predicting 1-year visual outcome in ERM surgery patients. Fibrillary changes should also be considered relevant predictors.
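A hedged sketch of an Inception-ResNet-V2 binary classifier (improvement vs. no improvement) with standard augmentation, using the timm model zoo; the augmentation set, input handling, and training step are illustrative assumptions and do not reproduce the study's recipe.

```python
# Minimal sketch: Inception-ResNet-V2 with standard augmentation for a binary outcome.
import timm
import torch
import torchvision.transforms as T

model = timm.create_model("inception_resnet_v2", pretrained=True, num_classes=2)

train_transform = T.Compose([
    T.Grayscale(num_output_channels=3),    # OCT B-scans are single-channel
    T.Resize((299, 299)),                  # native Inception input size
    T.RandomHorizontalFlip(),
    T.RandomRotation(10),
    T.ToTensor(),
])

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative optimisation step on a dummy batch
images, labels = torch.randn(4, 3, 299, 299), torch.randint(0, 2, (4,))
optimizer.zero_grad()
criterion(model(images), labels).backward()
optimizer.step()
```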
17
Jin K, Yan Y, Wang S, Yang C, Chen M, Liu X, Terasaki H, Yeo TH, Singh NG, Wang Y, Ye J. iERM: An Interpretable Deep Learning System to Classify Epiretinal Membrane for Different Optical Coherence Tomography Devices: A Multi-Center Analysis. J Clin Med 2023; 12:400. [PMID: 36675327; PMCID: PMC9862104; DOI: 10.3390/jcm12020400]
Abstract
Background: Epiretinal membranes (ERM) have been found to be common among individuals >50 years old. However, severity grading of ERM based on optical coherence tomography (OCT) images has remained a challenge due to a lack of reliable and interpretable analysis methods. This study therefore aimed to develop a two-stage deep learning (DL) system named iERM to provide accurate automatic grading of ERM for clinical practice. Methods: iERM was trained on human segmentations of key features to improve classification performance and simultaneously provide interpretability for the classification results. We developed and tested iERM using a total of 4547 OCT B-scans from four different commercial OCT devices, collected from nine international medical centers. Results: The integrated network effectively improved the grading performance by 1-5.9% compared with a traditional classification DL model and achieved high accuracy scores of 82.9%, 87.0%, and 79.4% on the internal test dataset and the two external test datasets, respectively. This is comparable to retinal specialists, whose average accuracy scores were 87.8% and 79.4% on the two external test datasets. Conclusion: This study provides a benchmark method for improving the performance and enhancing the interpretability of a traditional DL model by incorporating segmentation based on prior human knowledge. It may have the potential to provide precise guidance for ERM diagnosis and treatment.
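A hedged sketch of a two-stage layout in the spirit described above: a segmentation network produces maps of ERM-related structures that are stacked with the raw B-scan and passed to a grading classifier. The FCN/ResNet architectures, class counts, and untrained weights are assumptions, not the published iERM design.

```python
# Minimal sketch: segmentation-then-classification, with masks fed to the grader.
import torch
import torch.nn as nn
import torchvision.models as models
from torchvision.models.segmentation import fcn_resnet50

n_seg_classes, n_grades = 5, 4                         # assumed feature classes and ERM grades

segmenter = fcn_resnet50(weights=None, num_classes=n_seg_classes).eval()

classifier = models.resnet18(weights=None)
classifier.conv1 = nn.Conv2d(3 + n_seg_classes, 64, kernel_size=7, stride=2,
                             padding=3, bias=False)    # accept image + segmentation maps
classifier.fc = nn.Linear(classifier.fc.in_features, n_grades)

@torch.no_grad()
def grade(b_scan):                                     # b_scan: (N, 3, H, W)
    masks = segmenter(b_scan)["out"].softmax(dim=1)    # (N, n_seg_classes, H, W)
    return classifier(torch.cat([b_scan, masks], dim=1)).argmax(dim=1)

print(grade(torch.randn(1, 3, 224, 224)))
```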
Affiliation(s)
- Kai Jin
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Yan Yan
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Shuai Wang
  - School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Ce Yang
  - School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
- Menglu Chen
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Xindi Liu
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Hiroto Terasaki
  - Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima 890-8520, Japan
- Tun-Hang Yeo
  - Ophthalmology and Visual Sciences, Khoo Teck Puat Hospital, National Healthcare Group, Singapore 768828, Singapore
- Neha Gulab Singh
  - Ophthalmology and Visual Sciences, Khoo Teck Puat Hospital, National Healthcare Group, Singapore 768828, Singapore
- Yao Wang
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
- Juan Ye
  - Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou 310009, China
18
Hsia Y, Lin YY, Wang BS, Su CY, Lai YH, Hsieh YT. Prediction of Visual Impairment in Epiretinal Membrane and Feature Analysis: A Deep Learning Approach Using Optical Coherence Tomography. Asia Pac J Ophthalmol (Phila) 2023; 12:21-28. [PMID: 36706331; DOI: 10.1097/apo.0000000000000576]
Abstract
PURPOSE The aim was to develop a deep learning model for predicting the extent of visual impairment in epiretinal membrane (ERM) using optical coherence tomography (OCT) images and to analyze the associated features. METHODS Six hundred macular OCT images from eyes with ERM and no visually significant media opacity or other retinal diseases were obtained. Those with best-corrected visual acuity ≤20/50 were classified as "profound visual impairment," while those with best-corrected visual acuity >20/50 were classified as "less visual impairment." Ninety percent of the images were used as the training dataset and 10% were used for testing. Two convolutional neural network models (ResNet-50 and ResNet-18) were adopted for training. The t-distributed stochastic neighbor embedding approach was used to compare their performance. The Grad-CAM technique was used in the heat map generation phase for feature analysis. RESULTS During model development, the training accuracy was 100% for both convolutional neural network models, while the testing accuracy was 70% and 80% for ResNet-18 and ResNet-50, respectively. The t-distributed stochastic neighbor embedding approach showed that the deeper structure (ResNet-50) discriminated OCT characteristics of visual impairment better than the shallower structure (ResNet-18). The heat maps indicated that the key features for visual impairment were located mostly in the inner retinal layers of the foveal and parafoveal regions. CONCLUSIONS Deep learning algorithms could assess the extent of visual impairment from OCT images in patients with ERM. Changes in the inner retinal layers were found to have a greater impact on visual acuity than outer retinal changes.
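A minimal Grad-CAM sketch on a ResNet-50 OCT classifier: gradients of the top-scoring class with respect to the last convolutional block weight its activation maps to give a coarse heat map. The ImageNet weights and random input are placeholders for the study's trained network and data.

```python
# Minimal sketch: Grad-CAM heat map from the last ResNet-50 convolutional block.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet50(weights="DEFAULT").eval()
acts, grads = {}, {}
target_layer = model.layer4

target_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

def grad_cam(image):                                   # image: (1, 3, 224, 224)
    logits = model(image)
    model.zero_grad()
    logits[0, logits.argmax()].backward()              # gradient of the predicted class
    weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * acts["v"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))
print(heatmap.shape)                                   # torch.Size([224, 224])
```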
Affiliation(s)
- Yun Hsia
- National Taiwan University Biomedical Park Hospital, Hsin-Chu
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan
- Yu-Yi Lin
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Bo-Sin Wang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chung-Yen Su
- Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Medical Device Innovation & Translation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yi-Ting Hsieh
- Department of Ophthalmology, National Taiwan University Hospital, Taipei, Taiwan

19
Short-Term In Vitro ROS Detection and Oxidative Stress Regulators in Epiretinal Membranes and Vitreous from Idiopathic Vitreoretinal Diseases. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7497816. [PMID: 36567907 PMCID: PMC9788888 DOI: 10.1155/2022/7497816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 10/27/2022] [Accepted: 11/03/2022] [Indexed: 12/23/2022]
Abstract
Background A plethora of inflammatory, angiogenic, and tissue remodeling factors has been reported in idiopathic epiretinal membranes (ERMs). Herein we focused on the expression of a few mediators (oxidative, inflammatory, and angiogenic/vascular factors) by means of short-term vitreal cell cultures and biomolecular analysis. Methods Thirty-nine (39) ERMs and vitreal samples were collected at the time of vitreoretinal surgery, and biomolecular analyses were performed on clear vitreous, vitreal cell pellets, and ERMs. ROS products and iNOS were investigated in adherent vitreal cells and/or ERMs; iNOS, VEGF, Ang-2, IFNγ, IL18, and IL22 were quantified in the vitreous (ELISA/Ella, IF/WB); and transcripts specific for iNOS, p65NFkB, KEAP1, NRF2, and NOX1/NOX4 were detected in ERMs (PCR). Biomolecular changes were analyzed and correlated with disease severity. Results The highest ROS production was observed in vitreal cells at stage 4, and iNOS was found in ERMs and was increased in the vitreous as early as stage 3. Both iNOS and NOX4 were upregulated at all stages, whereas p65NFkB was increased at stage 3. iNOS and NOX1 were positively and inversely related to p65NFkB, respectively. While NOX4 transcripts were always upregulated, NRF2 was upregulated at stage 3 and this trend was inverted at stage 4. No significant changes occurred in the release of angiogenic (VEGF, Ang-2) and proinflammatory (IL18, IL22, and IFNγ) mediators across the stages investigated. Conclusions ROS production was strictly associated with iNOS and NOX4 overexpression and increased with ERM stage. The increase in iNOS expression occurred earlier (as early as stage 3) than that of p65NFkB and NRF2. These mediators might have prognostic value in ERMs as markers of underlying retinal damage.

20
Bai J, Wan Z, Li P, Chen L, Wang J, Fan Y, Chen X, Peng Q, Gao P. Accuracy and feasibility with AI-assisted OCT in retinal disorder community screening. Front Cell Dev Biol 2022; 10:1053483. [PMID: 36407116 PMCID: PMC9670537 DOI: 10.3389/fcell.2022.1053483] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Accepted: 10/18/2022] [Indexed: 10/31/2023] Open
Abstract
Objective: To evaluate the accuracy and feasibility of automated detection of 15 retinal disorders with artificial intelligence (AI)-assisted optical coherence tomography (OCT) in community screening. Methods: A total of 954 eyes of 477 subjects from four local communities were enrolled in this study from September to December 2021. They received OCT scans covering an area of 12 mm × 9 mm at the posterior pole, involving the macula and optic disc, as well as other ophthalmic examinations, and their demographic information was recorded. The OCT images were analyzed using integrated software with a previously established deep learning algorithm trained to detect 15 kinds of retinal disorders, namely, pigment epithelial detachment (PED), posterior vitreous detachment (PVD), epiretinal membranes (ERMs), sub-retinal fluid (SRF), choroidal neovascularization (CNV), drusen, retinoschisis, cystoid macular edema (CME), exudation, macular hole (MH), retinal detachment (RD), ellipsoid zone disruption, focal choroidal excavation (FCE), choroid atrophy, and retinal hemorrhage. Diagnoses were also generated by three groups of ophthalmologists (retina specialists, senior ophthalmologists, and junior ophthalmologists) and compared with those of the AI. The area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated, and kappa statistics were performed. Results: A total of 878 eyes were finally included, with 76 excluded due to poor image quality. In the detection of the 15 retinal disorders, the AI achieved relatively large AUCs (0.891-0.997), high sensitivity (87.65-100%), and high specificity (80.12-99.41%) when the retina specialists' diagnoses were used as the reference, and its agreement with the retina specialists was closer than that of the senior and junior ophthalmologists (p < 0.05). Conclusion: AI-assisted OCT is highly accurate, sensitive, and specific in the automated detection of 15 kinds of retinal disorders, supporting its feasibility and effectiveness in community ophthalmic screening.
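For readers who want to reproduce this style of evaluation, the sketch below shows how per-disorder AUC, sensitivity, specificity, and Cohen's kappa against a human grader could be computed with scikit-learn. It is a generic illustration on made-up toy data, not the study's pipeline.
```python
# Illustrative evaluation sketch (not the study's code): per-disorder AUC,
# sensitivity, specificity, and Cohen's kappa against a human grader.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, cohen_kappa_score

def evaluate_disorder(y_true, ai_prob, grader_label, threshold=0.5):
    """y_true/grader_label: 0/1 arrays per eye; ai_prob: AI probabilities."""
    ai_label = (ai_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, ai_label, labels=[0, 1]).ravel()
    return {
        "auc": roc_auc_score(y_true, ai_prob),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "kappa_vs_grader": cohen_kappa_score(ai_label, grader_label),
    }

# Hypothetical toy data for one disorder (e.g., ERM) over 10 eyes:
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
ai_prob = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3, 0.2, 0.6])
grader = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0, 1])
print(evaluate_disorder(y_true, ai_prob, grader))
```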
Affiliation(s)
- Jianhao Bai
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China
- Zhongqi Wan
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China
- Ping Li
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China
- Lei Chen
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China
- Jingcheng Wang
- Suzhou Big Vision Medical Technology Co Ltd, Suzhou, China
- Yu Fan
- Suzhou Big Vision Medical Technology Co Ltd, Suzhou, China
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Suzhou, China
- Qing Peng
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China
- Peng Gao
- Department of Ophthalmology, Shanghai Tenth People’s Hospital of Tongji University, Tongji University School of Medicine, Shanghai, China

21
Russell MW, Muste JC, Rachitskaya AV, Talcott KE, Singh RP, Mammo DA. Visual, Anatomic Outcomes, and Natural History of Retinal Nerve Fiber Layer Schisis in Patients Undergoing Epiretinal Membrane Surgery. Ophthalmol Retina 2022; 7:325-332. [PMID: 36280203 DOI: 10.1016/j.oret.2022.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 10/10/2022] [Accepted: 10/17/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE To evaluate the anatomic and visual outcomes of patients with idiopathic epiretinal membranes (ERMs) complicated by schisis of the retinal nerve fiber layer (sRNFL) in routine clinical practice. DESIGN Retrospective case-control study. PARTICIPANTS Patients undergoing idiopathic ERM surgery at the Cole Eye Institute from 2013 to 2021. METHODS Patients were grouped by the presence or absence of sRNFL before surgery. Preoperative and postoperative data were collected on visual acuity (VA), changes in central subfield thickness (CST) over time, and the presence of cystoid macular edema. MAIN OUTCOME MEASURES Frequency of sRNFL in patients undergoing idiopathic ERM surgery. RESULTS Overall, 48 (53.9%) of 89 patients presented with sRNFL. Patients with sRNFL presented with significantly decreased VA compared with those without (58.63 ± 12.48 vs. 67.68 ± 7.84 ETDRS letters, P < 0.001). At the final follow-up after ERM removal, there was no significant difference in final VA between patients with and without sRNFL (71.16 ± 2.93 vs. 74.11 ± 2.76, P = 0.467). At presentation, patients with sRNFL had greater CST than those without (454 ± 10.01 vs. 436 ± 0.23 μm, P = 0.23). This difference persisted at the 90-day follow-up after ERM removal (402 ± 8.08 vs. 375 ± 10.19 μm, P = 0.043). Resolution of sRNFL was documented at postoperative week 1 in 30 (96.7%) of 31 cases. CONCLUSIONS Schisis of the retinal nerve fiber layer is a microstructural feature in > 50% of idiopathic ERMs in routine clinical practice and carries visual significance at presentation and anatomic significance postoperatively. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.

22
Tang Y, Gao X, Wang W, Dan Y, Zhou L, Su S, Wu J, Lv H, He Y. Automated Detection of Epiretinal Membranes in OCT Images Using Deep Learning. Ophthalmic Res 2022; 66:238-246. [PMID: 36170844 DOI: 10.1159/000525929] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Accepted: 06/08/2022] [Indexed: 11/19/2022]
Abstract
INTRODUCTION Development and validation of a deep learning algorithm to automatically identify and locate epiretinal membrane (ERM) regions in OCT images. METHODS OCT images of 468 eyes were retrospectively collected from a total of 404 ERM patients. One expert manually annotated the ERM regions for all images. A total of 422 images (90%) were used as the training dataset and the remaining 46 images (10%) as the validation dataset for deep learning algorithm training and validation. One senior and one junior clinician read the images, and their diagnostic results were compared with and without algorithm assistance. RESULTS The algorithm accurately segmented and located the ERM regions in OCT images, with an image-level accuracy of 95.65% and an ERM region-level accuracy of 90.14%. In the comparison experiments, the image-level and region-level accuracies of the junior clinician improved from 85.00% and 61.29% without the assistance of the algorithm to 100.00% and 90.32% with it. The corresponding accuracies of the senior clinician were 96.15% and 95.00% without the algorithm and 96.15% and 97.50% with it. CONCLUSIONS The developed deep learning algorithm can accurately segment ERM regions in OCT images. This deep learning approach may help clinicians achieve better accuracy and efficiency in clinical diagnosis.
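A segmentation network of this kind can report both a pixel-level ERM region and an image-level ERM call derived from the mask. The sketch below illustrates that idea with an off-the-shelf torchvision model; it is not the authors' architecture, and the class layout, input size, and pixel threshold are arbitrary assumptions.
```python
# Illustrative sketch (not the authors' model): adapt a torchvision DeepLabV3 head
# for binary ERM-region segmentation of OCT B-scans, and derive an image-level
# ERM call from the predicted mask.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=2)  # background vs. ERM region
model.eval()

def segment_and_classify(oct_tensor, min_region_pixels=50):
    """oct_tensor: (1, 3, H, W) normalized B-scan; the pixel threshold is an assumption."""
    with torch.no_grad():
        out = model(oct_tensor)["out"]           # (1, 2, H, W) logits
    mask = out.argmax(dim=1).squeeze(0)          # (H, W) predicted labels
    erm_pixels = int((mask == 1).sum())
    image_level_erm = erm_pixels >= min_region_pixels
    return mask, image_level_erm

# Example with a random tensor standing in for a preprocessed B-scan:
dummy = torch.randn(1, 3, 256, 256)
mask, has_erm = segment_and_classify(dummy)
print(has_erm, mask.shape)
```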
Affiliation(s)
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xiaorong Gao
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yujiao Dan
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Linjing Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Song Su
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Hongbin Lv
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Yue He
- Department of Ophthalmology, The Affiliated Hospital of Southwest Medical University, Luzhou, China

23
Miao J, Yu J, Zou W, Su N, Peng Z, Wu X, Huang J, Fang Y, Yuan S, Xie P, Huang K, Chen Q, Hu Z, Liu Q. Deep Learning Models for Segmenting Non-perfusion Area of Color Fundus Photographs in Patients With Branch Retinal Vein Occlusion. Front Med (Lausanne) 2022; 9:794045. [PMID: 35847781 PMCID: PMC9279621 DOI: 10.3389/fmed.2022.794045] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 05/30/2022] [Indexed: 11/17/2022] Open
Abstract
Purpose To develop artificial intelligence (AI)-based deep learning (DL) models for automatically detecting the ischemia type and the non-perfusion area (NPA) from color fundus photographs (CFPs) of patients with branch retinal vein occlusion (BRVO). Methods This was a retrospective analysis of 274 CFPs from patients diagnosed with BRVO. All DL models were trained using a deep convolutional neural network (CNN) based on 45-degree CFPs covering the fovea and the optic disk. We first trained a DL algorithm to identify BRVO patients with or without the necessity of retinal photocoagulation from 219 CFPs and validated the algorithm on 55 CFPs. Next, we trained another DL algorithm to segment NPA from 104 CFPs and validated it on 29 CFPs, in which the NPA was manually delineated by 3 experienced ophthalmologists according to fundus fluorescein angiography. Both DL models were cross-validated 5-fold. The recall, precision, accuracy, and area under the curve (AUC) were used to evaluate the DL models in comparison with independent ophthalmologists of three levels of seniority. Results For the first DL model, the recall, precision, accuracy, and AUC for predicting the necessity of laser photocoagulation from BRVO CFPs were 0.75 ± 0.08, 0.80 ± 0.07, 0.79 ± 0.02, and 0.82 ± 0.03, respectively. The second DL model was able to segment NPA in CFPs of BRVO with an AUC of 0.96 ± 0.02; its recall, precision, and accuracy for segmenting NPA were 0.74 ± 0.05, 0.87 ± 0.02, and 0.89 ± 0.02, respectively. The performance of the second DL model was nearly comparable with that of the senior doctors and significantly better than that of the residents. Conclusion These results indicate that the DL models can directly identify and segment retinal NPA from the CFPs of patients with BRVO, which can further guide laser photocoagulation. Further research is needed to identify NPA in the peripheral retina in BRVO and in other diseases, such as diabetic retinopathy.
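The 5-fold cross-validation reported here follows a standard pattern, sketched below with scikit-learn. The train_fn and predict_fn callables are hypothetical placeholders for the CNN training and inference steps, so this is a scaffold for illustration rather than the study's code.
```python
# Illustrative 5-fold cross-validation skeleton (not the study's pipeline):
# split fundus photographs into folds and aggregate recall/precision/accuracy
# across folds as mean and standard deviation.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import recall_score, precision_score, accuracy_score

def cross_validate(image_paths, labels, train_fn, predict_fn, n_splits=5, seed=0):
    """train_fn/predict_fn are placeholders for the CNN training and inference."""
    labels = np.asarray(labels)
    scores = {"recall": [], "precision": [], "accuracy": []}
    for train_idx, val_idx in KFold(n_splits, shuffle=True, random_state=seed).split(image_paths):
        model = train_fn([image_paths[i] for i in train_idx], labels[train_idx])
        preds = predict_fn(model, [image_paths[i] for i in val_idx])
        scores["recall"].append(recall_score(labels[val_idx], preds))
        scores["precision"].append(precision_score(labels[val_idx], preds))
        scores["accuracy"].append(accuracy_score(labels[val_idx], preds))
    return {k: (np.mean(v), np.std(v)) for k, v in scores.items()}
```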
Affiliation(s)
- Jinxin Miao
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jiale Yu
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Wenjun Zou
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Department of Ophthalmology, The Affiliated Wuxi No.2 People's Hospital of Nanjing Medical University, Wuxi, China
- Na Su
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zongyi Peng
- The First School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Xinjing Wu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Junlong Huang
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yuan Fang
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Ping Xie
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Kun Huang
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science & Technology, Nanjing, China
- Zizhong Hu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qinghuai Liu
- Department of Ophthalmology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- *Correspondence: Zizhong Hu; Qinghuai Liu

24
Ara RK, Matiolański A, Dziech A, Baran R, Domin P, Wieczorkiewicz A. Fast and Efficient Method for Optical Coherence Tomography Images Classification Using Deep Learning Approach. SENSORS (BASEL, SWITZERLAND) 2022; 22:4675. [PMID: 35808169 PMCID: PMC9269557 DOI: 10.3390/s22134675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 06/13/2022] [Accepted: 06/16/2022] [Indexed: 05/18/2023]
Abstract
The use of optical coherence tomography (OCT) in medical diagnostics is now common. The growing amount of data has led us to propose an automated support system for medical staff. The key part of the system is a classification algorithm developed with modern machine learning techniques. The main contribution is a new approach to the classification of eye diseases using a convolutional neural network model. The research concerns the classification of patients, on the basis of OCT B-scans, into one of four categories: Diabetic Macular Edema (DME), Choroidal Neovascularization (CNV), Drusen, and Normal. These categories are available in a publicly available dataset of more than 84,000 images that was utilized for the research. After testing several architectures, a 5-layer neural network gave promising results; comparison with other available solutions demonstrated the quality of the algorithm. Equally important for application of the algorithm is the computational time, which is reduced by the limited size of the model. In addition, the article presents a detailed method of image data augmentation and its impact on the classification results. Results are also reported for several derived convolutional network architectures tested during the research. Improving processes in medical treatment is important: the algorithm cannot replace a doctor but can, for example, be a valuable tool for speeding up diagnosis during screening tests.
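To make the idea of a compact classifier concrete, here is a hedged PyTorch sketch of a small CNN of roughly this depth for the four OCT categories, together with simple training-time augmentation. The layer sizes and augmentation choices are illustrative guesses, not the architecture reported in the paper.
```python
# Illustrative compact CNN (not the authors' exact architecture): a small
# convolutional classifier for OCT B-scans labelled DME / CNV / Drusen / Normal,
# with simple training-time augmentation.
import torch
import torch.nn as nn
from torchvision import transforms

# Training-time augmentation applied to PIL images (assumed choices).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

class SmallOCTNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Quick shape check with a random grayscale B-scan batch:
print(SmallOCTNet()(torch.randn(2, 1, 128, 128)).shape)  # -> torch.Size([2, 4])
```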
Affiliation(s)
- Rouhollah Kian Ara
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland
- Andrzej Matiolański
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland
- Andrzej Dziech
- Institute of Telecommunications, AGH University of Science and Technology, 30-059 Krakow, Poland
- Remigiusz Baran
- Faculty of Electrical Engineering, Automatic Control and Computer Science, Kielce University of Technology, 25-314 Kielce, Poland
- Paweł Domin
- Consultronix S.A., 32-083 Balice, Poland

25
End-to-End Multi-Task Learning Approaches for the Joint Epiretinal Membrane Segmentation and Screening in OCT Images. Comput Med Imaging Graph 2022; 98:102068. [DOI: 10.1016/j.compmedimag.2022.102068] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 03/28/2022] [Accepted: 04/18/2022] [Indexed: 02/07/2023]

26
Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. EXPERT REVIEW OF PRECISION MEDICINE AND DRUG DEVELOPMENT 2022. [DOI: 10.1080/23808993.2022.2154146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
- Loay Rahman
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Ammaarah Hafejee
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Rajeevan Anantharanjit
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
- Wei Wei
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK

27
Cai S, Han IC, Scott AW. Artificial intelligence for improving sickle cell retinopathy diagnosis and management. Eye (Lond) 2021; 35:2675-2684. [PMID: 33958737 PMCID: PMC8452674 DOI: 10.1038/s41433-021-01556-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Revised: 03/17/2021] [Accepted: 04/13/2021] [Indexed: 02/04/2023] Open
Abstract
Sickle cell retinopathy is often initially asymptomatic even in proliferative stages, but can progress to cause vision loss due to vitreous haemorrhages or tractional retinal detachments. Challenges with access and adherence to screening dilated fundus examinations, particularly in medically underserved areas where the burden of sickle cell disease is highest, highlight the need for novel approaches to screening for patients with vision-threatening sickle cell retinopathy. This article reviews the existing literature on and suggests future research directions for coupling artificial intelligence with multimodal retinal imaging to expand access to automated, accurate, imaging-based screening for sickle cell retinopathy. Given the variability in retinal specialist practice patterns with regards to monitoring and treatment of sickle cell retinopathy, we also discuss recent progress toward development of machine learning models that can quantitatively track disease progression over time. These artificial intelligence-based applications have great potential for informing evidence-based and resource-efficient clinical diagnosis and management of sickle cell retinopathy.
Affiliation(s)
- Sophie Cai
- Retina Division, Duke Eye Center, Durham, NC, USA
- Ian C Han
- Institute for Vision Research, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Adrienne W Scott
- Retina Division, Wilmer Eye Institute, Johns Hopkins University School of Medicine and Hospital, Baltimore, MD, USA

28
Fung AT, Galvin J, Tran T. Epiretinal membrane: A review. Clin Exp Ophthalmol 2021; 49:289-308. [PMID: 33656784 DOI: 10.1111/ceo.13914] [Citation(s) in RCA: 105] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Revised: 02/14/2021] [Accepted: 02/16/2021] [Indexed: 02/07/2023]
Abstract
The prevalence of epiretinal membrane (ERM) is 7% to 11.8%, with increasing age being the most important risk factor. Although most ERM is idiopathic, common secondary causes include cataract surgery, retinal vascular disease, uveitis and retinal tears. The myofibroblastic pre-retinal cells are thought to transdifferentiate from glial and retinal pigment epithelial cells that reach the retinal surface via defects in the internal limiting membrane (ILM) or from the vitreous cavity. Grading schemes have evolved from clinical signs to optical coherence tomography (OCT)-based classification with associated features such as the cotton ball sign. Features predictive of better prognosis include absence of ectopic inner foveal layers, cystoid macular oedema, acquired vitelliform lesions and ellipsoid and cone outer segment termination defects. OCT angiography shows a reduced size of the foveal avascular zone. Vitrectomy with membrane peeling remains the mainstay of treatment for symptomatic ERMs. Additional ILM peeling reduces recurrence but is associated with anatomical changes including inner retinal dimpling.
Affiliation(s)
- Adrian T Fung
- Westmead Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia
- Save Sight Institute, Central Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia
- Department of Ophthalmology, Faculty of Medicine, Health and Human Sciences, Macquarie University Hospital, Sydney, New South Wales, Australia
- Justin Galvin
- St. Vincent's Hospital, Melbourne, Victoria, Australia
- Tuan Tran
- Save Sight Institute, Central Clinical School, Discipline of Ophthalmology and Eye Health, The University of Sydney, Sydney, New South Wales, Australia