1. Monemian M, Daneshmand PG, Rakhshani S, Rabbani H. A new texture-based labeling framework for hyper-reflective foci identification in retinal optical coherence tomography images. Sci Rep 2024; 14:22933. PMID: 39358477; PMCID: PMC11446929; DOI: 10.1038/s41598-024-73927-2.
Abstract
Hyper-Reflective Foci (HRF) are an important abnormality in Optical Coherence Tomography (OCT) images. They can serve as a biomarker of serious retinal diseases such as Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME), or of progression from an early to a late stage of disease. This paper proposes a new method for identifying HRF. The method divides an OCT B-scan into patches and checks each patch separately for the presence of an HRF. Patch verification uses a texture-based framework that assigns labels to each column and row according to intensity changes; a feature vector is then extracted for each patch from the assigned labels. The feature vectors are used to train well-known classifiers such as the Support Vector Machine (SVM), which then produce labels for the test OCT images. The method is evaluated on a public dataset with HRF annotations, and the experimental results show that it provides outstanding results in terms of speed and accuracy.
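A toy sketch of the patch-labeling idea described in the abstract. This is a hypothetical simplification, not the authors' exact scheme: the `line_label` rule, the intensity threshold, and the label vocabulary are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's method): label each row and column of a
# patch by mean intensity, then use the label counts as a feature vector that
# could feed a classifier such as an SVM.

def line_label(values, threshold=128):
    """Label a row or column 'bright' or 'dark' by its mean intensity (assumed rule)."""
    return "bright" if sum(values) / len(values) > threshold else "dark"

def patch_features(patch):
    """Feature vector: counts of bright/dark rows and bright/dark columns."""
    rows = [line_label(r) for r in patch]
    cols = [line_label(c) for c in zip(*patch)]  # transpose to iterate columns
    return [rows.count("bright"), rows.count("dark"),
            cols.count("bright"), cols.count("dark")]

streak = [[0, 0, 200], [0, 0, 210], [0, 0, 205]]  # bright vertical streak (HRF-like)
background = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]    # uniform dark background
print(patch_features(streak), patch_features(background))  # [0, 3, 1, 2] [0, 3, 0, 3]
```

The two patches yield distinct feature vectors, which is what makes such texture labels usable as classifier input.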
Affiliation(s)
- Maryam Monemian: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Parisa Ghaderi Daneshmand: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Sajed Rakhshani: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Hossein Rabbani: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
2. Agrón E, Domalpally A, Chen Q, Lu Z, Chew EY, Keenan TDL. An Updated Simplified Severity Scale for Age-Related Macular Degeneration Incorporating Reticular Pseudodrusen: Age-Related Eye Disease Study Report Number 42. Ophthalmology 2024; 131:1164-1174. PMID: 38657840; PMCID: PMC11416341; DOI: 10.1016/j.ophtha.2024.04.011.
Abstract
PURPOSE To update the Age-Related Eye Disease Study (AREDS) simplified severity scale for risk of late age-related macular degeneration (AMD), including incorporation of reticular pseudodrusen (RPD), and to perform external validation on the Age-Related Eye Disease Study 2 (AREDS2). DESIGN Post hoc analysis of 2 clinical trial cohorts: AREDS and AREDS2. PARTICIPANTS Participants with no late AMD in either eye at baseline in AREDS (n = 2719) and AREDS2 (n = 1472). METHODS Five-year rates of progression to late AMD were calculated according to levels 0 to 4 on the simplified severity scale after 2 updates: (1) noncentral geographic atrophy (GA) considered part of the outcome, rather than a risk feature, and (2) scale separation according to RPD status (determined by validated deep learning grading of color fundus photographs). MAIN OUTCOME MEASURES Five-year rate of progression to late AMD (defined as neovascular AMD or any GA). RESULTS In the AREDS, after the first scale update, the 5-year rates of progression to late AMD for levels 0 to 4 were 0.3%, 4.5%, 12.9%, 32.2%, and 55.6%, respectively. As the final simplified severity scale, the 5-year progression rates for levels 0 to 4 were 0.3%, 4.3%, 11.6%, 26.7%, and 50.0%, respectively, for participants without RPD at baseline and 2.8%, 8.0%, 29.0%, 58.7%, and 72.2%, respectively, for participants with RPD at baseline. In external validation on the AREDS2, for levels 2 to 4, the progression rates were similar: 15.0%, 27.7%, and 45.7% (RPD absent) and 26.2%, 46.0%, and 73.0% (RPD present), respectively. CONCLUSIONS The AREDS AMD simplified severity scale has been modernized with 2 important updates. The new scale for individuals without RPD has 5-year progression rates of approximately 0.5%, 4%, 12%, 25%, and 50%, such that the rates on the original scale remain accurate. 
The new scale for individuals with RPD has 5-year progression rates of approximately 3%, 8%, 30%, 60%, and 70%, that is, approximately double for most levels. The scale fits updated definitions of late AMD, has increased prognostic accuracy, and appears generalizable to similar populations, yet remains simple enough for broad risk categorization. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
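The updated simplified severity scale reads naturally as a small lookup table. The sketch below encodes the 5-year progression rates quoted in the abstract (AREDS, final updated scale); the function name and table layout are illustrative, not from the paper.

```python
# Five-year rates of progression to late AMD (%) for severity levels 0-4,
# as reported in the abstract for the final updated simplified severity scale.

FIVE_YEAR_RISK = {
    False: [0.3, 4.3, 11.6, 26.7, 50.0],  # RPD absent at baseline
    True:  [2.8, 8.0, 29.0, 58.7, 72.2],  # RPD present at baseline
}

def progression_risk(level, rpd_present):
    """Approximate 5-year risk (%) of late AMD for a severity level 0-4."""
    return FIVE_YEAR_RISK[rpd_present][level]

print(progression_risk(3, True))   # 58.7
print(progression_risk(3, False))  # 26.7 -- roughly half, as the abstract notes
```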
Affiliation(s)
- Elvira Agrón: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Amitha Domalpally: Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison School of Medicine and Public Health, Madison, Wisconsin
- Qingyu Chen: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland; Biomedical Informatics and Data Science, School of Medicine, Yale University, New Haven, Connecticut
- Zhiyong Lu: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Emily Y Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Tiarnan D L Keenan: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
3. Kumar H, Bagdasarova Y, Song S, Hickey DG, Cohn AC, Okada M, Finger RP, Terheyden JH, Hogg RE, Gabrielle PH, Arnould L, Jannaud M, Hadoux X, van Wijngaarden P, Abbott CJ, Hodgson LAB, Schwartz R, Tufail A, Chew EY, Lee CS, Fletcher EL, Bahlo M, Ansell BRE, Pébay A, Guymer RH, Lee AY, Wu Z. Deep Learning-Based Detection of Reticular Pseudodrusen in Age-Related Macular Degeneration on Optical Coherence Tomography. medRxiv [preprint] 2024:2024.09.11.24312817. PMID: 39314940; PMCID: PMC11419239; DOI: 10.1101/2024.09.11.24312817.
Abstract
Reticular pseudodrusen (RPD) signify a critical phenotype driving vision loss in age-related macular degeneration (AMD). Their detection is paramount in the clinical management of those with AMD, yet they remain challenging to identify reliably. We therefore developed a deep learning (DL) model to segment RPD from 9,800 optical coherence tomography B-scans. The model produced RPD segmentations in higher agreement with four retinal specialists (Dice similarity coefficient [DSC] = 0.76; 95% confidence interval [CI] 0.71-0.81) than the agreement among the specialists themselves (DSC = 0.68; 95% CI 0.63-0.73; p < 0.001). In five external test datasets comprising 1,017 eyes from 812 individuals, the DL model detected RPD with a level of performance similar to two retinal specialists (areas under the curve of 0.94 [95% CI 0.92-0.97], 0.95 [95% CI 0.92-0.97], and 0.96 [95% CI 0.94-0.98], respectively; p ≥ 0.32). This DL model, which we have made publicly available, enables the automatic detection and quantification of RPD with expert-level performance.
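The Dice similarity coefficient (DSC) used above to report segmentation agreement is straightforward to compute; a minimal sketch for flat binary masks (the convention of returning 1.0 for two empty masks is an assumption, not from the paper):

```python
# Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks.

def dice(mask_a, mask_b):
    """DSC between two equal-length binary masks (sequences of 0/1)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # both masks empty: treat as agreement

a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
print(round(dice(a, b), 3))  # 2*2 / (3+3) -> 0.667
```

A DSC of 1.0 means identical masks; values around 0.7, as reported above, indicate substantial but imperfect overlap.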
4. Akpinar MH, Sengur A, Faust O, Tong L, Molinari F, Acharya UR. Artificial intelligence in retinal screening using OCT images: A review of the last decade (2013-2023). Comput Methods Programs Biomed 2024; 254:108253. PMID: 38861878; DOI: 10.1016/j.cmpb.2024.108253.
Abstract
BACKGROUND AND OBJECTIVES Optical coherence tomography (OCT) has ushered in a transformative era in ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. OCT is frequently used to diagnose fundamental ocular pathologies such as glaucoma and age-related macular degeneration (AMD), which has driven the technology's widespread adoption. Beyond glaucoma and AMD, we also cover pertinent pathologies, including epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD), and choroidal neovascularization (CNV). This comprehensive review examines the role OCT-derived images play in detecting, characterizing, and monitoring eye diseases. METHOD The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Excluding conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny excluded 435 more articles due to lower-quality indexing or irrelevance, leaving 76 journal articles for the review. RESULTS We found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than a trial-and-error approach.
Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT. Consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.
Affiliation(s)
- Muhammed Halil Akpinar: Department of Electronics and Automation, Vocational School of Technical Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Abdulkadir Sengur: Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Oliver Faust: School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Louis Tong: Singapore Eye Research Institute, Singapore, Singapore
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
5. Wheeler TW, Hunter K, Garcia PA, Li H, Thomson AC, Hunter A, Mehanian C. Self-supervised contrastive learning improves machine learning discrimination of full thickness macular holes from epiretinal membranes in retinal OCT scans. PLOS Digit Health 2024; 3:e0000411. PMID: 39186771; PMCID: PMC11346922; DOI: 10.1371/journal.pdig.0000411.
Abstract
There is growing interest in using computer-assisted models for the detection of macular conditions from optical coherence tomography (OCT) data. Because the quantity of clinical scan data for specific conditions is limited, these models are typically developed by fine-tuning a generalized network to classify the macular conditions of interest. Full-thickness macular holes (FTMH) are a condition requiring urgent surgical repair to prevent vision loss. Prior work on automated FTMH classification has tended to use supervised ImageNet pre-trained networks, with good results but room for improvement. In this paper, we develop a model for FTMH classification by using OCT B-scans around the central foveal region to pre-train a naïve network with contrastive self-supervised learning. We found that self-supervised pre-trained networks outperform ImageNet pre-trained networks despite a small training set (284 eyes total, 51 FTMH+ eyes, 3 B-scans from each eye). On three replicate data splits, 3D spatial contrast pre-training yields a model with an average F1-score of 1.0 on holdout data (50 eyes total, 10 FTMH+), compared with an average F1-score of 0.831 for FTMH detection by ImageNet pre-trained models. These results demonstrate that even limited data may be applied toward self-supervised pre-training to substantially improve performance for FTMH classification, indicating applicability to other OCT-based problems.
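The F1-score the abstract reports is the harmonic mean of precision and recall; a short sketch of the arithmetic (the example counts are illustrative, not the study's):

```python
# F1-score from confusion-matrix counts: harmonic mean of precision and recall.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)  # fraction of positive calls that are correct
    recall = tp / (tp + fn)     # fraction of true positives that are found
    return 2 * precision * recall / (precision + recall)

# A detector with no false positives or false negatives on a holdout set
# reaches F1 = 1.0, the score reported above for the self-supervised model:
print(f1_score(tp=10, fp=0, fn=0))  # 1.0
print(round(f1_score(tp=8, fp=2, fn=2), 3))  # 0.8 -- illustrative imperfect detector
```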
Affiliation(s)
- Timothy William Wheeler: Department of Bioengineering, University of Oregon, Eugene, Oregon, United States of America
- Kaitlyn Hunter: Oregon Eye Consultants, Eugene, Oregon, United States of America
- Henry Li: Oregon Eye Consultants, Eugene, Oregon, United States of America
- Allan Hunter: Oregon Eye Consultants, Eugene, Oregon, United States of America
- Courosh Mehanian: Department of Bioengineering, University of Oregon, Eugene, Oregon, United States of America; Global Health Labs, Bellevue, Washington, United States of America
6. Drakopoulos M, Hooshmand D, Machlab LA, Bryar PJ, Hammond KJ, Mirza RG. Machine Teaching Allows for Rapid Development of Automated Systems for Retinal Lesion Detection From Small Image Datasets. Ophthalmic Surg Lasers Imaging Retina 2024; 55:475-478. PMID: 38752915; DOI: 10.3928/23258160-20240410-01.
Abstract
Machine teaching, a machine learning subfield, may allow for rapid development of artificial intelligence systems able to automatically identify emerging ocular biomarkers from small imaging datasets. We sought to use machine teaching to automatically identify retinal ischemic perivascular lesions (RIPLs) and subretinal drusenoid deposits (SDDs), two emerging ocular biomarkers of cardiovascular disease. IRB approval was obtained. Four small datasets of SD-OCT B-scans were used to train and test two distinct automated systems, one identifying RIPLs and the other identifying SDDs. An open-source interactive machine-learning program, RootPainter, was used to perform annotation and training simultaneously over a 6-hour period. For SDDs at the B-scan level, test-set accuracy = 92%, sensitivity = 100%, specificity = 88%, positive predictive value (PPV) = 82%, and negative predictive value (NPV) = 100%. For RIPLs at the B-scan level, test-set accuracy = 90%, sensitivity = 60%, specificity = 93%, PPV = 50%, and NPV = 95%. Machine teaching shows promise within ophthalmic imaging for rapid, automated identification of novel biomarkers from small image datasets.
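The screening metrics quoted above all derive from the four confusion-matrix counts; a sketch of the standard definitions (the example counts are invented for illustration and are not the study's data, though they produce a pattern similar to the SDD figures):

```python
# Standard screening metrics from confusion-matrix counts:
# tp/fp/tn/fn = true/false positives, true/false negatives.

def screening_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# Illustrative counts only (not from the study):
m = screening_metrics(tp=9, fp=2, tn=15, fn=0)
print({k: round(v, 2) for k, v in m.items()})
# {'accuracy': 0.92, 'sensitivity': 1.0, 'specificity': 0.88, 'ppv': 0.82, 'npv': 1.0}
```

Note that with zero false negatives, sensitivity and NPV are both exactly 1.0, which is why those two metrics reach 100% together in the SDD results.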
7. Frank S, Reiter GS, Leingang O, Fuchs P, Coulibaly LM, Mares V, Bogunovic H, Schmidt-Erfurth U. Advances in Photoreceptor and Retinal Pigment Epithelium Quantifications in Intermediate Age-Related Macular Degeneration: High-Res Versus Standard SPECTRALIS Optical Coherence Tomography. Retina 2024; 44:1351-1359. PMID: 39047196; PMCID: PMC11280440; DOI: 10.1097/iae.0000000000004118.
Abstract
PURPOSE To investigate how differences in axial resolution between high-resolution and standard optical coherence tomography (OCT) devices affect quantification of the retinal pigment epithelium and photoreceptors (PRs) in intermediate age-related macular degeneration. METHODS Patients were imaged with the standard SPECTRALIS HRA+OCT and the investigational High-Res OCT device (both Heidelberg Engineering, Heidelberg, Germany). Drusen, retinal pigment epithelium, and PR layers were segmented using validated artificial intelligence-based algorithms followed by manual correction. Thickness and drusen maps were computed for all patients. Loss and thickness measurements were compared between devices, between drusen and nondrusen areas, and across Early Treatment Diabetic Retinopathy Study subfields using mixed-effects models. RESULTS Thirty-three eyes from 28 patients with intermediate age-related macular degeneration were included. Normalized PR integrity loss was significantly higher for standard OCT (4.6%) than for High-Res OCT (2.5%). Central and parafoveal PR integrity loss was larger than perifoveal loss (P < 0.05). Photoreceptor thickness was increased on High-Res OCT and in nondrusen regions (P < 0.001). The retinal pigment epithelium appeared thicker on standard OCT and above drusen (P < 0.01). CONCLUSION High-Res OCT identifies the condition of the investigated layers in intermediate age-related macular degeneration with higher precision. This improved in vivo imaging technology may advance our understanding of the pathophysiology and progression of age-related macular degeneration.
Affiliation(s)
- Sophie Frank: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Gregor Sebastian Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Oliver Leingang: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Philipp Fuchs: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Leonard Mana Coulibaly: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Virginia Mares: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria; Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Hrvoje Bogunovic: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria; Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
8. Lim JI, Rachitskaya AV, Hallak JA, Gholami S, Alam MN. Artificial intelligence for retinal diseases. Asia Pac J Ophthalmol (Phila) 2024; 13:100096. PMID: 39209215; DOI: 10.1016/j.apjo.2024.100096.
Abstract
PURPOSE To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management and analysis of treatment outcomes of common retinal diseases. METHODS We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP) and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed AI algorithms could predict disease progression and response to treatment. Studies showed AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS AI applications for retinal diseases can assist the clinician, not only by disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.
Affiliation(s)
- Jennifer I Lim: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Aleksandra V Rachitskaya: Department of Ophthalmology, Case Western Reserve University, Cleveland Clinic Lerner College of Medicine, Cleveland Clinic Cole Eye Institute, United States
- Joelle A Hallak: Department of Ophthalmology and Visual Sciences, College of Medicine, University of Illinois at Chicago, Chicago, IL, United States
- Sina Gholami: University of North Carolina at Charlotte, United States
- Minhaj N Alam: University of North Carolina at Charlotte, United States
9. Sorrentino FS, Gardini L, Fontana L, Musa M, Gabai A, Maniaci A, Lavalle S, D’Esposito F, Russo A, Longo A, Surico PL, Gagliano C, Zeppieri M. Novel Approaches for Early Detection of Retinal Diseases Using Artificial Intelligence. J Pers Med 2024; 14:690. PMID: 39063944; PMCID: PMC11278069; DOI: 10.3390/jpm14070690.
Abstract
BACKGROUND A growing number of people worldwide are affected by retinal diseases related to conditions such as diabetes, vascular occlusions, maculopathy, alterations of systemic circulation, and metabolic syndrome. AIM This review discusses novel technologies and potential approaches for the detection and diagnosis of retinal diseases with the support of cutting-edge equipment and artificial intelligence (AI). METHODS Demand for retinal diagnostic imaging has increased, but there are too few eye physicians and technicians to meet it. AI-based algorithms have therefore been used as valid support for early detection, helping doctors make diagnoses and differential diagnoses. AI enables patients who live far from hub centers to receive testing and a rapid initial diagnosis, sparing them travel and long waits for a medical reply. RESULTS Highly automated systems for screening, early diagnosis, grading, and tailored therapy will facilitate care, even in remote regions or countries. CONCLUSION Extensive use of AI could optimize the automated detection of subtle retinal alterations, allowing eye doctors to provide their best clinical assistance and choose the best options for the treatment of retinal diseases.
Affiliation(s)
- Lorenzo Gardini: Unit of Ophthalmology, Department of Surgical Sciences, Ospedale Maggiore, 40100 Bologna, Italy (F.S.S.)
- Luigi Fontana: Ophthalmology Unit, Department of Surgical Sciences, Alma Mater Studiorum University of Bologna, IRCCS Azienda Ospedaliero-Universitaria Bologna, 40100 Bologna, Italy
- Mutali Musa: Department of Optometry, University of Benin, Benin City 300238, Edo State, Nigeria
- Andrea Gabai: Department of Ophthalmology, Humanitas-San Pio X, 20159 Milan, Italy
- Antonino Maniaci: Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Salvatore Lavalle: Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy
- Fabiana D’Esposito: Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW15QH, UK; Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
- Andrea Russo: Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Antonio Longo: Department of Ophthalmology, University of Catania, 95123 Catania, Italy
- Pier Luigi Surico: Schepens Eye Research Institute of Mass Eye and Ear, Harvard Medical School, Boston, MA 02114, USA; Department of Ophthalmology, Campus Bio-Medico University, 00128 Rome, Italy
- Caterina Gagliano: Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy; Eye Clinic, Catania University, San Marco Hospital, Viale Carlo Azeglio Ciampi, 95121 Catania, Italy
- Marco Zeppieri: Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
10. Adithiya SV, Dharani Bai G, Raman R. Automatic Identification and Severity Classification of Retinal Biomarkers in SD-OCT Using Dilated Depthwise Separable Convolution ResNet with SVM Classifier. Curr Eye Res 2024; 49:513-523. PMID: 38251704; DOI: 10.1080/02713683.2024.2303713.
Abstract
PURPOSE Diagnosis of uveitic macular edema (UME) using spectral-domain OCT (SD-OCT) is a promising method for early detection and monitoring of sight-threatening visual impairment. Viewing multiple B-scans and identifying biomarkers is challenging and time-consuming for clinical practitioners. To overcome these challenges, this paper proposes a hybrid image-classification framework for predicting the presence of biomarkers such as intraretinal cysts (IRC), hyperreflective foci (HRF), hard exudates (HE), and neurosensory detachment (NSD) in OCT B-scans, along with their severity. METHODS A dataset of 10,880 B-scans from 85 uveitic patients was collected and graded by two board-certified ophthalmologists for the presence of biomarkers. A novel image-classification framework, Dilated Depthwise Separable Convolution ResNet (DDSC-RN) with an SVM classifier, was developed to achieve network compression with a larger receptive field that captures both low- and high-level features of the biomarkers without loss of classification accuracy. The severity level of each biomarker was predicted from the feature map extracted by the proposed DDSC-RN network. RESULTS The proposed hybrid model was evaluated using ground-truth labels from the hospital. The deep learning model first identified the presence of biomarkers in B-scans, achieving an overall accuracy of 98.64%, comparable to the performance of other state-of-the-art models such as DRN-C-42 and ResNet-34. The SVM classifier then predicted the severity of each biomarker, achieving an overall accuracy of 89.3%. CONCLUSIONS The new hybrid model accurately identifies four retinal biomarkers on a tissue map and predicts their severity. It outperforms other methods for identifying multiple biomarkers in complex OCT B-scans, helping clinicians screen multiple B-scans of UME more effectively and leading to better treatment outcomes.
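The network compression claimed for depthwise separable convolutions comes from simple parameter arithmetic; a sketch of the standard calculation (the layer sizes chosen are illustrative, not taken from the DDSC-RN architecture):

```python
# Parameter counts for one convolutional layer (k x k kernel, c_in -> c_out
# channels, biases ignored), showing why depthwise separable convolutions
# compress a network relative to standard convolutions.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k spatial filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution mixes the channels
    return depthwise + pointwise

# Illustrative ResNet-style layer: 3x3 kernel, 256 -> 256 channels.
std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 2,304 + 65,536 = 67,840
print(round(std / sep, 1))  # 8.7 -- roughly 9x fewer parameters
```

Dilation enlarges the receptive field of the depthwise filters without adding any parameters, which is consistent with the abstract's goal of compression plus a larger receptive field.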
Affiliation(s)
- Adithiya S V: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Dharani Bai G: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
11. Azoad Ahnaf SM, Saha S, Frost S, Atiqur Rahaman GM. Understanding and interpreting CNN's decision in optical coherence tomography-based AMD detection. Eur J Ophthalmol 2024; 34:803-815. PMID: 37671441; DOI: 10.1177/11206721231199126.
Abstract
INTRODUCTION Automated assessment of age-related macular degeneration (AMD) using optical coherence tomography (OCT) has gained significant research attention in recent years. Although a number of convolutional neural network (CNN)-based methods have been proposed recently, methods that uncover the decision-making process of CNNs, or critically interpret their decisions in this context, are scarce. This study aims to bridge that research gap. METHODS We independently trained several state-of-the-art CNN models (VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2) for AMD detection and applied CNN visualization techniques (Grad-CAM, Grad-CAM++, Score-CAM, and Faster Score-CAM) to highlight the regions of interest the CNNs used. Retinal layer segmentation methods were also developed to explore how the CNN regions of interest related to the layers of the retinal structure. Extensive experiments involving 2130 SD-OCT scans collected from Duke University were performed. RESULTS Experimental analysis shows that the region from the Outer Nuclear Layer to the Inner Segment Myeloid (ONL-ISM) heavily influences the AMD detection decision, as evident from the normalized intersection (NI) scores. For AMD cases, the average NI scores were 13.13%, 17.2%, 9.7%, 10.95%, and 11.31% for VGG16, VGG19, ResNet50, Xception, and InceptionResNetV2, respectively, whereas for normal cases these values were 21.7%, 21.3%, 16.85%, 10.175%, and 16%. CONCLUSION Critical analysis reveals that the ONL-ISM is the most contributing layer in determining AMD, followed by the region from the Nerve Fiber Layer to the Inner Plexiform Layer (NFL-IPL).
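The abstract does not spell out how the normalized intersection (NI) score is computed. One plausible reading, for illustration only: the fraction of the CNN's salient (CAM) region that falls inside a given retinal layer mask. The function name, the overlap-over-CAM-area definition, and the pixel sets below are all assumptions, not the paper's.

```python
# Hypothetical sketch of a normalized intersection (NI) score: percentage of
# the CAM-highlighted pixels that lie inside a retinal layer mask. The paper's
# exact definition may differ; this is an assumed reading for illustration.

def normalized_intersection(cam_pixels, layer_pixels):
    cam, layer = set(cam_pixels), set(layer_pixels)
    return 100.0 * len(cam & layer) / len(cam) if cam else 0.0  # percent

cam = {(0, 1), (0, 2), (1, 1), (1, 2)}       # pixels a CAM method highlights
onl_ism = {(1, 0), (1, 1), (1, 2), (1, 3)}   # pixels in a (toy) ONL-ISM mask
print(normalized_intersection(cam, onl_ism))  # 50.0
```

Under such a definition, a higher NI for a layer means the CNN's evidence concentrates in that layer, matching how the abstract uses the scores.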
Collapse
Affiliation(s)
- S M Azoad Ahnaf
- Computational Color and Spectral Image Analysis Lab, Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Sajib Saha
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- Shaun Frost
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Perth, Australia
- G M Atiqur Rahaman
- Computational Color and Spectral Image Analysis Lab, Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
12
Crincoli E, Sacconi R, Querques L, Querques G. Artificial intelligence in age-related macular degeneration: state of the art and recent updates. BMC Ophthalmol 2024; 24:121. [PMID: 38491380 PMCID: PMC10943791 DOI: 10.1186/s12886-024-03381-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Accepted: 03/06/2024] [Indexed: 03/18/2024] Open
Abstract
Age-related macular degeneration (AMD) represents a leading cause of vision loss and is expected to affect 288 million people by 2040. During the last decade, machine learning technologies have shown great potential to revolutionize the clinical management of AMD and to support research for a better understanding of the disease. The aim of this review is to provide a panoramic description of all the applications of AI to AMD management and screening that have been analyzed in the recent literature. Deep learning (DL) can be effectively used to diagnose AMD and to predict the short-term risk of exudation and the need for injections within the next 2 years. Moreover, DL technology has the potential to customize anti-VEGF treatment choice with a higher accuracy than human experts. In addition, accurate prediction of visual acuity (VA) response to treatment can be provided to patients with the use of ML models, which could considerably increase patients' compliance with treatment in favorable cases. Lastly, AI, especially in the form of DL, can effectively predict conversion to geographic atrophy (GA) within 12 months and can also suggest new biomarkers of conversion through an innovative reverse engineering approach.
Affiliation(s)
- Emanuele Crincoli
- Ophthalmology Unit, "Fondazione Policlinico Universitario A. Gemelli IRCCS", Rome, Italy
- Riccardo Sacconi
- Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
- Lea Querques
- Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
- Giuseppe Querques
- Department of Ophthalmology, University Vita-Salute IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy
13
Emamverdi M, Habibi A, Ashrafkhorasani M, Nittala MG, Kadomoto S, Sadda SR. Optical Coherence Tomography Features of Macular Hyperpigmented Lesions without Intraretinal Hyperreflective Foci in Age-Related Macular Degeneration. Curr Eye Res 2024; 49:73-79. [PMID: 37937806 DOI: 10.1080/02713683.2023.2267801] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 10/02/2023] [Indexed: 11/09/2023]
Abstract
PURPOSE To evaluate the optical coherence tomography (OCT) features of hyperpigmented lesions in the absence of intraretinal hyperreflective foci (IHRF) on OCT in eyes with age-related macular degeneration (AMD). METHODS We retrospectively analyzed OCT images of eyes with intermediate AMD (iAMD) and macular hyperpigmentation (HP) on color fundus photography (CFP) but without IHRF on OCT in the corresponding location. The most prominent or definite HP was selected for analysis. The infrared reflectance (IR) image was registered with the CFP, and the location corresponding to the HP lesion was defined on the IR image. The location of the HP on the corresponding OCT B-scan was assessed for retinal pigment epithelium (RPE) elevation, acquired vitelliform lesion (AVL), abnormal retinal pigment epithelium + basal lamina (RPE + BL) band reflectivity, RPE + BL band thickening, and interdigitation zone (IZ), ellipsoid zone (EZ), and external limiting membrane (ELM) disruption. RESULTS Forty-nine eyes (39 patients) were included in this study. Forty-six (94%) of the hyperpigmented lesions showed a thickened RPE + BL band. RPE + BL band reflectivity was increased in 37 (76%) of the lesions. RPE + BL band thickening, however, was not correlated with RPE + BL band reflectivity (p = 0.31). Either thickening or hyperreflectivity of the RPE + BL band was present in all cases. Twenty (41%) lesions had evidence of ELM disruption, 42 (86%) demonstrated EZ disruption, and 48 (98%) had IZ disruption. Five (10%) HP lesions demonstrated AVL. Among cases with RPE elevation (15 cases, 31%), 10 were classified as drusen, 2 as drusenoid pigment epithelial detachments (PEDs), and 3 as fibrovascular PEDs. CONCLUSIONS Thickening and/or hyperreflectivity of the RPE + BL band commonly correspond to regions of macular hyperpigmentation without IHRF in eyes with iAMD.
Affiliation(s)
- Mehdi Emamverdi
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
- Abbas Habibi
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
- Maryam Ashrafkhorasani
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
- Muneeswar G Nittala
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
- Shin Kadomoto
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
- SriniVas R Sadda
- Department of Ophthalmology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Doheny Image Reading and Research Laboratory, Doheny Eye Institute, Pasadena, CA, USA
14
Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. [PMID: 38088362 DOI: 10.1080/17434440.2023.2294364] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Accepted: 12/09/2023] [Indexed: 12/19/2023]
Abstract
INTRODUCTION The steadily growing and aging world population, in conjunction with the continuously increasing prevalence of vision-threatening retinal diseases, is placing an increasing burden on the global healthcare system. The main challenges in retinology involve identifying the comparatively few patients requiring therapy within the larger population, ensuring comprehensive screening for retinal disease, and planning individualized therapy. In order to sustain high-quality ophthalmic care in the future, the incorporation of artificial intelligence (AI) technologies into clinical practice represents a potential solution. AREAS COVERED This review sheds light on already realized and promising future applications of AI techniques in retinal imaging. The main attention is directed at applications in diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapeutic planning, and prediction of future developments are explained based on the currently available literature. EXPERT OPINION The recent accomplishments of AI in retinal imaging indicate that its implementation into daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, it must be emphasized that the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than replacing ophthalmologists.
Affiliation(s)
- Katharina A Heger
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
- Sebastian M Waldstein
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
15
Corvi F, Corradetti G, Laiginhas R, Liu J, Gregori G, Rosenfeld PJ, Sadda SR. Comparison between B-Scan and En Face Images for Incomplete and Complete Retinal Pigment Epithelium and Outer Retinal Atrophy. Ophthalmol Retina 2023; 7:999-1009. [PMID: 37437713 DOI: 10.1016/j.oret.2023.07.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 06/29/2023] [Accepted: 07/05/2023] [Indexed: 07/14/2023]
Abstract
PURPOSE To evaluate and compare the detection of incomplete retinal pigment epithelium and outer retinal atrophy (iRORA) and complete retinal pigment epithelium and outer retinal atrophy (cRORA) assessed on OCT B-scans versus persistent choroidal hypertransmission defects (hyperTDs) assessed by en face choroidal OCT images. DESIGN Retrospective, cross-sectional study. PARTICIPANTS Patients with late atrophic age-related macular degeneration imaged on the same day using both Spectralis OCT and Cirrus OCT. MAIN OUTCOME MEASURES Agreement between B-scan and en face OCT for the detection of hyperTDs, cRORA, and iRORA. METHODS Two independent graders examined en face OCT and structural OCT to determine the presence and location of hyperTDs, iRORA, and cRORA. RESULTS A total of 239 iRORA and cRORA lesions were detected on the B-scans, and 249 hyperTD lesions were identified on the en face OCT images. There was no significant difference (P = 0.88) in the number of lesions. There was also no significant difference between the 134 cRORA lesions identified on B-scans and the 131 hyperTDs detected on en face OCT images (P = 0.13). A total of 105 iRORA lesions were identified by B-scan assessment; however, 50 of these iRORA lesions met the criteria for persistent hyperTDs on en face OCT images (P < 0.001). When considering the topographic correspondence between B-scan and en face OCT detected lesions, the mean percentage of agreement between B-scan detection of cRORA lesions and en face OCT detection was 97.6% (P = 0.13). CONCLUSIONS We observed high overall agreement between cRORA lesions identified on B-scans and persistent hyperTDs identified on en face OCT. However, en face imaging was able to detect iRORA lesions that had a greatest linear dimension ≥ 250 μm in a nonhorizontal en face dimension. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Federico Corvi
- Doheny Eye Institute, University of California at Los Angeles, Los Angeles, California; Stein Eye Institute, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, California; Eye Clinic, Department of Biomedical and Clinical Science "Luigi Sacco", Sacco Hospital, University of Milan, Milan, Italy
- Giulia Corradetti
- Doheny Eye Institute, University of California at Los Angeles, Los Angeles, California; Stein Eye Institute, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, California
- Rita Laiginhas
- Department of Surgery and Physiology, Faculty of Medicine, University of Porto, Porto, Portugal; Centro Hospitalar e Universitário São João, Porto, Portugal; Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Jeremy Liu
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Giovanni Gregori
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Philip J Rosenfeld
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, Florida
- Srinivas R Sadda
- Doheny Eye Institute, University of California at Los Angeles, Los Angeles, California; Stein Eye Institute, David Geffen School of Medicine, University of California at Los Angeles, Los Angeles, California
16
Koseoglu ND, Grzybowski A, Liu TYA. Deep Learning Applications to Classification and Detection of Age-Related Macular Degeneration on Optical Coherence Tomography Imaging: A Review. Ophthalmol Ther 2023; 12:2347-2359. [PMID: 37493854 PMCID: PMC10441995 DOI: 10.1007/s40123-023-00775-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 07/14/2023] [Indexed: 07/27/2023] Open
Abstract
Age-related macular degeneration (AMD) is one of the leading causes of blindness in the elderly, particularly in developed countries. Optical coherence tomography (OCT) is a non-invasive imaging modality widely used for the diagnosis and management of AMD. Deep learning (DL) uses multilayered artificial neural networks (NNs) for feature extraction and is the cutting-edge technique for medical image analysis for diagnostic and prognostic purposes. The application of DL models to OCT image analysis has garnered significant interest in recent years. In this review, we aimed to summarize studies focusing on DL models used in the classification and detection of AMD. Additionally, we provide a brief introduction to other DL applications in AMD, such as segmentation, prediction/prognostication, and models trained on multimodal imaging.
Affiliation(s)
- Neslihan Dilruba Koseoglu
- Wilmer Eye Institute, Johns Hopkins University, 600 N. Wolfe St., Maumenee 726, Baltimore, MD, 21287, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- T Y Alvin Liu
- Wilmer Eye Institute, Johns Hopkins University, 600 N. Wolfe St., Maumenee 726, Baltimore, MD, 21287, USA
17
Subashchandrabose U, John R, Anbazhagu UV, Venkatesan VK, Thyluru Ramakrishna M. Ensemble Federated Learning Approach for Diagnostics of Multi-Order Lung Cancer. Diagnostics (Basel) 2023; 13:3053. [PMID: 37835796 PMCID: PMC10572651 DOI: 10.3390/diagnostics13193053] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2023] [Revised: 09/20/2023] [Accepted: 09/24/2023] [Indexed: 10/15/2023] Open
Abstract
The early detection and classification of lung cancer are crucial for improving patient outcomes. However, traditional classification methods are based on single machine learning models and are therefore limited by the availability and quality of data at a centralized computing server. In this paper, we propose an ensemble Federated Learning-based approach for multi-order lung cancer classification. This approach combines multiple machine learning models trained on different datasets, improving accuracy and generalization. Moreover, the Federated Learning approach enables the use of distributed data while ensuring data privacy and security. We evaluate the approach on a Kaggle cancer dataset and compare the results with traditional machine learning models. The results demonstrate an accuracy of 89.63% for lung cancer classification.
Affiliation(s)
- Rajan John
- Department of Computer Science, College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
- Usha Veerasamy Anbazhagu
- Department of Computing Technologies, School of Computing, Faculty of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Kattankulathur, Chennai 603203, India
- Vinoth Kumar Venkatesan
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore 632014, India
- Mahesh Thyluru Ramakrishna
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-Be University), Bangalore 560066, India
18
Leandro I, Lorenzo B, Aleksandar M, Dario M, Rosa G, Agostino A, Daniele T. OCT-based deep-learning models for the identification of retinal key signs. Sci Rep 2023; 13:14628. [PMID: 37670066 PMCID: PMC10480174 DOI: 10.1038/s41598-023-41362-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Accepted: 08/25/2023] [Indexed: 09/07/2023] Open
Abstract
A new system based on binary Deep Learning (DL) convolutional neural networks has been developed to recognize, on Optical Coherence Tomography (OCT) images, specific retinal abnormality signs that are useful for clinical practice. Images from the local hospital database were retrospectively selected from 2017 to 2022. Images were labeled by two retinal specialists and included central fovea cross-section OCTs. Nine models were developed using the Visual Geometry Group 16 (VGG16) architecture to distinguish healthy versus abnormal retinas and to identify eight different retinal abnormality signs. A total of 21,500 OCT images were screened, and 10,770 central fovea cross-section OCTs were included in the study. The system achieved high accuracy in identifying healthy retinas and specific pathological signs, ranging from 93 to 99%. Accurately detecting abnormal retinal signs from OCT images is crucial for patient care. This study aimed to identify specific signs related to retinal pathologies, aiding ophthalmologists in diagnosis. The high-accuracy system identified healthy retinas and pathological signs, making it a useful diagnostic aid. Labeling OCT images remains a challenge, but our approach reduces dataset creation time and shows the potential of DL models to improve ocular pathology diagnosis and clinical decision-making.
Affiliation(s)
- Inferrera Leandro
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Borsatti Lorenzo
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Marangoni Dario
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Giglio Rosa
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Accardo Agostino
- Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Tognetto Daniele
- Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
19
Matta S, Lamard M, Conze PH, Le Guilcher A, Lecat C, Carette R, Basset F, Massin P, Rottier JB, Cochener B, Quellec G. Towards population-independent, multi-disease detection in fundus photographs. Sci Rep 2023; 13:11493. [PMID: 37460629 DOI: 10.1038/s41598-023-38610-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Accepted: 07/11/2023] [Indexed: 07/20/2023] Open
Abstract
Independent validation studies of automatic diabetic retinopathy screening systems have recently shown a drop in screening performance on external data. Beyond diabetic retinopathy, this study investigates the generalizability of deep learning (DL) algorithms for screening various ocular anomalies in fundus photographs, across heterogeneous populations and imaging protocols. The following datasets are considered: OPHDIAT (France, diabetic population), OphtaMaine (France, general population), RIADD (India, general population), and ODIR (China, general population). Two multi-disease DL algorithms were developed: a Single-Dataset (SD) network, trained on the largest dataset (OPHDIAT), and a Multiple-Dataset (MD) network, trained on multiple datasets simultaneously. To assess their generalizability, both algorithms were evaluated both when training and test data originated from overlapping datasets and when they came from disjoint datasets. The SD network achieved a mean per-disease area under the receiver operating characteristic curve (mAUC) of 0.9571 on OPHDIAT. However, it generalized poorly to the other three datasets (mAUC < 0.9). When all four datasets were involved in training, the MD network significantly outperformed the SD network (p = 0.0058), indicating improved generalizability. However, in leave-one-dataset-out experiments, performance of the MD network was significantly lower on populations unseen during training than on populations involved in training (p < 0.0001), indicating imperfect generalizability.
Affiliation(s)
- Sarah Matta
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29200, France
- Mathieu Lamard
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29200, France
- Pierre-Henri Conze
- INSERM, UMR 1101, Brest, F-29200, France
- IMT Atlantique, Brest, F-29200, France
- Clément Lecat
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Fabien Basset
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Pascale Massin
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Jean-Bernard Rottier
- Bâtiment de consultation porte 14 Pôle Santé Sud CMCM, 28 Rue de Guetteloup, Le Mans, F-72100, France
- Béatrice Cochener
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29200, France
- Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
20
Wei W, Anantharanjit R, Patel RP, Cordeiro MF. Detection of macular atrophy in age-related macular degeneration aided by artificial intelligence. Expert Rev Mol Diagn 2023:1-10. [PMID: 37144908 DOI: 10.1080/14737159.2023.2208751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
INTRODUCTION Age-related macular degeneration (AMD) is a leading cause of irreversible visual impairment worldwide. The endpoint of AMD, in both its dry and wet forms, is macular atrophy (MA), which is characterized by the permanent loss of the retinal pigment epithelium (RPE) and overlying photoreceptors. A recognized unmet need in AMD is the early detection of MA development. AREAS COVERED Artificial intelligence (AI) has demonstrated great impact in the detection of retinal diseases, especially given its robust ability to analyze the big data afforded by ophthalmic imaging modalities, such as color fundus photography (CFP), fundus autofluorescence (FAF), near-infrared reflectance (NIR), and optical coherence tomography (OCT). Among these, OCT has shown great promise in identifying early MA using the criteria introduced in 2018. EXPERT OPINION There are few studies in which AI-OCT methods have been used to identify MA; however, results are very promising when compared to other imaging modalities. In this paper, we review the development and advances of ophthalmic imaging modalities and their combination with AI technology to detect MA in AMD. In addition, we emphasize the application of AI-OCT as an objective, cost-effective tool for the early detection and monitoring of the progression of MA in AMD.
Affiliation(s)
- Wei Wei
- Department of Ophthalmology, Ningbo Medical Center Lihuili Hospital, Ningbo, China
- Department of Surgery & Cancer, Imperial College London, London, UK
- Imperial College Ophthalmology Research Group (ICORG), London, UK
- Rajeevan Anantharanjit
- Imperial College Ophthalmology Research Group (ICORG), London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Radhika Pooja Patel
- Imperial College Ophthalmology Research Group (ICORG), London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Maria Francesca Cordeiro
- Department of Surgery & Cancer, Imperial College London, London, UK
- Imperial College Ophthalmology Research Group (ICORG), London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
21
Wang S, Wang Z, Vejalla S, Ganegoda A, Nittala MG, Sadda SR, Hu ZJ. Reverse engineering for reconstructing baseline features of dry age-related macular degeneration in optical coherence tomography. Sci Rep 2022; 12:22620. [PMID: 36587062 PMCID: PMC9805430 DOI: 10.1038/s41598-022-27140-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 12/27/2022] [Indexed: 01/01/2023] Open
Abstract
Age-related macular degeneration (AMD) is the most widespread cause of blindness, and the identification of baseline AMD features or biomarkers is critical for early intervention. Optical coherence tomography (OCT) imaging produces a 3D volume consisting of cross sections of retinal tissue, while fundus autofluorescence (FAF) imaging produces a 2D mapping of the retina. FAF has been a standard for assessing late-stage geographic atrophy (GA) in dry AMD, while OCT has also been used for assessing earlier AMD biomarkers. However, previous approaches have to a large extent defined AMD features subjectively, based on clinicians' observations. Deep learning, an objective artificial intelligence approach, may enable the discovery of 'true' salient AMD features. We develop a novel reverse engineering approach, based on the backbone of a fully convolutional neural network, to objectively identify and visualize early AMD biomarkers in OCT from baseline exams before significant atrophy occurs. Utilizing manually annotated GA regions on FAF from a follow-up visit as ground truth, we segment GA regions and reconstruct early AMD features in baseline OCT volumes. In this preliminary exploration, compared with the ground truth, we achieve a baseline GA segmentation accuracy of 0.95 and an overlap ratio of 0.65. The reconstructions consistently highlight that large drusen and drusen clusters, with or without mixed hyperreflective focus lesions, on baseline OCT lead to conversion to GA after 12 months, whereas hyperreflective focus lesions and subretinal drusenoid deposit lesions alone do not show such conversion after 12 months. Further research with a larger dataset would be needed to verify these findings.
Affiliation(s)
- Shuxian Wang
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
- Ziyuan Wang
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- Srimanasa Vejalla
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- Anushika Ganegoda
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- Muneeswar Gupta Nittala
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- SriniVas Reddy Sadda
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, 150 North Orange Grove Boulevard, Room 251, Pasadena, CA 91103, USA
22
The Need for Artificial Intelligence Based Risk Factor Analysis for Age-Related Macular Degeneration: A Review. Diagnostics (Basel) 2022; 13:diagnostics13010130. [PMID: 36611422 PMCID: PMC9818762 DOI: 10.3390/diagnostics13010130] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Revised: 12/16/2022] [Accepted: 12/22/2022] [Indexed: 01/04/2023] Open
Abstract
In epidemiology, a risk factor is a variable associated with increased disease risk. Understanding the role of risk factors is significant for developing a strategy to improve global health. There is strong evidence that risk factors like smoking, alcohol consumption, previous cataract surgery, age, high-density lipoprotein (HDL) cholesterol, BMI, female gender, and focal hyperpigmentation are independently associated with age-related macular degeneration (AMD). Currently, statistical techniques like logistic regression and multivariable logistic regression are being used in the literature to identify AMD risk factors by employing numerical/categorical data. However, artificial intelligence (AI) techniques have not so far been used in the literature to identify risk factors for AMD. On the other hand, AI-based tools can anticipate when a person is at risk of developing chronic diseases like cancer, dementia, and asthma, thereby supporting personalized care. AI-based techniques can employ numerical/categorical and/or image data, enabling multimodal data analysis, which motivates the use of AI-based tools for risk factor analysis in ophthalmology. This review summarizes the statistical techniques used to identify various risk factors and the additional benefits that AI techniques provide for AMD-related disease prediction. Additional studies are required to review different techniques for risk factor identification for other ophthalmic diseases like glaucoma, diabetic macular edema, retinopathy of prematurity, cataract, and diabetic retinopathy.
23
Schwartz R, Khalid H, Liakopoulos S, Ouyang Y, de Vente C, González-Gonzalo C, Lee AY, Guymer R, Chew EY, Egan C, Wu Z, Kumar H, Farrington J, Müller PL, Sánchez CI, Tufail A. A Deep Learning Framework for the Detection and Quantification of Reticular Pseudodrusen and Drusen on Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:3. [PMID: 36458946 PMCID: PMC9728496 DOI: 10.1167/tvst.11.12.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022] Open
Abstract
Purpose The purpose of this study was to develop and validate a deep learning (DL) framework for the detection and quantification of reticular pseudodrusen (RPD) and drusen on optical coherence tomography (OCT) scans. Methods A DL framework was developed consisting of a classification model and an out-of-distribution (OOD) detection model for the identification of ungradable scans; a classification model to identify scans with drusen or RPD; and an image segmentation model to independently segment lesions as RPD or drusen. Data were obtained from 1284 participants in the UK Biobank (UKBB) with a self-reported diagnosis of age-related macular degeneration (AMD) and 250 UKBB controls. Drusen and RPD were manually delineated by five retina specialists. The main outcome measures were sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), kappa, accuracy, intraclass correlation coefficient (ICC), and free-response receiver operating characteristic (FROC) curves. Results The classification models performed strongly at their respective tasks (0.95, 0.93, and 0.99 AUC, respectively, for the ungradable scans classifier, the OOD model, and the drusen and RPD classification models). The mean ICC for the drusen and RPD area versus graders was 0.74 and 0.61, respectively, compared with 0.69 and 0.68 for intergrader agreement. FROC curves showed that the model's sensitivity was close to human performance. Conclusions The models achieved high classification and segmentation performance, similar to human performance. Translational Relevance Application of this robust framework will further our understanding of RPD as a separate entity from drusen in both research and clinical settings.
Affiliation(s)
- Roy Schwartz
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Health Informatics, University College London, London, UK
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Hagar Khalid
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Tanta University Hospital, Tanta, Egypt
- Sandra Liakopoulos
- Cologne Image Reading Center, Department of Ophthalmology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Department of Ophthalmology, Goethe University, Frankfurt, Germany
- Yanling Ouyang
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Coen de Vente
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboud UMC, Nijmegen, The Netherlands
- Cristina González-Gonzalo
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboud UMC, Nijmegen, The Netherlands
- Aaron Y. Lee
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Robyn Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Emily Y. Chew
- National Eye Institute (NEI), National Institutes of Health (NIH), Bethesda, MD, USA
- Catherine Egan
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Himeesh Kumar
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Australia
- Ophthalmology, Department of Surgery, The University of Melbourne, Melbourne, Australia
- Joseph Farrington
- Institute of Health Informatics, University College London, London, UK
- Philipp L. Müller
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Makula Center, Südblick Eye Centers, Augsburg, Germany
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Clara I. Sánchez
- Quantitative Healthcare Analysis (qurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam UMC location University of Amsterdam, Biomedical Engineering and Physics, Amsterdam, The Netherlands
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
24
Agrón E, Domalpally A, Cukras CA, Clemons TE, Chen Q, Lu Z, Chew EY, Keenan TDL. Reticular Pseudodrusen: The Third Macular Risk Feature for Progression to Late Age-Related Macular Degeneration: Age-Related Eye Disease Study 2 Report 30. Ophthalmology 2022; 129:1107-1119. [PMID: 35660417 PMCID: PMC9509418 DOI: 10.1016/j.ophtha.2022.05.021] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Received: 02/21/2022] [Revised: 05/17/2022] [Accepted: 05/25/2022] [Indexed: 10/18/2022]
Abstract
PURPOSE To analyze reticular pseudodrusen (RPD) as an independent risk factor for progression to late age-related macular degeneration (AMD), alongside traditional macular risk factors (soft drusen and pigmentary abnormalities) considered simultaneously.
DESIGN Post hoc analysis of 2 clinical trial cohorts: Age-Related Eye Disease Study (AREDS) and AREDS2.
PARTICIPANTS Eyes with no late AMD at baseline in AREDS (6959 eyes, 3780 participants) and AREDS2 (3355 eyes, 2056 participants).
METHODS Color fundus photographs (CFPs) from annual visits were graded for soft drusen, pigmentary abnormalities, and late AMD. Presence of RPD was determined from grading of fundus autofluorescence images (AREDS2) and deep learning grading of CFPs (AREDS). Proportional hazards regression analyses were performed, considering AREDS AMD severity scales (modified simplified severity scale [person] and 9-step scale [eye]) and RPD presence simultaneously.
MAIN OUTCOME MEASURES Progression to late AMD, geographic atrophy (GA), and neovascular AMD.
RESULTS In AREDS, for late AMD analyses by person, in a model considering the simplified severity scale simultaneously, RPD presence was associated with a higher risk of progression: hazard ratio (HR), 2.15 (95% confidence interval [CI], 1.75-2.64). However, the risk associated with RPD presence differed at different severity scale levels: HR, 3.23 (95% CI, 1.60-6.51), HR, 3.81 (95% CI, 2.38-6.10), HR, 2.28 (95% CI, 1.59-3.27), and HR, 1.64 (95% CI, 1.20-2.24), at levels 0-1, 2, 3, and 4, respectively. Considering the 9-step scale (by eye), RPD presence was associated with higher risk: HR, 2.54 (95% CI, 2.07-3.13). The HRs were 5.11 (95% CI, 3.93-6.66) at levels 1-6 and 1.78 (95% CI, 1.43-2.22) at levels 7 and 8. In AREDS2, by person, RPD presence was not associated with higher risk: HR, 1.18 (95% CI, 0.90-1.56); by eye, it was HR, 1.57 (95% CI, 1.31-1.89). In both cohorts, RPD presence carried a higher risk for GA than neovascular AMD.
CONCLUSIONS Reticular pseudodrusen represent an important risk factor for progression to late AMD, particularly GA. However, the added risk varies markedly by severity level, with highly increased risk at lower/moderate levels and less increased risk at higher levels. Reticular pseudodrusen status should be included in updated AMD classification systems, risk calculators, and clinical trials.
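The hazard ratios above are reported with 95% confidence intervals that are symmetric on the log scale. A minimal sketch, using only the published numbers (HR 2.15; 95% CI, 1.75-2.64), of how the implied standard error of log(HR) can be back-calculated and the interval reconstructed; this illustrates the arithmetic only, not a re-fit of the study's proportional hazards model.

```python
import math

hr, lo, hi = 2.15, 1.75, 2.64           # reported HR and 95% CI bounds
beta = math.log(hr)                      # log hazard ratio
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # implied SE of log(HR)
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))
print(round(ci[0], 2), round(ci[1], 2))  # -> 1.75 2.64
```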
Affiliation(s)
- Elvira Agrón
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison School of Medicine and Public Health, Madison, Wisconsin
- Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health (NIH), Bethesda, Maryland
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health (NIH), Bethesda, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
25
Wang Z, Sadda SR, Lee A, Hu ZJ. Automated segmentation and feature discovery of age-related macular degeneration and Stargardt disease via self-attended neural networks. Sci Rep 2022; 12:14565. [PMID: 36028647 PMCID: PMC9418226 DOI: 10.1038/s41598-022-18785-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Received: 04/08/2022] [Accepted: 08/18/2022] [Indexed: 11/09/2022]
Abstract
Age-related macular degeneration (AMD) and Stargardt disease are the leading causes of blindness in the elderly and in young adults, respectively. Geographic atrophy (GA) in AMD and Stargardt atrophy are their end-stage outcomes. Efficient methods for segmentation and quantification of these atrophic lesions are critical for clinical research. In this study, we developed a deep convolutional neural network (CNN) with a trainable self-attention mechanism for accurate GA and Stargardt atrophy segmentation. Compared with traditional post hoc attention mechanisms, which can only visualize CNN features, our self-attention mechanism is embedded in a fully convolutional network and directly involved in training the CNN to actively attend to key features for enhanced algorithm performance. We applied the self-attended CNN to the segmentation of AMD and Stargardt atrophic lesions on fundus autofluorescence (FAF) images. Compared with a preexisting regular fully convolutional network (the U-Net), our self-attended CNN achieved a 10.6% higher Dice coefficient and a 17% higher IoU (intersection over union) for AMD GA segmentation, and a 22% higher Dice coefficient and a 32% higher IoU for Stargardt atrophy segmentation. With longitudinal image data acquired over longer time periods, the developed self-attention mechanism can also be applied to the visual discovery of early AMD and Stargardt features.
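The Dice coefficient and IoU reported above are closely related overlap measures (IoU = Dice / (2 - Dice)). A minimal sketch with binary segmentation masks represented as sets of pixel coordinates; the tiny masks are illustrative only, not segmentations from the study.

```python
def dice(a, b):
    """Dice coefficient between two pixel-coordinate sets."""
    return 2 * len(a & b) / (len(a) + len(b))

def iou(a, b):
    """Intersection over union between two pixel-coordinate sets."""
    return len(a & b) / len(a | b)

pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical prediction
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}   # hypothetical ground truth
d, j = dice(pred, truth), iou(pred, truth)
print(d, j)                                # -> 0.75 0.6
assert abs(j - d / (2 - d)) < 1e-12        # identity linking the two metrics
```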
Affiliation(s)
- Ziyuan Wang
- Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA
- The University of California, Los Angeles, CA, 90095, USA
- Srinivas Reddy Sadda
- Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA
- The University of California, Los Angeles, CA, 90095, USA
- Aaron Lee
- The University of Washington, Seattle, WA, 98195, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, 150 N Orange Grove Blvd, Pasadena, 91103, USA
26
Charng J, Alam K, Swartz G, Kugelman J, Alonso-Caneiro D, Mackey DA, Chen FK. Deep learning: applications in retinal and optic nerve diseases. Clin Exp Optom 2022:1-10. [PMID: 35999058 DOI: 10.1080/08164622.2022.2111201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/15/2022]
Abstract
Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-defined rules, DL works by exposing an algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e., learn) by adjusting the parameters inside the model (network) during a training process in order to complete the task on its own. One major limitation of traditional programming is that complex tasks may require an extensive set of rules to complete accurately. Additionally, traditional programming can be susceptible to human bias arising from programmer experience. With the dramatic increase in the amount and complexity of clinical data, DL has been utilized to automate data analysis and thus assist clinicians in patient management. This review presents the latest advances in DL for managing posterior eye diseases, as well as DL-based solutions for patients with vision loss.
Affiliation(s)
- Jason Charng
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Khyber Alam
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Gavin Swartz
- Department of Optometry, School of Allied Health, University of Western Australia, Perth, Australia
- Jason Kugelman
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David Alonso-Caneiro
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- School of Optometry and Vision Science, Queensland University of Technology, Brisbane, Australia
- David A Mackey
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Fred K Chen
- Centre of Ophthalmology and Visual Science (incorporating Lions Eye Institute), University of Western Australia, Perth, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
- Department of Ophthalmology, Royal Perth Hospital, Western Australia, Perth, Australia
27
Ma Z, Xie Q, Xie P, Fan F, Gao X, Zhu J. HCTNet: A Hybrid ConvNet-Transformer Network for Retinal Optical Coherence Tomography Image Classification. Biosensors 2022; 12:542. [PMID: 35884345 PMCID: PMC9313149 DOI: 10.3390/bios12070542] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 06/09/2022] [Revised: 07/13/2022] [Accepted: 07/18/2022] [Indexed: 06/15/2023]
Abstract
Automatic and accurate optical coherence tomography (OCT) image classification is of great significance to computer-assisted diagnosis of retinal disease. In this study, we propose a hybrid ConvNet-Transformer network (HCTNet) and verify the feasibility of a Transformer-based method for retinal OCT image classification. The HCTNet first utilizes a low-level feature extraction module based on the residual dense block to generate low-level features for facilitating the network training. Then, two parallel branches of the Transformer and the ConvNet are designed to exploit the global and local context of the OCT images. Finally, a feature fusion module based on an adaptive re-weighting mechanism is employed to combine the extracted global and local features for predicting the category of OCT images in the testing datasets. The HCTNet combines the advantage of the convolutional neural network in extracting local features and the advantage of the vision Transformer in establishing long-range dependencies. Verification on two public retinal OCT datasets shows that our HCTNet achieves overall accuracies of 91.56% and 86.18%, respectively, outperforming the pure ViT and several ConvNet-based classification methods.
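The abstract's feature fusion module combines global (Transformer) and local (ConvNet) features through adaptive re-weighting. One common way to realize such a module, sketched here only as an illustrative guess and not the actual HCTNet implementation, is to normalize learnable gate scalars with a softmax and take a weighted mix of the two feature streams.

```python
import math

def softmax(xs):
    m = max(xs)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(global_feat, local_feat, gates):
    """Mix two equal-length feature vectors with softmax-normalized gates."""
    w_g, w_l = softmax(gates)
    return [w_g * g + w_l * l for g, l in zip(global_feat, local_feat)]

# Equal gates reduce the fusion to a plain average of the two streams.
print(fuse([1.0, 0.0], [0.0, 1.0], gates=[0.0, 0.0]))  # -> [0.5, 0.5]
```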
Affiliation(s)
- Zongqing Ma
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China; (Z.M.); (Q.X.); (F.F.)
- Beijing Laboratory of Biomedical Testing Technology and Instruments, Beijing Information Science and Technology University, Beijing 100192, China
- Qiaoxue Xie
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Beijing Laboratory of Biomedical Testing Technology and Instruments, Beijing Information Science and Technology University, Beijing 100192, China
- Pinxue Xie
- Beijing Anzhen Hospital, Capital Medical University, Beijing 100029, China
- Fan Fan
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Beijing Laboratory of Biomedical Testing Technology and Instruments, Beijing Information Science and Technology University, Beijing 100192, China
- Xinxiao Gao
- Beijing Anzhen Hospital, Capital Medical University, Beijing 100029, China
- Jiang Zhu
- Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University, Beijing 100192, China
- Beijing Laboratory of Biomedical Testing Technology and Instruments, Beijing Information Science and Technology University, Beijing 100192, China
28
Yaghy A, Lee AY, Keane PA, Keenan TDL, Mendonca LSM, Lee CS, Cairns AM, Carroll J, Chen H, Clark J, Cukras CA, de Sisternes L, Domalpally A, Durbin MK, Goetz KE, Grassmann F, Haines JL, Honda N, Hu ZJ, Mody C, Orozco LD, Owsley C, Poor S, Reisman C, Ribeiro R, Sadda SR, Sivaprasad S, Staurenghi G, Ting DS, Tumminia SJ, Zalunardo L, Waheed NK. Artificial intelligence-based strategies to identify patient populations and advance analysis in age-related macular degeneration clinical trials. Exp Eye Res 2022; 220:109092. [PMID: 35525297 PMCID: PMC9405680 DOI: 10.1016/j.exer.2022.109092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/27/2022] [Revised: 03/18/2022] [Accepted: 04/20/2022] [Indexed: 11/04/2022]
Affiliation(s)
- Antonio Yaghy
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Karalis Johnson Retina Center, Seattle, WA, USA
- Pearse A Keane
- Moorfields Eye Hospital & UCL Institute of Ophthalmology, London, UK
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Karalis Johnson Retina Center, Seattle, WA, USA
- Joseph Carroll
- Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, 925 N 87th Street, Milwaukee, WI, 53226, USA
- Hao Chen
- Genentech, South San Francisco, CA, USA
- Catherine A Cukras
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Amitha Domalpally
- Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Kerry E Goetz
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Jonathan L Haines
- Department of Population and Quantitative Health Sciences, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Cleveland Institute of Computational Biology, Case Western Reserve University School of Medicine, Cleveland, OH, USA
- Zhihong Jewel Hu
- Doheny Eye Institute, University of California, Los Angeles, CA, USA
- Luz D Orozco
- Department of Bioinformatics, Genentech, South San Francisco, CA, 94080, USA
- Cynthia Owsley
- Department of Ophthalmology and Visual Sciences, Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL, USA
- Stephen Poor
- Department of Ophthalmology, Novartis Institutes for Biomedical Research, Cambridge, MA, USA
- Srinivas R Sadda
- Doheny Eye Institute, David Geffen School of Medicine, University of California-Los Angeles, Los Angeles, CA, USA
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, London, UK
- Giovanni Staurenghi
- Department of Biomedical and Clinical Sciences Luigi Sacco, Luigi Sacco Hospital, University of Milan, Italy
- Daniel Sw Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore
- Santa J Tumminia
- Office of the Director, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nadia K Waheed
- New England Eye Center, Tufts University Medical Center, Boston, MA, USA
29
Alexopoulos P, Madu C, Wollstein G, Schuman JS. The Development and Clinical Application of Innovative Optical Ophthalmic Imaging Techniques. Front Med (Lausanne) 2022; 9:891369. [PMID: 35847772 PMCID: PMC9279625 DOI: 10.3389/fmed.2022.891369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/07/2022] [Accepted: 05/23/2022] [Indexed: 11/22/2022]
Abstract
The field of ophthalmic imaging has grown substantially over recent years. Massive improvements in image processing and computer hardware have allowed the emergence of multiple imaging techniques of the eye that can transform patient care. The purpose of this review is to describe the most recent advances in eye imaging and explain how new technologies and imaging methods can be utilized in a clinical setting. The introduction of optical coherence tomography (OCT) was a revolution in eye imaging, and it has since become the standard of care for a plethora of conditions. Its most recent iterations, OCT angiography and visible-light OCT, as well as imaging modalities such as fluorescence lifetime imaging ophthalmoscopy, allow a more thorough evaluation of patients and provide additional information on disease processes. Toward that goal, the application of adaptive optics (AO) and full-field scanning to a variety of eye imaging techniques has further allowed the histologic study of single cells in the retina and anterior segment. Toward the goal of remote eye care and more accessible eye imaging, methods such as handheld OCT devices and imaging through smartphones have emerged. Finally, incorporating artificial intelligence (AI) in eye imaging has the potential to become a new milestone for the field while also contributing to the social aspects of eye care.
Affiliation(s)
- Palaiologos Alexopoulos
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Chisom Madu
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Gadi Wollstein
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Joel S. Schuman
- Department of Ophthalmology, NYU Langone Health, NYU Grossman School of Medicine, New York, NY, United States
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
- Center for Neural Science, College of Arts & Science, New York University, New York, NY, United States
- Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY, United States
30
Dow ER, Keenan TDL, Lad EM, Lee AY, Lee CS, Loewenstein A, Eydelman MB, Chew EY, Keane PA, Lim JI. From Data to Deployment: The Collaborative Community on Ophthalmic Imaging Roadmap for Artificial Intelligence in Age-Related Macular Degeneration. Ophthalmology 2022; 129:e43-e59. [PMID: 35016892 PMCID: PMC9859710 DOI: 10.1016/j.ophtha.2022.01.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Received: 09/17/2021] [Revised: 12/16/2021] [Accepted: 01/04/2022] [Indexed: 01/25/2023]
Abstract
OBJECTIVE Health care systems worldwide are challenged to provide adequate care for the 200 million individuals with age-related macular degeneration (AMD). Artificial intelligence (AI) has the potential to make a significant, positive impact on the diagnosis and management of patients with AMD; however, the development of effective AI devices for clinical care faces numerous considerations and challenges, a fact evidenced by a current absence of Food and Drug Administration (FDA)-approved AI devices for AMD.
PURPOSE To delineate the state of AI for AMD, including current data, standards, achievements, and challenges.
METHODS Members of the Collaborative Community on Ophthalmic Imaging Working Group for AI in AMD attended an inaugural meeting on September 7, 2020, to discuss the topic. Subsequently, they undertook a comprehensive review of the medical literature relevant to the topic. Members engaged in meetings and discussion through December 2021 to synthesize the information and arrive at a consensus.
RESULTS Existing infrastructure for robust AI development for AMD includes several large, labeled data sets of color fundus photography and OCT images; however, image data often do not contain the metadata necessary for the development of reliable, valid, and generalizable models. Data sharing for AMD model development is made difficult by restrictions on data privacy and security, although potential solutions are under investigation. Computing resources may be adequate for current applications, but knowledge of machine learning development may be scarce in many clinical ophthalmology settings. Despite these challenges, researchers have produced promising AI models for AMD for screening, diagnosis, prediction, and monitoring. Future goals include defining benchmarks to facilitate regulatory authorization and subsequent clinical setting generalization.
CONCLUSIONS Delivering an FDA-authorized, AI-based device for clinical care in AMD involves numerous considerations, including the identification of an appropriate clinical application; acquisition and development of a large, high-quality data set; development of the AI architecture; training and validation of the model; and functional interactions between the model output and clinical end user. The research efforts undertaken to date represent starting points for the medical devices that eventually will benefit providers, health care systems, and patients.
Affiliation(s)
- Eliot R Dow
- Byers Eye Institute, Stanford University, Palo Alto, California
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington
- Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Medical Center, Tel Aviv, Israel
- Malvina B Eydelman
- Office of Health Technology 1, Center of Devices and Radiological Health, Food and Drug Administration, Silver Spring, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Jennifer I Lim
- Department of Ophthalmology, University of Illinois at Chicago, Chicago, Illinois
31
Saßmannshausen M, Thiele S, Behning C, Pfau M, Schmid M, Leal S, Luhmann UFO, Finger RP, Holz FG, Schmitz-Valckenberg S. Intersession Repeatability of Structural Biomarkers in Early and Intermediate Age-Related Macular Degeneration: A MACUSTAR Study Report. Transl Vis Sci Technol 2022; 11:27. [PMID: 35333287 PMCID: PMC8963672 DOI: 10.1167/tvst.11.3.27] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Indexed: 12/31/2022]
Abstract
Purpose To analyze the intersession repeatability of structural biomarkers in eyes with early and intermediate age-related macular degeneration (iAMD) within the cross-sectional part of the observational multicenter MACUSTAR study.
Methods Certified site personnel obtained multimodal imaging data at two visits (38 ± 20 [mean ± standard deviation] days apart), including spectral-domain optical coherence tomography (SD-OCT). One junior reader performed systematic and blinded grading at the central reading center, followed by senior reader review. Structural biomarkers included maximum drusen size classification (>63 to ≤125 µm vs. >125 µm), presence of large pigment epithelium detachments (PEDs), reticular pseudodrusen (RPD), vitelliform lesions, and refractile deposits. Intersession agreement was assessed using Cohen's κ statistics.
Results At the first visit, 202 study eyes of 202 participants were graded as manifesting with either early (n = 34) or intermediate (n = 168) AMD. Grading of imaging data between visits revealed almost perfect agreement for the maximum drusen size classification (κ = 0.817; 95% confidence interval, 0.70–0.94). In iAMD eyes, almost perfect to substantial agreement was determined for the presence of large PEDs (0.87; 0.69–1.00) and RPD (0.752; 0.63–0.87), while intersession agreement was lower for the presence of vitelliform lesions (0.649; 0.39–0.65) and refractile deposits (0.342; −0.029–0.713), respectively.
Conclusions Multimodal retinal imaging analysis between sessions showed higher repeatability for structural biomarkers with predefined cutoff values than for purely qualitatively defined parameters.
Translational Relevance A high repeatability of retinal imaging biomarkers will be important to implement automatic grading approaches and to establish robust and meaningful structural clinical endpoints for future interventional clinical trials in patients with iAMD.
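Cohen's κ, used for the agreement statistics above, discounts the agreement expected by chance. A minimal sketch for two visits' binary presence/absence calls; the label sequences are invented for illustration and are not study data.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

visit1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # hypothetical presence calls
visit2 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(round(cohens_kappa(visit1, visit2), 3))  # -> 0.8
```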
Affiliation(s)
- Marlene Saßmannshausen
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, University of Bonn, Bonn, Germany
- Sarah Thiele
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, University of Bonn, Bonn, Germany
- Charlotte Behning
- Institute of Medical Biometry, Informatics and Epidemiology, Medical Faculty, University of Bonn, Bonn, Germany
- Maximilian Pfau
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, University of Bonn, Bonn, Germany
- Ophthalmic Genetics and Visual Function Branch, National Eye Institute, Bethesda, MD, USA
- Matthias Schmid
- Institute of Medical Biometry, Informatics and Epidemiology, Medical Faculty, University of Bonn, Bonn, Germany
- Ulrich F O Luhmann
- Roche Pharmaceutical Research and Early Development, Translational Medicine Ophthalmology, Roche Innovation Center Basel, Basel, Switzerland
- Robert P Finger
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- Frank G Holz
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, University of Bonn, Bonn, Germany
- Steffen Schmitz-Valckenberg
- Department of Ophthalmology, University of Bonn, Bonn, Germany
- GRADE Reading Center, University of Bonn, Bonn, Germany
- John A. Moran Eye Center, Department of Ophthalmology & Visual Sciences, University of Utah, Salt Lake City, UT, USA
32
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. [PMID: 34897234 DOI: 10.1097/opx.0000000000001845] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/26/2022]
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With the growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant to scaling up screening and facing the shortage of ophthalmic expertise.
PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations.
METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. Our patient cohorts consist of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network in Le Mans, France. Each data set was divided into a development subset and a test subset of more than 4000 examinations each. For ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset.
RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the receiver operating characteristic curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. For the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248.
CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography.
Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
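The "sensitivity at the ophthalmologist's specificity" comparison reported above can be sketched as follows. This is an illustrative Python sketch, not the study's code; the function name and the threshold sweep are assumptions.

```python
def sensitivity_at_specificity(scores, labels, target_specificity):
    """Best sensitivity achievable while keeping specificity at or above
    the target. labels: 1 = anomalous, 0 = normal; higher score = more
    anomalous. Sweeps every observed score as a decision threshold."""
    best = None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        specificity = tn / (tn + fp)
        sensitivity = tp / (tp + fn)
        if specificity >= target_specificity and (best is None or sensitivity > best):
            best = sensitivity
    return best
```

In the study's setting, the target specificity would be the second ophthalmologist's 0.8648, and the reported 0.8248 would be the algorithm's sensitivity at that operating point.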
Collapse
Affiliation(s)
| | - Pascale Massin
- Ophtalmology Department, Lariboisière Hospital, APHP, Paris, France
| |
Collapse
|
33
|
Chueh KM, Hsieh YT, Chen HH, Ma IH, Huang SL. Identification of Sex and Age from Macular Optical Coherence Tomography and Feature Analysis Using Deep Learning. Am J Ophthalmol 2022; 235:221-228. [PMID: 34582766 DOI: 10.1016/j.ajo.2021.09.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 09/09/2021] [Accepted: 09/15/2021] [Indexed: 11/01/2022]
Abstract
PURPOSE To develop deep learning models for identification of sex and age from macular optical coherence tomography (OCT) and to analyze the features for differentiation of sex and age. DESIGN Algorithm development using a database of macular OCT. METHODS We reviewed 6147 sets of macular OCT images from the healthy eyes of 3134 individuals from a single eye center in Taiwan. Deep learning-based algorithms were used to develop models for the identification of sex and age, and 10-fold cross-validation was applied. Gradient-weighted class activation mapping was used for feature analysis. RESULTS The accuracy for sex prediction using deep learning from macular OCT was 85.6% ± 2.1% compared with an accuracy of 61.9% using macular thickness and 61.4% ± 4.0% using deep learning from infrared fundus photography (P < .001 for both). The mean absolute error for age prediction using deep learning from macular OCT was 5.78 ± 0.29 years. A thorough analysis of the prediction accuracy and the gradient-weighted class activation mapping showed that the cross-sectional foveal contour led to a better sex distinction than macular thickness or fundus photography, and that the age-related characteristics of the macula were found across all retinal layers rather than in the choroid. CONCLUSIONS Sex and age could be identified from macular OCT using deep learning with good accuracy. The main sex difference in the macula lies in the foveal contour, and all retinal layers change with aging. These novel findings provide useful information for further investigation into the pathogenesis of sex- and age-related macular structural diseases.
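The 10-fold cross-validation mentioned above partitions the data so that every image set is tested exactly once. A minimal sketch of the index partitioning (illustrative only; `ten_fold_splits` is a hypothetical helper, and in practice splits should be grouped per patient so images of one individual never span train and test):

```python
def ten_fold_splits(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Fold sizes differ by at most one when k does not divide n_samples."""
    idx = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = idx[start:start + size]            # held-out fold
        train = idx[:start] + idx[start + size:]  # everything else
        yield train, test
        start += size
```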
Collapse
|
34
|
Estai M, Tennant M, Gebauer D, Brostek A, Vignarajan J, Mehdizadeh M, Saha S. Evaluation of a deep learning system for automatic detection of proximal surface dental caries on bitewing radiographs. Oral Surg Oral Med Oral Pathol Oral Radiol 2022; 134:262-270. [DOI: 10.1016/j.oooo.2022.03.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 02/23/2022] [Accepted: 03/12/2022] [Indexed: 01/11/2023]
|
35
|
Thakoor KA, Yao J, Bordbar D, Moussa O, Lin W, Sajda P, Chen RWS. A multimodal deep learning system to distinguish late stages of AMD and to compare expert vs. AI ocular biomarkers. Sci Rep 2022; 12:2585. [PMID: 35173191 PMCID: PMC8850456 DOI: 10.1038/s41598-022-06273-w] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Accepted: 01/24/2022] [Indexed: 01/08/2023] Open
Abstract
Within the next 1.5 decades, 1 in 7 U.S. adults is anticipated to suffer from age-related macular degeneration (AMD), a degenerative retinal disease which leads to blindness if untreated. Optical coherence tomography angiography (OCTA) has become a prime technique for AMD diagnosis, specifically for late-stage neovascular (NV) AMD. Such technologies generate massive amounts of data, challenging to parse by experts alone, transforming artificial intelligence into a valuable partner. We describe a deep learning (DL) approach which achieves multi-class detection of non-AMD vs. non-neovascular (NNV) AMD vs. NV AMD from a combination of OCTA, OCT structure, 2D b-scan flow images, and high definition (HD) 5-line b-scan cubes; DL also detects ocular biomarkers indicative of AMD risk. Multimodal data were used as input to 2D-3D Convolutional Neural Networks (CNNs). Both for CNNs and experts, choroidal neovascularization and geographic atrophy were found to be important biomarkers for AMD. CNNs predict biomarkers with accuracy up to 90.2% (positive-predictive-value up to 75.8%). Just as experts rely on multimodal data to diagnose AMD, CNNs also performed best when trained on multiple inputs combined. Detection of AMD and its biomarkers from OCTA data via CNNs has tremendous potential to expedite screening of early and late-stage AMD patients.
Collapse
Affiliation(s)
- Kaveri A Thakoor
- Department of Biomedical Engineering, Columbia University, New York, 10027, USA.
| | - Jiaang Yao
- Department of Electrical Engineering, Columbia University, New York, 10027, USA
| | - Darius Bordbar
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
| | - Omar Moussa
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
| | - Weijie Lin
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
| | - Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, 10027, USA
- Department of Electrical Engineering, Columbia University, New York, 10027, USA
- Department of Radiology (Physics), Columbia University, New York, 10027, USA
| | - Royce W S Chen
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York, 10032, USA
| |
Collapse
|
36
|
Rahman L, Hafejee A, Anantharanjit R, Wei W, Cordeiro MF. Accelerating precision ophthalmology: recent advances. Expert Review of Precision Medicine and Drug Development 2022. [DOI: 10.1080/23808993.2022.2154146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Affiliation(s)
- Loay Rahman
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
| | - Ammaarah Hafejee
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
| | - Rajeevan Anantharanjit
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
| | - Wei Wei
- Imperial College Ophthalmology Research Group (ICORG), Imperial College Healthcare NHS Trust, London, UK
- The Imperial College Ophthalmic Research Group (ICORG), Imperial College London, London, UK
| |
Collapse
|
37
|
Sarici K, Abraham JR, Sevgi DD, Lunasco L, Srivastava SK, Whitney J, Cetin H, Hanumanthu A, Bell JM, Reese JL, Ehlers JP. Risk Classification for Progression to Subfoveal Geographic Atrophy in Dry Age-Related Macular Degeneration Using Machine Learning-Enabled Outer Retinal Feature Extraction. Ophthalmic Surg Lasers Imaging Retina 2022; 53:31-39. [PMID: 34982004 DOI: 10.3928/23258160-20211210-01] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
BACKGROUND AND OBJECTIVE To evaluate the utility of spectral-domain optical coherence tomography biomarkers to predict the development of subfoveal geographic atrophy (sfGA). PATIENTS AND METHODS This was a retrospective cohort analysis including 137 individuals with dry age-related macular degeneration without sfGA with 5 years of follow-up. Multiple spectral-domain optical coherence tomography quantitative metrics were generated, including ellipsoid zone (EZ) integrity and subretinal pigment epithelium (sub-RPE) compartment features. RESULTS Reduced mean EZ-RPE central subfield thickness and increased sub-RPE compartment thickness were significantly different between sfGA convertors and nonconvertors at baseline in both 2-year and 5-year sfGA risk assessment. Longitudinal change assessment showed a significantly higher degradation of EZ integrity in sfGA convertors. The predictive performance of a machine learning classification model based on 5-year and 2-year risk conversion to sfGA demonstrated an area under the receiver operating characteristic curve of 0.92 ± 0.06 and 0.96 ± 0.04, respectively. CONCLUSIONS Quantitative outer retinal and sub-RPE feature assessment using a machine learning-enabled retinal segmentation platform provides multiple parameters that are associated with progression to sfGA. [Ophthalmic Surg Lasers Imaging. 2022;53:31-39.].
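The area under the receiver operating characteristic curve reported above equals the probability that a randomly chosen convertor is scored higher than a randomly chosen nonconvertor (ties counting half). A minimal sketch of that equivalence (illustrative Python, not the study's pipeline; the function name is an assumption):

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive outranks the negative.
    labels: 1 = sfGA convertor, 0 = nonconvertor."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```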
Collapse
|
38
|
Wang Z, Keane PA, Chiang M, Cheung CY, Wong TY, Ting DSW. Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
39
|
Thomas A, Harikrishnan PM, Ramachandran R, Ramachandran S, Manoj R, Palanisamy P, Gopi VP. A novel multiscale and multipath convolutional neural network based age-related macular degeneration detection using OCT images. Comput Methods Programs Biomed 2021; 209:106294. [PMID: 34364184 DOI: 10.1016/j.cmpb.2021.106294] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Accepted: 07/15/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Age-related Macular Degeneration (AMD) is a significant retinal disease affecting older people. Its early stage blurs vision, and its advanced stage leads to central vision loss; many people overlook the early-stage blurring until the disease has progressed. There is no treatment that cures the disease, so early detection of AMD is essential to prevent its extension into the advanced stage. This paper proposes a novel deep Convolutional Neural Network (CNN) architecture to automate early AMD diagnosis from Optical Coherence Tomography (OCT) images. METHODS The proposed architecture is a multiscale and multipath CNN with six convolutional layers. The multiscale convolution layer permits the network to produce many local structures with various filter dimensions. The multipath feature extraction permits the CNN to merge more features regarding the sparse local and fine global structures. The performance of the proposed architecture is evaluated through ten-fold cross-validation using different classifiers such as support vector machine, multi-layer perceptron, and random forest. RESULTS The proposed CNN with the random forest classifier gives the best classification accuracy. Tested on data sets 1, 2, 3, and 4, the proposed method achieves accuracies of 0.9666, 0.9897, 0.9974, and 0.9978, respectively, with the random forest classifier; on the combination of the first three data sets it achieves an accuracy of 0.9902. CONCLUSIONS An efficient algorithm for detecting AMD from OCT images is proposed based on a multiscale and multipath CNN architecture. Comparison with other approaches produced results that exhibit the efficiency of the proposed algorithm in the detection of AMD. Owing to its low complexity and few learnable parameters, the proposed architecture can be applied to rapid screening of the eye for early detection of AMD.
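The multiscale idea above, applying filters of several sizes to the same input and merging the resulting feature maps, can be illustrated in one dimension. This is a hand-rolled sketch with assumed function names, not the paper's six-layer CNN:

```python
def conv1d_valid(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * w for j, w in enumerate(kernel)) for i in range(n)]

def multiscale_features(signal, kernels):
    """Apply filters of different sizes to the same input and concatenate
    the resulting feature maps, mimicking a multiscale convolution layer."""
    feats = []
    for k in kernels:
        feats.extend(conv1d_valid(signal, k))
    return feats
```

A real multiscale layer would additionally learn the kernel weights and pad the maps to a common length before merging; the sketch only shows the merge of multiple filter scales.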
Collapse
Affiliation(s)
- Anju Thomas
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - P M Harikrishnan
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Rajiv Ramachandran
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Srikkanth Ramachandran
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Rigved Manoj
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - P Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Varun P Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| |
Collapse
|
40
|
Fang V, Gomez-Caraballo M, Lad EM. Biomarkers for Nonexudative Age-Related Macular Degeneration and Relevance for Clinical Trials: A Systematic Review. Mol Diagn Ther 2021; 25:691-713. [PMID: 34432254 DOI: 10.1007/s40291-021-00551-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/19/2021] [Indexed: 01/05/2023]
Abstract
TOPIC The purpose of the review was to identify structural, functional, blood-based, and other types of biomarkers for early, intermediate, and late nonexudative stages of age-related macular degeneration (AMD) and summarize the relevant data for proof-of-concept clinical trials. CLINICAL RELEVANCE AMD is a leading cause of blindness in the aging population, yet no treatments exist for its most common nonexudative form. There are limited data on the diagnosis and progression of nonexudative AMD compared to neovascular AMD. Our objective was to provide a comprehensive, systematic review of recently published biomarkers (molecular, structural, and functional) for early AMD, intermediate AMD, and geographic atrophy and to evaluate the relevance of these biomarkers for use in future clinical trials. METHODS A literature search of PubMed, ScienceDirect, EMBASE, and Web of Science from January 1, 1996 to November 30, 2020 and a patent search were conducted. Search terms included "early AMD," "dry AMD," "intermediate AMD," "biomarkers for nonexudative AMD," "fundus autofluorescence patterns," "color fundus photography," "dark adaptation," and "microperimetry." Articles were assessed for bias and quality with the Mixed-Methods Appraisal Tool. A total of 94 articles were included (61,842 individuals). RESULTS Spectral-domain optical coherence tomography was superior at highlighting detailed structural changes in earlier stages of AMD. Fundus autofluorescence patterns were found to be most important in estimating progression of geographic atrophy. Delayed rod intercept time on dark adaptation was the most widely recommended surrogate functional endpoint for early AMD, while retinal sensitivity on microperimetry was most relevant for intermediate AMD. Combinational studies accounting for various patient characteristics and machine/deep-learning approaches were best suited for assessing individualized risk of AMD onset and progression. 
CONCLUSION This systematic review supports the use of structural and functional biomarkers in early AMD and intermediate AMD, which are more reproducible and less invasive than the other classes of biomarkers described. The use of deep learning and combinational algorithms will gain increasing importance in future clinical trials of nonexudative AMD.
Collapse
Affiliation(s)
- Vivienne Fang
- Northwestern University Feinberg School of Medicine, 420 E. Superior St, Chicago, IL, 60611, USA
| | - Maria Gomez-Caraballo
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, DUMC 3802, Durham, NC, 27705, USA
| | - Eleonora M Lad
- Department of Ophthalmology, Duke University Medical Center, 2351 Erwin Rd, DUMC 3802, Durham, NC, 27705, USA
| |
Collapse
|
41
|
Romond K, Alam M, Kravets S, Sisternes LD, Leng T, Lim JI, Rubin D, Hallak JA. Imaging and artificial intelligence for progression of age-related macular degeneration. Exp Biol Med (Maywood) 2021; 246:2159-2169. [PMID: 34404252 DOI: 10.1177/15353702211031547] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Age-related macular degeneration (AMD) is a leading cause of severe vision loss. With our aging population, it may affect 288 million people globally by the year 2040. AMD progresses from an early and intermediate dry form to an advanced one, which manifests as choroidal neovascularization and geographic atrophy. Conversion to AMD-related exudation is known as progression to neovascular AMD, and presence of geographic atrophy is known as progression to advanced dry AMD. AMD progression predictions could enable timely monitoring, earlier detection and treatment, improving vision outcomes. Machine learning approaches, a subset of artificial intelligence applications, applied on imaging data are showing promising results in predicting progression. Extracted biomarkers, specifically from optical coherence tomography scans, are informative in predicting progression events. The purpose of this mini review is to provide an overview about current machine learning applications in artificial intelligence for predicting AMD progression, and describe the various methods, data-input types, and imaging modalities used to identify high-risk patients. With advances in computational capabilities, artificial intelligence applications are likely to transform patient care and management in AMD. External validation studies that improve generalizability to populations and devices, as well as evaluating systems in real-world clinical settings are needed to improve the clinical translations of artificial intelligence AMD applications.
Collapse
Affiliation(s)
- Kathleen Romond
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| | - Minhaj Alam
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
| | - Sasha Kravets
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA; Division of Epidemiology and Biostatistics, School of Public Health, University of Illinois at Chicago, Chicago, IL 60612, USA
| | - Theodore Leng
- Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA 94303, USA
| | - Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
| | - Joelle A Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
| |
Collapse
|
42
|
Corvi F, Sadda SR. Progression of geographic atrophy. Expert Review of Ophthalmology 2021. [DOI: 10.1080/17469899.2021.1951231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Federico Corvi
- Doheny Eye Institute, California, United States
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California, United States
| | - SriniVas R. Sadda
- Doheny Eye Institute, California, United States
- Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, California, United States
| |
Collapse
|
43
|
Reguant R, Brunak S, Saha S. Understanding inherent image features in CNN-based assessment of diabetic retinopathy. Sci Rep 2021; 11:9704. [PMID: 33958686 PMCID: PMC8102512 DOI: 10.1038/s41598-021-89225-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 04/20/2021] [Indexed: 11/20/2022] Open
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness and affects millions of people throughout the world. Early detection and timely checkups are key to reducing the risk of blindness, and automated grading of DR is a cost-effective way to ensure both. Deep learning, or more specifically convolutional neural network (CNN)-based methods, produces state-of-the-art performance in DR detection. However, although CNN-based methods have been proposed, the image features they extract have not been examined for their clinical relevance. Here we first adopt a CNN visualization strategy to discover the inherent image features involved in the CNN's decision-making process. Then, we critically analyze those features with respect to commonly known pathologies, namely microaneurysms, hemorrhages, and exudates, and other ocular components. We also critically analyze different CNNs by considering what image features they pick up during learning to predict, and justify their clinical relevance. The experiments are executed on publicly available fundus datasets (EyePACS and DIARETDB1), achieving an accuracy of 89 ~ 95% with AUC, sensitivity, and specificity of 95 ~ 98%, 74 ~ 86%, and 93 ~ 97%, respectively, for disease-level grading of DR. Whilst different CNNs produce consistent classification results, the disagreement between models in the image features they pick up could be as high as 70%.
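The accuracy, sensitivity, and specificity figures above follow the standard confusion-matrix definitions, which can be sketched as follows (illustrative Python; function and key names are assumptions, not the study's code):

```python
def grading_metrics(pred, truth):
    """Accuracy, sensitivity (recall on disease) and specificity for
    binary DR grading. 1 = referable DR, 0 = no referable DR."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    return {
        "accuracy": (tp + tn) / len(pred),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```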
Collapse
Affiliation(s)
- Roc Reguant
- Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200, Copenhagen N, Denmark.
- Australian E-Health Research Centre, CSIRO, Perth, Australia.
| | - Søren Brunak
- Novo Nordisk Foundation Center for Protein Research, University of Copenhagen, 2200, Copenhagen N, Denmark
| | - Sajib Saha
- Australian E-Health Research Centre, CSIRO, Perth, Australia
| |
Collapse
|
44
|
Thomas A, Sunija AP, Manoj R, Ramachandran R, Ramachandran S, Varun PG, Palanisamy P. RPE layer detection and baseline estimation using statistical methods and randomization for classification of AMD from retinal OCT. Comput Methods Programs Biomed 2021; 200:105822. [PMID: 33190943 DOI: 10.1016/j.cmpb.2020.105822] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Accepted: 10/27/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Age-related macular degeneration (AMD) is a condition of the eye that affects aged people. Optical coherence tomography (OCT) is a diagnostic tool capable of analyzing and identifying the disease-affected retinal layers with high resolution. The objective of this work is to extract the retinal pigment epithelium (RPE) layer and the baseline (the natural eye curvature, particular to every patient) from retinal spectral-domain OCT (SD-OCT) images, and to use them to find the height of drusen (abnormalities) in the RPE layer and classify the image as AMD or normal. METHODS In the proposed work, a contrast-enhancement-based adaptive denoising technique is used for speckle elimination. Pixel grouping and iterative elimination based on knowledge of typical layer intensities and positions are used to obtain the RPE layer. Using this estimate, randomization techniques are employed, followed by polynomial fitting and drusen removal, to arrive at a baseline estimate. The classification is based on the drusen height obtained by taking the difference between the RPE and baseline levels. We use a patient-wise classification approach in which a patient is classified as diseased if more than a threshold number of that patient's images contain drusen above a certain height; since not all slices of an affected patient show drusen, this approach is justified. RESULTS The proposed method is tested on a public data set of 2130 images/slices belonging to 30 patient volumes (15 AMD and 15 normal) and achieved an overall accuracy of 96.66%, with no false positives. In comparison with existing works, the proposed method achieved higher overall accuracy and a better baseline estimate. CONCLUSIONS The proposed work performs AMD/normal classification using a statistical approach and does not require any training. The proposed method modifies the motion restoration paradigm to obtain an application-specific denoising algorithm.
The existing RPE detection algorithm is modified significantly to make it robust and applicable even to images where the RPE is not very evident or contains a significant number of perforations (drusen). The baseline estimation algorithm employs a powerful combination of randomization, iterative polynomial fitting, and pixel elimination, in contrast to mere fitting techniques. The main highlight of this work is that it achieves an exact estimation of the baseline in the retinal image compared to the existing methods.
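The drusen-height and patient-wise classification rules described above can be sketched as follows. This is an illustrative sketch under assumed names, thresholds, and units (heights above the baseline, in pixels); the paper's denoising, RPE detection, and randomized baseline fitting are not reproduced:

```python
def drusen_heights(rpe, baseline):
    """Per-column drusen height: elevation of the detected RPE above the
    fitted baseline; negative differences are clipped to zero."""
    return [max(r - b, 0) for r, b in zip(rpe, baseline)]

def classify_patient(slice_heights, height_thresh, slice_thresh):
    """Patient-wise rule: a patient is labeled AMD if more than
    `slice_thresh` B-scans contain a drusen taller than `height_thresh`."""
    abnormal = sum(1 for hs in slice_heights if max(hs) > height_thresh)
    return "AMD" if abnormal > slice_thresh else "normal"
```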
Collapse
Affiliation(s)
- Anju Thomas
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - A P Sunija
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Rigved Manoj
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Rajiv Ramachandran
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - Srikkanth Ramachandran
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - P Gopi Varun
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| | - P Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamilnadu 620015, India.
| |
Collapse
|
45
|
Gong D, Kras A, Miller JB. Application of Deep Learning for Diagnosing, Classifying, and Treating Age-Related Macular Degeneration. Semin Ophthalmol 2021; 36:198-204. [PMID: 33617390 DOI: 10.1080/08820538.2021.1889617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Age-related macular degeneration (AMD) affects nearly 200 million people and is the third leading cause of irreversible vision loss worldwide. Deep learning, a branch of artificial intelligence that can learn image recognition based on pre-existing datasets, creates an opportunity for more accurate and efficient diagnosis, classification, and treatment of AMD on both individual and population levels. Current algorithms based on fundus photography and optical coherence tomography imaging have already achieved diagnostic accuracy levels comparable to human graders. This accuracy can be further increased when deep learning algorithms are simultaneously applied to multiple diagnostic imaging modalities. Combined with advances in telemedicine and imaging technology, deep learning can enable larger populations of patients to be screened than would otherwise be possible and allow ophthalmologists to focus on seeing those patients who are in need of treatment, thus reducing the number of patients with significant visual impairment from AMD.
Collapse
Affiliation(s)
- Dan Gong
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
| | - Ashley Kras
- Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| | - John B Miller
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA; Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
| |
Collapse
|
46
|
Corradetti G, Corvi F, Nittala MG, Nassisi M, Alagorie AR, Scharf J, Lee MY, Sadda SR, Sarraf D. Natural history of incomplete retinal pigment epithelial and outer retinal atrophy in age-related macular degeneration. Can J Ophthalmol 2021; 56:325-334. [PMID: 33539821 DOI: 10.1016/j.jcjo.2021.01.005] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Revised: 12/21/2020] [Accepted: 01/05/2021] [Indexed: 12/14/2022]
Abstract
OBJECTIVE To assess the time course and risk factors for conversion of incomplete retinal pigment epithelium and outer retina atrophy (iRORA) to complete retinal pigment epithelium and outer retina atrophy (cRORA) in eyes with non-neovascular intermediate age-related macular degeneration (iAMD), using optical coherence tomography (OCT) analysis. DESIGN Retrospective survival study. PARTICIPANTS Tracked structural Spectralis OCT (Heidelberg Engineering, Heidelberg, Germany) volume datasets from 2 retinal specialists at the University of California-Los Angeles were retrospectively screened to identify consecutive participants with non-neovascular iAMD without signs of atrophy or macular neovascularization in either eye at baseline. METHODS In the first stage of selection, 321 consecutive iAMD eyes were screened for onset of iRORA. Eyes that developed iRORA within the first 24 months were followed for an additional 24 months to assess the rate of conversion to cRORA. A Kaplan-Meier survival curve was formulated to illustrate the conversion from iRORA to cRORA. RESULTS Among 321 baseline participants with iAMD, 87 incident iRORA lesions (50 eyes, 42 participants) were included in the conversion analysis. Eighty-one iRORA lesions (93.1%) converted to cRORA within 24 months (median 14 months). Multivariate binary logistic regression analysis indicated that intraretinal hyperreflective foci and extrafoveal iRORA location at baseline were associated with a faster rate of progression to cRORA (model R2 = 0.816, p < 0.05). CONCLUSIONS The majority of incident iRORA lesions progress to cRORA within a 24-month period. These findings may be of value in the design of early intervention trials for risk stratification and prognostication but need to be validated with a prospective analysis.
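The Kaplan-Meier survival curve used above can be computed with the standard product-limit estimator. A minimal sketch (illustrative Python; `kaplan_meier` is a hypothetical helper, with time in months and event = conversion to cRORA):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up in months; events: 1 = converted, 0 = censored.
    Returns the list of (event time, survival probability) steps."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d:  # survival drops only at event times
            surv *= (at_risk - d) / at_risk
            curve.append((t, surv))
        at_risk -= sum(1 for ti in times if ti == t)  # events + censorings leave
    return curve
```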
Collapse
Affiliation(s)
- Giulia Corradetti: Doheny Eye Institute, Los Angeles, Calif.; Retina Disorders and Ophthalmic Genetics, Stein Eye Institute, University of California-Los Angeles, Los Angeles, Calif.
- Federico Corvi: Doheny Eye Institute, Los Angeles, Calif.; Eye Clinic, Department of Biomedical and Clinical Science "Luigi Sacco," Sacco Hospital, University of Milan, Milan, Italy
- Marco Nassisi: Doheny Eye Institute, Los Angeles, Calif.; Department of Clinical Sciences and Community Health, University of Milan, Milan, Italy; Ophthalmological Unit, Fondazione IRCCS Cà Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Ahmed Roshdy Alagorie: Doheny Eye Institute, Los Angeles, Calif.; Department of Ophthalmology, Faculty of Medicine, Tanta University, Tanta, Egypt
- Jackson Scharf: Retina Disorders and Ophthalmic Genetics, Stein Eye Institute, University of California-Los Angeles, Los Angeles, Calif.
- Mee Yon Lee: Retina Disorders and Ophthalmic Genetics, Stein Eye Institute, University of California-Los Angeles, Los Angeles, Calif.
- Srinivas R Sadda: Doheny Eye Institute, Los Angeles, Calif.; Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, Calif.
- David Sarraf: Retina Disorders and Ophthalmic Genetics, Stein Eye Institute, University of California-Los Angeles, Los Angeles, Calif.; Department of Ophthalmology, David Geffen School of Medicine at UCLA, Los Angeles, Calif.; Greater Los Angeles VA Healthcare Center, Los Angeles, Calif.
47
Artificial Intelligence and Deep Learning in Ophthalmology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_200-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
48
Saha S, Wang Z, Sadda S, Kanagasingam Y, Hu Z. Visualizing and understanding inherent features in SD-OCT for the progression of age-related macular degeneration using deconvolutional neural networks. APPLIED AI LETTERS 2020; 1:e16. [PMID: 36478669 PMCID: PMC9725889 DOI: 10.1002/ail2.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
To develop a convolutional neural network visualization strategy so that the optical coherence tomography (OCT) features contributing to the evolution of age-related macular degeneration (AMD) can be better determined, we trained a U-Net model to predict the progression of geographic atrophy (GA), a late-stage manifestation of AMD, from baseline OCT. We augmented the U-Net architecture by attaching deconvolutional neural networks (deconvnets). Deconvnets produce reconstructed feature maps and indicate which inherent baseline OCT features contribute to GA progression. Experiments were conducted on longitudinal spectral domain (SD)-OCT and fundus autofluorescence images collected from 70 eyes with GA. The intensity of the Bruch's membrane-outer choroid (BMChoroid) retinal junction exhibited a relative importance of 24% in GA progression, and the intensity of the inner retinal pigment epithelium (RPE)-Bruch's membrane junction (InRPEBM) a relative importance of 22%. BMChoroid (which captures AMD-related damage to the choriocapillaris), followed by InRPEBM (which captures AMD-related damage to the RPE), appear to be the layers most relevant to predicting the progression of AMD.
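The deconvnet idea attached to the U-Net in this study rests on the transposed convolution, which projects a feature map back to input-sized space. A minimal 1-D sketch, assuming a toy hand-picked kernel rather than the paper's trained filters (`conv1d_valid` and `transposed_conv1d` are illustrative names, not the authors' code):

```python
# 1-D, stride-1 sketch of the forward convolution and the
# "deconvolution" (transposed convolution) that deconvnets use
# to map a feature activation back into input space.

def conv1d_valid(x, k):
    """Forward valid convolution (cross-correlation, no flip)."""
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def transposed_conv1d(y, k):
    """Project a feature map y back to length len(y) + len(k) - 1."""
    out = [0.0] * (len(y) + len(k) - 1)
    for i, v in enumerate(y):
        for j, w in enumerate(k):
            out[i + j] += v * w   # scatter each activation through the kernel
    return out

signal = [0.0, 1.0, 2.0, 1.0, 0.0]       # toy A-scan intensity profile
kernel = [1.0, -1.0]                      # toy edge-detecting filter
feat = conv1d_valid(signal, kernel)       # forward-pass feature map
recon = transposed_conv1d(feat, kernel)   # deconvnet-style projection
```

The projection highlights, in input coordinates, which positions drove each activation; in the paper the same principle is used on trained U-Net filters to rank retinal-layer contributions.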
Affiliation(s)
- Sajib Saha: Doheny Eye Institute, Los Angeles, California; Australian e-Health Research Centre, CSIRO, Perth, Australia
- Ziyuan Wang: Doheny Eye Institute, Los Angeles, California; The University of California, Los Angeles, California
- Srinivas Sadda: Doheny Eye Institute, Los Angeles, California; The University of California, Los Angeles, California
- Zhihong Hu: Doheny Eye Institute, Los Angeles, California
49
Structural Features Associated With the Development and Progression of RORA Secondary to Maternally Inherited Diabetes and Deafness. Am J Ophthalmol 2020; 218:136-147. [PMID: 32446735 DOI: 10.1016/j.ajo.2020.05.023] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Revised: 05/13/2020] [Accepted: 05/14/2020] [Indexed: 11/23/2022]
Abstract
PURPOSE To investigate the development and progression of retinal pigment epithelial and outer retinal atrophy (RORA) secondary to maternally inherited diabetes and deafness (MIDD). DESIGN Retrospective observational case series. METHODS Thirty-six eyes of 18 patients (age range, 22.4-71.6 years) with genetically proven MIDD and serial optical coherence tomography (OCT) images were included. As a proposed reference standard to diagnose and stage atrophy, OCT images were longitudinally evaluated and analyzed for the presence and precursors of RORA. RORA was defined as an area of (1) hypertransmission, (2) disruption of the retinal pigment epithelium, (3) photoreceptor degeneration, and (4) absence of other signs of a retinal pigment epithelial tear. RESULTS The majority of patients revealed areas of RORA in a circular zone around the fovea between 5° and 15° of eccentricity. Over the observation time (range, 0.5-8.5 years), evidence for a consistent sequence of OCT features from earlier disease stages to the end stage of RORA could be found, starting with loss of the ellipsoid zone and subretinal deposits, followed by loss of the external limiting membrane and loss of the retinal pigment epithelium with hypertransmission of the OCT signal into the choroid, and leading to loss of the outer nuclear layer bordered by hyporeflective wedges. Outer retinal tubulations seemed to develop in regions of coalescent areas of RORA. CONCLUSIONS The development and progression of RORA could be tracked in MIDD patients using OCT images, allowing the potential definition of novel surrogate markers. Similarities to OCT features in age-related macular degeneration, where mitochondrial dysfunction has been implicated in the pathogenesis, support wide-ranging benefits from proof-of-concept studies in MIDD.
50
Quantitative Assessment of the Severity of Diabetic Retinopathy. Am J Ophthalmol 2020; 218:342-352. [PMID: 32446737 DOI: 10.1016/j.ajo.2020.05.021] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 05/13/2020] [Accepted: 05/14/2020] [Indexed: 12/28/2022]
Abstract
PURPOSE To determine whether a quantitative approach to assessment of the severity of diabetic retinopathy (DR) lesions on ultrawide field (UWF) images can provide new parameters to predict progression to proliferative diabetic retinopathy (PDR). METHODS One hundred forty-six eyes from 73 participants with DR and 4 years of follow-up data were included in this post hoc analysis, which was based on a cohort of 100 diabetic patients enrolled in a previously published prospective, comparative study of UWF imaging at the Joslin Diabetes Center. Diabetic Retinopathy Severity Score level was determined at baseline and 4-year follow-up visits using mydriatic 7-standard field Early Treatment Diabetic Retinopathy Study (ETDRS) photographs. All individual DR lesions (hemorrhage [H], microaneurysm [ma], cotton wool spot [CWS], intraretinal microvascular abnormality [IRMA]) were manually segmented on stereographically projected UWF images. For each lesion type, the frequency/number, surface area, and distance from the optic nerve head (ONH) were computed. These quantitative parameters were compared between eyes that progressed to PDR within 4 years and eyes that did not progress. Univariable and multivariable logistic regression analyses were performed to identify parameters associated with an increased risk for progression to PDR. RESULTS A total of 146 eyes of 73 subjects were included in the final analysis. The mean age of the study cohort was 53.1 years, and 42 (56.8%) subjects were female. The number and surface area of H/ma's and CWSs were significantly (P ≤ .05) higher in eyes that progressed to PDR compared with eyes that did not progress by 4 years. Similarly, H/ma's and CWSs were located farther from the ONH (ie, more peripherally) in eyes that progressed (P < .05).
DR lesion parameters that conferred a statistically significant increased risk for proliferative diabetic retinopathy in the multivariate model included hemorrhage area (odds ratio [OR], 2.63; 95% confidence interval [CI], 1.25-5.53), and greater distance of hemorrhages from the ONH (OR, 1.24; 95% CI, 0.97-1.59). CONCLUSIONS Quantitative analysis of DR lesions on UWF images identifies new risk parameters for progression to PDR including the surface area of hemorrhages and the distance of hemorrhages from the ONH. Although these risk factors will need to be confirmed in larger, prospective studies, they highlight the potential for quantitative lesion analysis to inform the design of a more precise and complete staging system for diabetic retinopathy severity in the future. NOTE: Publication of this article is sponsored by the American Ophthalmological Society.
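The reported odds ratios follow the standard logistic-regression arithmetic: OR = exp(beta), with the 95% CI given by exp(beta ± 1.96 · SE). A small sketch that back-solves beta and SE from the paper's hemorrhage-area figures purely to illustrate this relationship (not the authors' actual model fit):

```python
# Relationship between a logit coefficient and the reported OR / 95% CI.
# beta and se below are back-solved from the published hemorrhage-area
# OR of 2.63 (CI 1.25-5.53) solely to demonstrate the arithmetic.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logit coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

beta = math.log(2.63)                                # coefficient implied by OR 2.63
se = (math.log(5.53) - math.log(1.25)) / (2 * 1.96)  # SE implied by the CI width
or_, lo, hi = odds_ratio_ci(beta, se)
```

Because the CI is symmetric on the log-odds scale, recovering roughly 1.25 and 5.53 from the implied beta and SE confirms the arithmetic; it also makes plain that the distance-from-ONH CI (0.97-1.59) crosses 1 on the same scale.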