1. Ferro Desideri L, Danilovska T, Bernardi E, Artemiev D, Paschon K, Hayoz M, Jungo A, Sznitman R, Zinkernagel MS, Anguita R. Artificial Intelligence-Enhanced OCT Biomarkers Analysis in Macula-off Rhegmatogenous Retinal Detachment Patients. Transl Vis Sci Technol 2024; 13:21. PMID: 39392437; PMCID: PMC11472884; DOI: 10.1167/tvst.13.10.21.
Abstract
Purpose: To identify optical coherence tomography (OCT) biomarkers for macula-off rhegmatogenous retinal detachment (RRD) with artificial intelligence (AI) and to correlate these biomarkers with functional outcomes.
Methods: Patients with macula-off RRD treated with single vitrectomy and gas tamponade were included. OCT volumes, taken at 4 to 6 weeks and 1 year postoperatively, were uploaded to an AI-based platform (Discovery OCT Biomarker Detector; RetinAI AG, Bern, Switzerland), which measured retinal layer thicknesses, including outer nuclear layer (ONL), photoreceptor and retinal pigment epithelium (PR + RPE), intraretinal fluid (IRF), and subretinal fluid, and detected biomarker probabilities, including hyperreflective foci (HF). A random forest model assessed the predictive factors for final best-corrected visual acuity (BCVA).
Results: Fifty-nine patients (42 male, 17 female) were enrolled. Baseline BCVA was 0.5 ± 0.1 logMAR (logarithm of the minimum angle of resolution), improving significantly to 0.3 ± 0.1 logMAR at the final visit (P < 0.001). Average thickness analysis showed a significant increase by the last follow-up visit for ONL (from 95.16 ± 5.47 µm to 100.8 ± 5.27 µm, P = 0.0007) and PR + RPE (60.9 ± 2.6 µm to 66.2 ± 1.8 µm, P = 0.0001). The average occurrence rate of HF was 0.12 ± 0.06 at the initial visit and 0.08 ± 0.05 at the last follow-up visit (P = 0.0093). The random forest model identified baseline BCVA as the most important predictor of final BCVA, followed by ONL thickness, HF, and IRF presence at the initial visit.
Conclusions: Increased ONL and PR + RPE thickness were associated with better outcomes, whereas the presence of HF indicated poorer results, with initial BCVA remaining the primary visual predictor.
Translational Relevance: The study underscores the role of novel biomarkers such as HF in understanding visual function in macula-off RRD.
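The random forest analysis described above can be reproduced in outline with scikit-learn; the sketch below is illustrative only, with hypothetical feature and file names and placeholder hyperparameters rather than the study's actual dataset or settings.

```python
# Minimal sketch: predict final BCVA from baseline OCT biomarkers with a
# random forest and rank the predictors by feature importance.
# Column names and the CSV file are hypothetical, not the study's data.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("rrd_biomarkers.csv")  # hypothetical tabular export
features = ["baseline_bcva", "onl_thickness_um", "pr_rpe_thickness_um",
            "hf_probability", "irf_present"]
X, y = df[features], df["final_bcva"]

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")  # e.g. baseline BCVA expected to rank first
```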
Affiliation(s)
- Lorenzo Ferro Desideri: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland; Bern Photographic Reading Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Tamara Danilovska: ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Enrico Bernardi: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland
- Dmitri Artemiev: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland
- Karin Paschon: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland
- Michel Hayoz: Department for BioMedical Research, University of Bern, Bern, Switzerland
- Alain Jungo: Department for BioMedical Research, University of Bern, Bern, Switzerland
- Raphael Sznitman: Department for BioMedical Research, University of Bern, Bern, Switzerland
- Martin S. Zinkernagel: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland; Bern Photographic Reading Center, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Rodrigo Anguita: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department for BioMedical Research, University of Bern, Bern, Switzerland; Moorfields Eye Hospital NHS Foundation Trust, London, UK
2. de Vente C, van Ginneken B, Hoyng CB, Klaver CCW, Sánchez CI. Uncertainty-aware multiple-instance learning for reliable classification: Application to optical coherence tomography. Med Image Anal 2024; 97:103259. PMID: 38959721; DOI: 10.1016/j.media.2024.103259.
Abstract
Deep learning classification models for medical image analysis often perform well on data from scanners that were used to acquire the training data. However, when these models are applied to data from different vendors, their performance tends to drop substantially. Artifacts that only occur within scans from specific scanners are major causes of this poor generalizability. We aimed to enhance the reliability of deep learning classification models using a novel method called Uncertainty-Based Instance eXclusion (UBIX). UBIX is an inference-time module that can be employed in multiple-instance learning (MIL) settings. MIL is a paradigm in which instances (generally crops or slices) of a bag (generally an image) contribute towards a bag-level output. Instead of assuming equal contribution of all instances to the bag-level output, UBIX detects instances corrupted by local artifacts on the fly using uncertainty estimation, reducing or fully ignoring their contributions before MIL pooling. In our experiments, instances are 2D slices and bags are volumetric images, but alternative definitions are also possible. Although UBIX is generally applicable to diverse classification tasks, we focused on the staging of age-related macular degeneration in optical coherence tomography. Our models were trained on data from a single scanner and tested on external datasets from different vendors, which included vendor-specific artifacts. UBIX showed reliable behavior, with a slight decrease in performance (quadratic weighted kappa (κw) falling from 0.861 to 0.708) when applied to images from different vendors containing artifacts, while a state-of-the-art 3D neural network without UBIX suffered a substantial drop in performance (κw from 0.852 to 0.084) on the same test set. We showed that instances with unseen artifacts can be identified with OOD detection. UBIX can reduce their contribution to the bag-level predictions, improving reliability without retraining on new data. This potentially increases the applicability of artificial intelligence models to data from scanners other than the ones for which they were developed. The source code for UBIX, including trained model weights, is publicly available through https://github.com/qurAI-amsterdam/ubix-for-reliable-classification.
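A minimal sketch of the uncertainty-based instance exclusion idea: per-slice (instance) predictions whose uncertainty exceeds a threshold are dropped before MIL pooling. This is a generic illustration of the concept, not the authors' released implementation (available at the linked repository); the tensor shapes, the entropy-based uncertainty, and the thresholding rule are assumptions.

```python
# Sketch of uncertainty-aware MIL pooling over the B-scans (instances) of an
# OCT volume (bag). Instances with high predictive uncertainty are excluded
# before pooling, so locally corrupted slices cannot dominate the bag output.
import torch

def ubix_style_pooling(instance_logits: torch.Tensor,
                       instance_uncertainty: torch.Tensor,
                       threshold: float = 0.5) -> torch.Tensor:
    """instance_logits: (n_slices, n_classes); instance_uncertainty: (n_slices,)."""
    keep = instance_uncertainty <= threshold          # drop high-uncertainty slices
    if not keep.any():                                # fallback: keep everything
        keep = torch.ones_like(keep)
    probs = instance_logits[keep].softmax(dim=-1)
    return probs.max(dim=0).values                    # max-pooling over kept instances

# Example with normalized entropy of the softmax output as the uncertainty.
logits = torch.randn(64, 4)                           # 64 B-scans, 4 AMD stages
probs = logits.softmax(-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
bag_prediction = ubix_style_pooling(logits, entropy / entropy.max())
print(bag_prediction)
```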
Affiliation(s)
- Coen de Vente: Quantitative Healthcare Analysis (QurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, Noord-Holland, Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, Noord-Holland, Netherlands; Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, Gelderland, Netherlands
- Bram van Ginneken: Diagnostic Image Analysis Group (DIAG), Department of Radiology and Nuclear Medicine, Radboudumc, Nijmegen, Gelderland, Netherlands
- Carel B Hoyng: Department of Ophthalmology, Radboudumc, Nijmegen, Gelderland, Netherlands
- Caroline C W Klaver: Department of Ophthalmology, Radboudumc, Nijmegen, Gelderland, Netherlands; Ophthalmology & Epidemiology, Erasmus MC, Rotterdam, Zuid-Holland, Netherlands
- Clara I Sánchez: Quantitative Healthcare Analysis (QurAI) Group, Informatics Institute, University of Amsterdam, Amsterdam, Noord-Holland, Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, Noord-Holland, Netherlands
3. Akpinar MH, Sengur A, Faust O, Tong L, Molinari F, Acharya UR. Artificial intelligence in retinal screening using OCT images: A review of the last decade (2013-2023). Comput Methods Programs Biomed 2024; 254:108253. PMID: 38861878; DOI: 10.1016/j.cmpb.2024.108253.
Abstract
BACKGROUND AND OBJECTIVES: Optical coherence tomography (OCT) has ushered in a transformative era in ophthalmology, offering non-invasive, high-resolution imaging for ocular disease detection. The frequent use of OCT in diagnosing fundamental ocular pathologies, such as glaucoma and age-related macular degeneration (AMD), has played an important role in the widespread adoption of this technology. Apart from glaucoma and AMD, we also investigate pertinent pathologies such as epiretinal membrane (ERM), macular hole (MH), macular dystrophy (MD), vitreomacular traction (VMT), diabetic maculopathy (DMP), cystoid macular edema (CME), central serous chorioretinopathy (CSC), diabetic macular edema (DME), diabetic retinopathy (DR), drusen, glaucomatous optic neuropathy (GON), neovascular AMD (nAMD), myopic macular degeneration (MMD), and choroidal neovascularization (CNV). This comprehensive review examines the role that OCT-derived images play in detecting, characterizing, and monitoring eye diseases.
METHOD: The 2020 PRISMA guideline was used to structure a systematic review of research on various eye conditions using machine learning (ML) or deep learning (DL) techniques. A thorough search across the IEEE, PubMed, Web of Science, and Scopus databases yielded 1787 publications, of which 1136 remained after removing duplicates. Subsequent exclusion of conference papers, review papers, and non-open-access articles reduced the selection to 511 articles. Further scrutiny led to the exclusion of 435 more articles due to lower-quality indexing or irrelevance, resulting in 76 journal articles for the review.
RESULTS: During our investigation, we found that a major challenge for ML-based decision support is the abundance of features and the determination of their significance. In contrast, DL-based decision support is characterized by a plug-and-play nature rather than relying on a trial-and-error approach. Furthermore, we observed that pre-trained networks are practical and especially useful when working on complex images such as OCT; consequently, pre-trained deep networks were frequently utilized for classification tasks. Currently, medical decision support aims to reduce the workload of ophthalmologists and retina specialists during routine tasks. In the future, it might be possible to create continuous learning systems that can predict ocular pathologies by identifying subtle changes in OCT images.
Affiliation(s)
- Muhammed Halil Akpinar: Department of Electronics and Automation, Vocational School of Technical Sciences, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Abdulkadir Sengur: Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
- Oliver Faust: School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Louis Tong: Singapore Eye Research Institute, Singapore, Singapore
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
4. Aresta G, Araujo T, Reiter GS, Mai J, Riedl S, Grechenig C, Guymer RH, Wu Z, Schmidt-Erfurth U, Bogunovic H. Deep Neural Networks for Automated Outer Plexiform Layer Subsidence Detection on Retinal OCT of Patients With Intermediate AMD. Transl Vis Sci Technol 2024; 13:7. PMID: 38874975; PMCID: PMC11182370; DOI: 10.1167/tvst.13.6.7.
Abstract
Purpose: The subsidence of the outer plexiform layer (OPL) is an important imaging biomarker on optical coherence tomography (OCT) associated with early outer retinal atrophy and a risk factor for progression to geographic atrophy in patients with intermediate age-related macular degeneration (AMD). Deep neural networks (DNNs) for OCT can support automated detection and localization of this biomarker.
Methods: The method predicts potential OPL subsidence locations on retinal OCTs. A detection module (DM) infers bounding boxes around subsidences with a likelihood score, and a classification module (CM) assesses subsidence presence at the B-scan level. Overlapping boxes between B-scans are combined and scored by the product of the DM and CM predictions. The volume-wise score is the maximum prediction across all B-scans. One development and one independent external data set were used with 140 and 26 patients with AMD, respectively.
Results: The system detected more than 85% of OPL subsidences with less than one false-positive (FP)/scan. The average area under the curve was 0.94 ± 0.03 for volume-level detection. Similar or better performance was achieved on the independent external data set.
Conclusions: DNN systems can efficiently perform automated retinal layer subsidence detection in retinal OCT images. In particular, the proposed DNN system detects OPL subsidence with high sensitivity and a very limited number of FP detections.
Translational Relevance: DNNs enable objective identification of early signs associated with high risk of progression to the atrophic late stage of AMD, ideally suited for screening and assessing the efficacy of the interventions aiming to slow disease progression.
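The score fusion described above (detection-module box likelihoods multiplied by the classification-module B-scan probability, with the volume score taken as the maximum across B-scans) can be written compactly; the snippet below is a schematic re-statement under assumed data structures, not the authors' code.

```python
# Combine per-B-scan detection and classification outputs into a volume-level
# OPL-subsidence score: box score = DM likelihood * CM slice probability,
# volume score = max over all B-scans.
from typing import List, Tuple

def volume_subsidence_score(
    dm_boxes_per_bscan: List[List[Tuple[Tuple[int, int, int, int], float]]],
    cm_probs_per_bscan: List[float],
) -> float:
    slice_scores = []
    for boxes, cm_prob in zip(dm_boxes_per_bscan, cm_probs_per_bscan):
        best_box = max((score for _, score in boxes), default=0.0)
        slice_scores.append(best_box * cm_prob)      # DM x CM fusion
    return max(slice_scores, default=0.0)            # volume-wise maximum

# Toy example: 3 B-scans, one with a confident detection.
dm = [[], [((10, 20, 40, 60), 0.9)], [((5, 5, 30, 30), 0.2)]]
cm = [0.1, 0.8, 0.3]
print(volume_subsidence_score(dm, cm))  # 0.72
```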
Affiliation(s)
- Guilherme Aresta: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University Vienna, Vienna, Austria
- Teresa Araujo: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University Vienna, Vienna, Austria
- Gregor S. Reiter: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Julia Mai: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Christoph Grechenig: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Robyn H. Guymer: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Zhichao Wu: Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Ursula Schmidt-Erfurth: Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunovic: Christian Doppler Laboratory for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University Vienna, Vienna, Austria; Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
5. Liu X, Zhu X, Zhang Y, Wang M. Point based weakly semi-supervised biomarker detection with cross-scale and label assignment in retinal OCT images. Comput Methods Programs Biomed 2024; 251:108229. PMID: 38761413; DOI: 10.1016/j.cmpb.2024.108229.
Abstract
BACKGROUND AND OBJECTIVE: Optical coherence tomography (OCT) is currently one of the most advanced retinal imaging methods. Retinal biomarkers in OCT images are of clinical significance and can assist ophthalmologists in diagnosing lesions. Compared with fundus images, OCT can provide higher resolution segmentation. However, image annotation at the bounding box level must be performed carefully by ophthalmologists and is difficult to obtain. In addition, the large variation in shape of different retinal markers and the inconspicuous appearance of biomarkers make it difficult for existing deep learning-based methods to detect them effectively. To overcome these challenges, we propose a novel network for the detection of retinal biomarkers in OCT images.
METHODS: We first address the issue of labeling cost using a novel weakly semi-supervised object detection method with point annotations, which can reduce bounding box-level annotation effort. To extend the method to the detection of biomarkers in OCT images, we propose multiple consistent regularizations for the point-to-box regression network to deal with the shortage of supervision, aiming to learn more accurate regression mappings. Furthermore, in the subsequent fully supervised detection, we propose a cross-scale feature enhancement module to alleviate the detection problems caused by the large-scale variation of biomarkers. We also propose a dynamic label assignment strategy to distinguish samples of different importance more flexibly, thereby reducing detection errors due to the indistinguishable appearance of the biomarkers.
RESULTS: With our detection network, the regressor achieves an AP of 20.83% when utilizing a 5% fully labeled dataset partition, surpassing the performance of other comparative methods at 5% and 10%, and coming close to the 20.87% achieved by Point DETR under 20% full labeling. When using Group R-CNN as the point-to-box regressor, our detector achieves 27.21% AP in the 50% fully labeled dataset experiment, a 7.42% AP improvement over our detection network baseline, Faster R-CNN.
CONCLUSIONS: The experimental findings not only demonstrate the effectiveness of our approach with minimal bounding box annotations but also highlight the enhanced biomarker detection performance of the proposed modules. A detailed algorithmic flow is included in the supplementary material.
Affiliation(s)
- Xiaoming Liu: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, PR China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan 430065, PR China
- Xin Zhu: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, PR China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan 430065, PR China
- Ying Zhang: Aier Eye Hospital of Wuhan University, Wuhan 430064, PR China
- Man Wang: Aier Eye Hospital of Wuhan University, Wuhan 430064, PR China
6. Ferro Desideri L, Anguita R, Berger LE, Feenstra HMA, Scandella D, Sznitman R, Boon CJF, van Dijk EHC, Zinkernagel MS. Analysis of optical coherence tomography biomarker probability detection in central serous chorioretinopathy by using an artificial intelligence-based biomarker detector. Int J Retina Vitreous 2024; 10:42. PMID: 38822446; PMCID: PMC11140908; DOI: 10.1186/s40942-024-00560-6.
Abstract
AIM: To adopt a novel artificial intelligence (AI) optical coherence tomography (OCT)-based program to identify the presence of biomarkers associated with central serous chorioretinopathy (CSC) and to determine whether these can differentiate between acute and chronic CSC (aCSC and cCSC).
METHODS: Multicenter, observational study with a retrospective design enrolling treatment-naïve patients with aCSC and cCSC. The diagnosis of aCSC and cCSC was established with multimodal imaging, and subsequent follow-up visits were also considered for the current study. Baseline OCTs were analyzed by an AI-based platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). This software detects several biomarkers in each OCT scan, including subretinal fluid (SRF), intraretinal fluid (IRF), hyperreflective foci (HF), and flat irregular pigment epithelium detachment (FIPED). The presence of SRF was a necessary inclusion criterion for biomarker analysis, and OCT slabs without SRF were excluded from the analysis.
RESULTS: Overall, 160 eyes of 144 patients with CSC were enrolled, of which 100 eyes (62.5%) were diagnosed with cCSC and 60 eyes (37.5%) with aCSC. In the OCT slabs showing presence of SRF, the presence of biomarkers was clinically relevant (>50%) for HF and FIPED in both aCSC and cCSC. HF had an average percentage of 81% (± 20) in the cCSC group and 81% (± 15) in the aCSC group (p = 0.4295), and FIPED had a mean percentage of 88% (± 18) in cCSC vs. 89% (± 15) in aCSC (p = 0.3197).
CONCLUSION: We demonstrate that HF and FIPED are OCT biomarkers positively associated with CSC when present at baseline. While both HF and FIPED could aid in CSC diagnosis, they could not distinguish between aCSC and cCSC at the first visit. AI-assisted biomarker detection shows promise for reducing invasive imaging needs, but further validation through longitudinal studies is needed.
Affiliation(s)
- Lorenzo Ferro Desideri: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 15, Bern, CH-3010, Switzerland; Department for Bio Medical Research, University of Bern, Murtenstrasse 24, Bern, CH-3008, Switzerland; Bern Photographic Reading Center, Inselspital, University Hospital Bern, Bern, 3010, Switzerland
- Rodrigo Anguita: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 15, Bern, CH-3010, Switzerland; Moorfields Eye Hospital, NHS Foundation Trust, London, UK
- Lieselotte E Berger: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 15, Bern, CH-3010, Switzerland; Department for Bio Medical Research, University of Bern, Murtenstrasse 24, Bern, CH-3008, Switzerland
- Helena M A Feenstra: Department of Ophthalmology, Leiden University Medical Center, Leiden, the Netherlands
- Davide Scandella: ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Raphael Sznitman: ARTORG Research Center Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Camiel J F Boon: Department of Ophthalmology, Leiden University Medical Center, Leiden, the Netherlands; Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Elon H C van Dijk: Department of Ophthalmology, Leiden University Medical Center, Leiden, the Netherlands
- Martin S Zinkernagel: Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 15, Bern, CH-3010, Switzerland; Department for Bio Medical Research, University of Bern, Murtenstrasse 24, Bern, CH-3008, Switzerland; Bern Photographic Reading Center, Inselspital, University Hospital Bern, Bern, 3010, Switzerland
7. S V A, G DB, Raman R. Automatic Identification and Severity Classification of Retinal Biomarkers in SD-OCT Using Dilated Depthwise Separable Convolution ResNet with SVM Classifier. Curr Eye Res 2024; 49:513-523. PMID: 38251704; DOI: 10.1080/02713683.2024.2303713.
Abstract
PURPOSE: Diagnosis of uveitic macular edema (UME) using spectral-domain OCT (SD-OCT) is a promising method for early detection and monitoring of sight-threatening visual impairment. Viewing multiple B-scans and identifying biomarkers is challenging and time-consuming for clinical practitioners. To overcome these challenges, this paper proposes a hybrid image classification framework for predicting the presence of biomarkers such as intraretinal cysts (IRC), hyperreflective foci (HRF), hard exudates (HE), and neurosensory detachment (NSD) in OCT B-scans, along with their severity.
METHODS: A dataset of 10,880 B-scans from 85 uveitic patients was collected and graded by two board-certified ophthalmologists for the presence of biomarkers. A novel image classification framework, Dilated Depthwise Separable Convolution ResNet (DDSC-RN) with an SVM classifier, was developed to achieve network compression with a larger receptive field that captures both low- and high-level features of the biomarkers without loss of classification accuracy. The severity level of each biomarker is predicted from the feature map extracted by the proposed DDSC-RN network.
RESULTS: The proposed hybrid model was evaluated using ground-truth labels from the hospital. The deep learning model first identified the presence of biomarkers in B-scans, achieving an overall accuracy of 98.64%, comparable to the performance of other state-of-the-art models such as DRN-C-42 and ResNet-34. The SVM classifier then predicted the severity of each biomarker, achieving an overall accuracy of 89.3%.
CONCLUSIONS: The new hybrid model accurately identifies four retinal biomarkers on a tissue map and predicts their severity, outperforming other methods for identifying multiple biomarkers in complex OCT B-scans. This helps clinicians screen multiple B-scans of UME more effectively, leading to better treatment outcomes.
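As a rough illustration of the building blocks named above, the sketch below pairs a dilated depthwise-separable convolution block (PyTorch) with an SVM trained on pooled deep features (scikit-learn). The layer sizes, dilation rate, and pooling choice are assumptions for illustration, not the paper's DDSC-RN architecture.

```python
# Dilated depthwise-separable convolution block + SVM on pooled deep features:
# a schematic of the "deep feature extractor + SVM severity classifier" idea.
# Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class DilatedDepthwiseSeparableBlock(nn.Module):
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)   # per-channel conv
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))) + x)  # residual

# Toy feature extraction from OCT B-scans, then SVM severity classification.
backbone = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1),
                         DilatedDepthwiseSeparableBlock(32),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
with torch.no_grad():
    feats = backbone(torch.randn(16, 1, 224, 224)).numpy()   # 16 dummy B-scans
labels = [i % 3 for i in range(16)]                           # dummy severity grades
svm = SVC(kernel="rbf").fit(feats, labels)
print(svm.predict(feats[:4]))
```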
Affiliation(s)
- Adithiya S V: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Dharani Bai G: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rajiv Raman: Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, Tamil Nadu, India
8. Schlosser T, Beuth F, Meyer T, Kumar AS, Stolze G, Furashova O, Engelmann K, Kowerko D. Visual acuity prediction on real-life patient data using a machine learning based multistage system. Sci Rep 2024; 14:5532. PMID: 38448469; PMCID: PMC10917755; DOI: 10.1038/s41598-024-54482-2.
Abstract
In ophthalmology, intravitreal operative medication therapy (IVOM) is a widespread treatment for diseases such as age-related macular degeneration (AMD), diabetic macular edema, and retinal vein occlusion. However, in real-world settings, patients often suffer loss of vision on time scales of years despite therapy, and the prediction of visual acuity (VA) and the earliest possible detection of deterioration under real-life conditions are challenging due to heterogeneous and incomplete data. In this contribution, we present a workflow for the development of a research-compatible data corpus fusing different IT systems of the department of ophthalmology of a German maximum care hospital. The extensive data corpus allows predictive statements about the expected progression of a patient and his or her VA in each of the three diseases. For AMD, we found a significant deterioration of visual acuity over time. Within our proposed multistage system, we subsequently classify the VA progression into the three groups of therapy "winners", "stabilizers", and "losers" (WSL classification scheme). Our OCT biomarker classification using an ensemble of deep neural networks results in a classification accuracy (F1-score) of over 98%, enabling us to complete incomplete OCT documentation while exploiting it for a more precise VA modelling process. Our VA prediction requires at least four VA examinations and optionally OCT biomarkers from the same time period to predict the VA progression within a forecasted time frame, and is currently restricted to IVOM/no therapy. We achieve a final prediction accuracy of 69% in macro-average F1-score, in the same range as the ophthalmologists, who reached 57.8% and 50 ± 10.7% F1-scores.
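A toy sketch of the winner/stabilizer/loser idea: label a patient's VA trajectory by the change between the first and last of at least four VA examinations. The logMAR threshold used here is an arbitrary assumption for illustration and is not taken from the paper.

```python
# Toy "WSL" labeling of visual-acuity trajectories (logMAR values over time).
# A drop in logMAR means improvement. The margin below is illustrative only.
from typing import Sequence

def wsl_label(va_logmar: Sequence[float], margin: float = 0.1) -> str:
    if len(va_logmar) < 4:
        raise ValueError("need at least four VA examinations")
    delta = va_logmar[-1] - va_logmar[0]   # positive delta = vision got worse
    if delta <= -margin:
        return "winner"
    if delta >= margin:
        return "loser"
    return "stabilizer"

print(wsl_label([0.5, 0.4, 0.35, 0.3]))   # winner
print(wsl_label([0.3, 0.3, 0.32, 0.31]))  # stabilizer
print(wsl_label([0.2, 0.3, 0.4, 0.5]))    # loser
```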
Affiliation(s)
- Tobias Schlosser: Junior Professorship of Media Computing, Chemnitz University of Technology, 09107, Chemnitz, Germany
- Frederik Beuth: Junior Professorship of Media Computing, Chemnitz University of Technology, 09107, Chemnitz, Germany
- Trixy Meyer: Junior Professorship of Media Computing, Chemnitz University of Technology, 09107, Chemnitz, Germany
- Arunodhayan Sampath Kumar: Junior Professorship of Media Computing, Chemnitz University of Technology, 09107, Chemnitz, Germany
- Gabriel Stolze: Department of Ophthalmology, Klinikum Chemnitz gGmbH, 09116, Chemnitz, Germany
- Olga Furashova: Department of Ophthalmology, Klinikum Chemnitz gGmbH, 09116, Chemnitz, Germany
- Katrin Engelmann: Department of Ophthalmology, Klinikum Chemnitz gGmbH, 09116, Chemnitz, Germany
- Danny Kowerko: Junior Professorship of Media Computing, Chemnitz University of Technology, 09107, Chemnitz, Germany
9. Tejero JG, Neila PM, Kurmann T, Gallardo M, Zinkernagel M, Wolf S, Sznitman R. Predicting OCT biological marker localization from weak annotations. Sci Rep 2023; 13:19667. PMID: 37952011; PMCID: PMC10640596; DOI: 10.1038/s41598-023-47019-6.
Abstract
Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically localizes biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, requiring only B-scan-level presence annotations. We trained a neural network using 22,723 OCT B-scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output to biologically plausible solutions. The method was tested on a set of OCT volumes with 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information in the training process. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.
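The mapping of per-location predictions into ETDRS rings can be illustrated with a simple radial mask over the en-face plane; the ring radii follow the standard ETDRS grid (1, 3, and 6 mm diameters), while the pixel spacing, fovea-centred assumption, and max-aggregation rule are placeholders for illustration, not the paper's mapping.

```python
# Assign en-face pixel predictions to ETDRS rings (central 1 mm, inner 3 mm,
# outer 6 mm diameter) and report per-ring biomarker presence as the maximum
# predicted probability inside each ring.
import numpy as np

def etdrs_ring_presence(prob_map: np.ndarray, mm_per_pixel: float = 0.025):
    h, w = prob_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_mm = np.hypot(yy - cy, xx - cx) * mm_per_pixel   # radial distance from fovea
    rings = {
        "central": r_mm <= 0.5,
        "inner": (r_mm > 0.5) & (r_mm <= 1.5),
        "outer": (r_mm > 1.5) & (r_mm <= 3.0),
    }
    return {name: float(prob_map[mask].max()) for name, mask in rings.items()}

prob_map = np.random.rand(240, 240)   # dummy en-face IRF probability map
print(etdrs_ring_presence(prob_map))
```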
Affiliation(s)
- Javier Gamazo Tejero: Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Pablo Márquez Neila: Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Thomas Kurmann: Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Mathias Gallardo: Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
- Martin Zinkernagel: Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Sebastian Wolf: Department of Ophthalmology, Bern University Hospital, 3010, Bern, Switzerland
- Raphael Sznitman: Artificial Intelligence in Medical Imaging, University of Bern, 3008, Bern, Switzerland
10. Leingang O, Riedl S, Mai J, Reiter GS, Faustmann G, Fuchs P, Scholl HPN, Sivaprasad S, Rueckert D, Lotery A, Schmidt-Erfurth U, Bogunović H. Automated deep learning-based AMD detection and staging in real-world OCT datasets (PINNACLE study report 5). Sci Rep 2023; 13:19545. PMID: 37945665; PMCID: PMC10636170; DOI: 10.1038/s41598-023-46626-7.
Abstract
Real-world retinal optical coherence tomography (OCT) scans are available in abundance in primary and secondary eye care centres. They contain a wealth of information to be analyzed in retrospective studies. The associated electronic health records alone are often not enough to generate a high-quality dataset for clinical, statistical, and machine learning analysis. We have developed a deep learning-based age-related macular degeneration (AMD) stage classifier to efficiently identify the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages of AMD in retrospective data. We trained a two-stage convolutional neural network to classify macula-centered 3D volumes from Topcon OCT images into 4 classes: Normal, iAMD, GA and nAMD. In the first stage, a 2D ResNet50 is trained to identify the disease categories on the individual OCT B-scans, while in the second stage, four smaller models (ResNets) use the concatenated B-scan-wise output from the first stage to classify the entire OCT volume. Classification uncertainty estimates are generated with Monte-Carlo dropout at inference time. The model was trained on a real-world OCT dataset of 3765 scans from 1849 eyes and extensively evaluated, reaching an average ROC-AUC of 0.94 on a real-world test set.
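Monte-Carlo dropout at inference, as mentioned above, can be sketched as follows; the small classifier head and the number of forward passes are placeholders, not the study's two-stage architecture.

```python
# Monte-Carlo dropout at inference: keep dropout active, run several stochastic
# forward passes, and use the spread of the softmax outputs as an uncertainty
# estimate alongside the mean prediction.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                           nn.Dropout(p=0.5), nn.Linear(64, 4))  # 4 AMD classes

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, passes: int = 20):
    model.train()                      # keeps dropout stochastic at inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(passes)])
    return probs.mean(0), probs.std(0)  # predictive mean and per-class std

features = torch.randn(1, 128)          # e.g. pooled B-scan-level features
mean_prob, uncertainty = mc_dropout_predict(classifier, features)
print(mean_prob, uncertainty)
```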
Affiliation(s)
- Oliver Leingang: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Sophie Riedl: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Julia Mai: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Gregor S Reiter: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Georg Faustmann: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria; Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Philipp Fuchs: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hendrik P N Scholl: Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland; Department of Ophthalmology, University of Basel, Basel, Switzerland
- Sobha Sivaprasad: NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Daniel Rueckert: BioMedIA, Imperial College London, London, UK; Institute for AI and Informatics in Medicine, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Andrew Lotery: Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Ursula Schmidt-Erfurth: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
- Hrvoje Bogunović: Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria; Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
11. Leandro I, Lorenzo B, Aleksandar M, Dario M, Rosa G, Agostino A, Daniele T. OCT-based deep-learning models for the identification of retinal key signs. Sci Rep 2023; 13:14628. PMID: 37670066; PMCID: PMC10480174; DOI: 10.1038/s41598-023-41362-4.
Abstract
A new system based on binary Deep Learning (DL) convolutional neural networks has been developed to recognize specific retinal abnormality signs on Optical Coherence Tomography (OCT) images useful for clinical practice. Images from the local hospital database were retrospectively selected from 2017 to 2022. Images were labeled by two retinal specialists and included central fovea cross-section OCTs. Nine models were developed using the Visual Geometry Group 16 architecture to distinguish healthy versus abnormal retinas and to identify eight different retinal abnormality signs. A total of 21,500 OCT images were screened, and 10,770 central fovea cross-section OCTs were included in the study. The system achieved high accuracy in identifying healthy retinas and specific pathological signs, ranging from 93 to 99%. Accurately detecting abnormal retinal signs from OCT images is crucial for patient care. This study aimed to identify specific signs related to retinal pathologies, aiding ophthalmologists in diagnosis. The high-accuracy system identified healthy retinas and pathological signs, making it a useful diagnostic aid. Labelled OCT images remain a challenge, but our approach reduces dataset creation time and shows DL models' potential to improve ocular pathology diagnosis and clinical decision-making.
Affiliation(s)
- Inferrera Leandro: Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Borsatti Lorenzo: Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Marangoni Dario: Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Giglio Rosa: Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
- Accardo Agostino: Department of Engineering and Architecture, University of Trieste, Trieste, Italy
- Tognetto Daniele: Department of Medicine, Surgery and Health Sciences, Eye Clinic, Ophthalmology Clinic, University of Trieste, Piazza Dell'Ospitale 1, 34125, Trieste, Italy
12. Anderson M, Sadiq S, Nahaboo Solim M, Barker H, Steel DH, Habib M, Obara B. Biomedical Data Annotation: An OCT Imaging Case Study. J Ophthalmol 2023; 2023:5747010. PMID: 37650051; PMCID: PMC10465257; DOI: 10.1155/2023/5747010.
Abstract
In ophthalmology, optical coherence tomography (OCT) is a widely used imaging modality, allowing visualisation of the structures of the eye with objective and quantitative cross-sectional three-dimensional (3D) volumetric scans. Due to the quantity of data generated from OCT scans and the time taken for an ophthalmologist to inspect for various disease pathology features, automated image analysis in the form of deep neural networks has seen success for the classification and segmentation of OCT layers and quantification of features. However, existing high-performance deep learning approaches rely on huge training datasets with high-quality annotations, which are challenging to obtain in many clinical applications. The collection of annotations from less experienced clinicians has the potential to alleviate time constraints on more senior clinicians, allowing faster collection of medical image annotations; however, with less experience, there is the possibility of reduced annotation quality. In this study, we evaluate the quality of diabetic macular edema (DME) intraretinal fluid (IRF) biomarker image annotations on OCT B-scans from five clinicians with a range of experience. We also assess the effectiveness of annotating across multiple sessions following a training session led by an expert clinician. Our investigation shows notable variation in annotation performance that correlates with the clinician's experience in OCT image interpretation of DME, and that multiple annotation sessions have a limited effect on annotation quality.
Affiliation(s)
- Matthew Anderson: School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK
- Salman Sadiq: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- Hannah Barker: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- David H. Steel: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Maged Habib: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Boguslaw Obara: School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
13. Vali M, Nazari B, Sadri S, Pour EK, Riazi-Esfahani H, Faghihi H, Ebrahimiadib N, Azizkhani M, Innes W, Steel DH, Hurlbert A, Read JCA, Kafieh R. CNV-Net: Segmentation, Classification and Activity Score Measurement of Choroidal Neovascularization (CNV) Using Optical Coherence Tomography Angiography (OCTA). Diagnostics (Basel) 2023; 13:1309. PMID: 37046527; PMCID: PMC10093691; DOI: 10.3390/diagnostics13071309.
Abstract
This paper aims to present an artificial intelligence-based algorithm for the automated segmentation of choroidal neovascularization (CNV) areas and to identify the presence or absence of CNV activity criteria (branching, peripheral arcade, dark halo, shape, loop and anastomoses) in OCTA images.
Methods: This retrospective and cross-sectional study includes 130 OCTA images from 101 patients with treatment-naïve CNV. At baseline, OCTA volumes of 6 × 6 mm² were obtained to develop an AI-based algorithm to evaluate CNV activity based on five activity criteria, including tiny branching vessels, anastomoses and loops, peripheral arcades, and perilesional hypointense halos. The proposed algorithm comprises two steps. The first block includes the pre-processing and segmentation of CNVs in OCTA images using a modified U-Net network. The second block consists of five binary classification networks, each implemented with various models from scratch and using transfer learning from pre-trained networks.
Results: The proposed segmentation network yielded an average Dice coefficient of 0.86. The individual classifiers corresponding to the five activity criteria (branch, peripheral arcade, dark halo, shape, loop, and anastomoses) showed accuracies of 0.84, 0.81, 0.86, 0.85, and 0.82, respectively. The AI-based algorithm potentially allows the reliable detection and segmentation of CNV from OCTA alone, without the need for imaging with contrast agents. The evaluation of the activity criteria in CNV lesions obtains acceptable results, and this algorithm could enable the objective, repeatable assessment of CNV features.
14. TSSK-Net: Weakly supervised biomarker localization and segmentation with image-level annotation in retinal OCT images. Comput Biol Med 2023; 153:106467. PMID: 36584602; DOI: 10.1016/j.compbiomed.2022.106467.
Abstract
The localization and segmentation of biomarkers in OCT images are critical steps in retina-related disease diagnosis. Although fully supervised deep learning models can segment pathological regions, their performance relies on labor-intensive pixel-level annotations. Compared with dense pixel-level annotation, image-level annotation can reduce the burden of manual annotation. Existing methods for image-level annotation are usually based on class activation maps (CAM). However, current methods still suffer from model collapse, training instability, and anatomical mismatch due to the considerable variation in retinal biomarkers' shape, texture, and size. This paper proposes a novel weakly supervised biomarker localization and segmentation method requiring only image-level annotations. The technique is a Teacher-Student network with joint self-supervised contrastive learning and knowledge distillation-based anomaly localization, named TSSK-Net. Specifically, we treat retinal biomarker regions as abnormal regions distinct from normal regions. First, we propose a novel pre-training strategy based on supervised contrastive learning that encourages the model to learn the anatomical structure of normal OCT images. Second, we design a fine-tuning module and propose a novel hybrid network structure. The network includes a supervised contrastive loss for feature learning and a cross-entropy loss for classification learning. To further improve performance, we propose an efficient strategy to combine these two losses to preserve the anatomical structure and enhance the encoded feature representation. Finally, we design a knowledge distillation-based anomaly segmentation method that is effectively combined with the previous model to alleviate the challenge of insufficient supervision. Experimental results on a local dataset and a public dataset demonstrate the effectiveness of the proposed method, which can effectively reduce the annotation burden of ophthalmologists in OCT images.
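A generic sketch of knowledge-distillation-based anomaly localization of the kind referenced above: a student is trained to match a frozen teacher's features on normal scans only, and at test time the per-pixel feature discrepancy serves as the anomaly (biomarker) map. The tiny networks and the squared-error discrepancy are placeholders, not the TSSK-Net design.

```python
# Teacher-student anomaly localization: the student mimics the teacher on
# normal OCT images only, so regions it cannot reproduce (large feature
# discrepancy) are flagged as anomalous / biomarker candidates.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1)).eval()
student = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 16, 3, padding=1))

def distillation_loss(x_normal: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        t = teacher(x_normal)           # frozen teacher features
    return ((student(x_normal) - t) ** 2).mean()

def anomaly_map(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return ((student(x) - teacher(x)) ** 2).mean(dim=1)  # per-pixel discrepancy

x = torch.randn(1, 1, 128, 128)          # dummy OCT B-scan
print(distillation_loss(x).item(), anomaly_map(x).shape)
```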
15. Weakly-supervised localization and classification of biomarkers in OCT images with integrated reconstruction and attention. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104213.
16. Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. DOI: 10.1016/j.bbe.2022.12.005.
17. Holz FG, Abreu-Gonzalez R, Bandello F, Duval R, O'Toole L, Pauleikhoff D, Staurenghi G, Wolf A, Lorand D, Clemens A, Gmeiner B. Does real-time artificial intelligence-based visual pathology enhancement of three-dimensional optical coherence tomography scans optimise treatment decision in patients with nAMD? Rationale and design of the RAZORBILL study. Br J Ophthalmol 2023; 107:96-101. PMID: 34362776; PMCID: PMC9763175; DOI: 10.1136/bjophthalmol-2021-319211.
Abstract
BACKGROUND/RATIONALE: Artificial intelligence (AI)-based clinical decision support tools, being developed across multiple fields in medicine, need to be evaluated for their impact on the treatment and outcomes of patients as well as on optimisation of the clinical workflow. The RAZORBILL study will investigate the impact of advanced AI segmentation algorithms on disease activity assessment in patients with neovascular age-related macular degeneration (nAMD) by enriching three-dimensional (3D) retinal optical coherence tomography (OCT) scans with automated fluid and layer quantification measurements.
METHODS: RAZORBILL is an observational, multicentre, multinational, open-label study comprising two phases: (a) clinical data collection (phase I): an observational study design, which enforces neither a strict visit schedule nor a mandated treatment regimen, was chosen as appropriate to collect data in a real-world clinical setting and enable evaluation in phase II; and (b) OCT enrichment analysis (phase II): de-identified 3D OCT scans will be evaluated for disease activity. Within this evaluation, investigators will review the scans once enriched with segmentation results (i.e., highlighted and quantified pathological fluid volumes) and once in their original (i.e., non-enriched) state. This review will be performed using an integrated crossover design, where investigators serve as their own controls, allowing the analysis to account for differences in expertise and individual disease activity definitions.
CONCLUSIONS: In order to apply novel AI tools to routine clinical care, their benefit as well as their operational feasibility need to be carefully investigated. RAZORBILL will inform on the value of AI-based clinical decision support tools. It will clarify whether these can be implemented in clinical treatment of patients with nAMD and whether they allow for optimisation of individualised treatment in routine clinical care.
Affiliation(s)
- Frank G Holz: Department of Ophthalmology, University of Bonn, Bonn, Germany
- Rodrigo Abreu-Gonzalez: Department of Ophthalmology, University Hospital of La Candelaria, Santa Cruz de Tenerife, Spain
- Francesco Bandello: Department of Ophthalmology, University Vita-Salute, Scientific Institute San Raffaele, University Vita Salute Hospital San Raffaele, Milano, Italy
- Renaud Duval: Department of Ophthalmology, Maisonneuve-Rosemont Hospital Research Centre, University of Montreal, Montreal, Quebec, Canada
- Louise O'Toole: Department of Ophthalmology, Bon Secours Hospital Dublin, Dublin, Ireland
- Giovanni Staurenghi: Dipartimento di Scienze Cliniche Luigi Sacco, Eye Clinic, University of Milan, Milan, Italy
- Armin Wolf: Department of Ophthalmology, University of Ulm, Ulm, Germany
- Andreas Clemens: Novartis Pharma AG, Basel, Switzerland; Department of Cardiology and Angiology I, Heart Center Freiburg University, Faculty of Medicine, University of Freiburg, Freiburg im Breisgau, Germany
18. Liu X, Zhou K, Yao J, Wang M, Zhang Y. Contrastive uncertainty based biomarkers detection in retinal optical coherence tomography images. Phys Med Biol 2022; 67. PMID: 36384040; DOI: 10.1088/1361-6560/aca376.
Abstract
Objective. Retinal biomarkers in optical coherence tomography (OCT) images play a key guiding role in the follow-up diagnosis and clinical treatment of eye diseases. Although many deep learning methods exist to automatically process retinal biomarkers, their detection remains a great challenge due to characteristics similar to normal tissue, large changes in size and shape, and the fuzzy boundaries of different types of biomarkers. To overcome these challenges, a novel contrastive uncertainty network (CUNet) is proposed for retinal biomarker detection in OCT images.
Approach. In CUNet, proposal contrastive learning is designed to enhance the feature representation of retinal biomarkers, aiming to boost the network's ability to discriminate between different types of retinal biomarkers. Furthermore, we propose bounding box uncertainty and combine it with traditional bounding box regression, thereby improving the sensitivity of the network to the fuzzy boundaries of retinal biomarkers and obtaining better localization results.
Main results. Comprehensive experiments were conducted to evaluate the performance of the proposed CUNet. The experimental results on two datasets show that our proposed method achieves good detection performance compared with other detection methods.
Significance. We propose a method for retinal biomarker detection trained with bounding box labels. Proposal contrastive learning and bounding box uncertainty are used to improve the detection of retinal biomarkers. The method is designed to help reduce the amount of work doctors have to do to detect retinal diseases.
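One common way to attach uncertainty to bounding-box regression, in the spirit of the approach above, is to predict a variance for each box coordinate and train with a Gaussian negative log-likelihood; the sketch below shows that generic formulation and is not the paper's specific loss.

```python
# Bounding-box regression with per-coordinate uncertainty: the head predicts a
# mean offset and a log-variance for each of the 4 box coordinates, trained
# with a Gaussian negative log-likelihood so fuzzy boundaries get high variance.
import torch
import torch.nn as nn

class UncertainBoxHead(nn.Module):
    def __init__(self, in_dim: int = 256):
        super().__init__()
        self.mean = nn.Linear(in_dim, 4)      # (dx, dy, dw, dh)
        self.log_var = nn.Linear(in_dim, 4)   # predicted log-variance per coord

    def forward(self, feats):
        return self.mean(feats), self.log_var(feats)

def box_nll_loss(pred_mean, pred_log_var, target):
    # 0.5 * [ (t - mu)^2 / sigma^2 + log sigma^2 ]
    return (0.5 * ((target - pred_mean) ** 2 * torch.exp(-pred_log_var)
                   + pred_log_var)).mean()

head = UncertainBoxHead()
feats = torch.randn(8, 256)                  # 8 region proposals (dummy features)
target = torch.randn(8, 4)                   # ground-truth regression targets
mean, log_var = head(feats)
print(box_nll_loss(mean, log_var, target).item())
```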
Affiliation(s)
- Xiaoming Liu: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, 430065, People's Republic of China
- Kejie Zhou: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China; Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System, Wuhan, 430065, People's Republic of China
- Junping Yao: Department of Ophthalmology, Tianyou Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, People's Republic of China
- Man Wang: Wuhan Aier Eye Hospital of Wuhan University, Wuhan, People's Republic of China
- Ying Zhang: Wuhan Aier Eye Hospital of Wuhan University, Wuhan, People's Republic of China
19. Andresen J, Kepp T, Ehrhardt J, Burchard CVD, Roider J, Handels H. Deep learning-based simultaneous registration and unsupervised non-correspondence segmentation of medical images with pathologies. Int J Comput Assist Radiol Surg 2022; 17:699-710. PMID: 35239133; PMCID: PMC8948150; DOI: 10.1007/s11548-022-02577-4.
Abstract
Purpose: The registration of medical images often suffers from missing correspondences due to inter-patient variations, pathologies and their progression leading to implausible deformations that cause misregistrations and might eliminate valuable information. Detecting non-corresponding regions simultaneously with the registration process helps generating better deformations and has been investigated thoroughly with classical iterative frameworks but rarely with deep learning-based methods.
Methods: We present the joint non-correspondence segmentation and image registration network (NCR-Net), a convolutional neural network (CNN) trained on a Mumford–Shah-like functional, transferring the classical approach to the field of deep learning. NCR-Net consists of one encoding and two decoding parts allowing the network to simultaneously generate diffeomorphic deformations and segment non-correspondences. The loss function is composed of a masked image distance measure and regularization of deformation field and segmentation output. Additionally, anatomical labels are used for weak supervision of the registration task. No manual segmentations of non-correspondences are required.
Results: The proposed network is evaluated on the publicly available LPBA40 dataset with artificially added stroke lesions and a longitudinal optical coherence tomography (OCT) dataset of patients with age-related macular degeneration. The LPBA40 data are used to quantitatively assess the segmentation performance of the network, and it is shown qualitatively that NCR-Net can be used for the unsupervised segmentation of pathologies in OCT images. Furthermore, NCR-Net is compared to a registration-only network and state-of-the-art registration algorithms showing that NCR-Net achieves competitive performance and superior robustness to non-correspondences.
Conclusion: NCR-Net, a CNN for simultaneous image registration and unsupervised non-correspondence segmentation, is presented. Experimental results show the network's ability to segment non-correspondence regions in an unsupervised manner and its robust registration performance even in the presence of large pathologies.
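The loss described above (a masked image distance that ignores non-corresponding regions, plus regularization of the deformation field and the segmentation output) can be sketched generically; the specific distance, regularizers, and weights below are assumptions, not NCR-Net's exact functional.

```python
# Mumford-Shah-flavoured joint loss: the image distance is masked by the
# predicted non-correspondence segmentation, and both the deformation field and
# the mask are regularized so the network cannot explain everything away as
# "non-corresponding".
import torch

def ncr_style_loss(warped, fixed, noncorr_mask, displacement,
                   w_smooth=1.0, w_mask=0.1):
    corr = 1.0 - noncorr_mask                                  # corresponding pixels
    data_term = (corr * (warped - fixed) ** 2).mean()          # masked image distance
    grads = torch.gradient(displacement, dim=(-2, -1))         # spatial gradients
    smooth_term = sum((g ** 2).mean() for g in grads)          # smooth deformation
    mask_term = noncorr_mask.abs().mean()                      # penalize large masks
    return data_term + w_smooth * smooth_term + w_mask * mask_term

warped = torch.rand(1, 1, 64, 64)        # moving image after warping (dummy)
fixed = torch.rand(1, 1, 64, 64)
mask = torch.rand(1, 1, 64, 64)          # predicted non-correspondence in [0, 1]
disp = torch.rand(1, 2, 64, 64)          # 2D displacement field (dummy)
print(ncr_style_loss(warped, fixed, mask, disp).item())
```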
Affiliation(s)
- Julia Andresen: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Timo Kepp: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Jan Ehrhardt: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany; German Research Center for Artificial Intelligence, Lübeck, Germany
- Johann Roider: Department of Ophthalmology, Christian-Albrechts-University of Kiel, Kiel, Germany
- Heinz Handels: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany; German Research Center for Artificial Intelligence, Lübeck, Germany
20
Gutfleisch M, Ester O, Aydin S, Quassowski M, Spital G, Lommatzsch A, Rothaus K, Dubis AM, Pauleikhoff D. Clinically applicable deep learning-based decision aids for treatment of neovascular AMD. Graefes Arch Clin Exp Ophthalmol 2022; 260:2217-2230. DOI: 10.1007/s00417-022-05565-1.
21
Clinical Validation of Saliency Maps for Understanding Deep Neural Networks in Ophthalmology. Med Image Anal 2022; 77:102364. DOI: 10.1016/j.media.2022.102364.
22
Kalra G, Kar SS, Sevgi DD, Madabhushi A, Srivastava SK, Ehlers JP. Quantitative Imaging Biomarkers in Age-Related Macular Degeneration and Diabetic Eye Disease: A Step Closer to Precision Medicine. J Pers Med 2021; 11:1161. PMID: 34834513; PMCID: PMC8622761; DOI: 10.3390/jpm11111161.
Abstract
The management of retinal diseases relies heavily on digital imaging data, including optical coherence tomography (OCT) and fluorescein angiography (FA). Targeted feature extraction and the objective quantification of features provide important opportunities in biomarker discovery, disease burden assessment, and prediction of treatment response. Additional advantages include increased objectivity in interpretation, longitudinal tracking, and the ability to incorporate computational models into automated diagnostic and clinical decision support systems. Advances in computational technology, including deep learning and radiomics, open new doors for developing an imaging phenotype that may provide in-depth personalized disease characterization and enhance opportunities in precision medicine. In this review, we summarize the quantitative and radiomic imaging biomarkers described in the literature for age-related macular degeneration and diabetic eye disease using imaging modalities such as OCT, FA, and OCT angiography (OCTA). Approaches that use artificial intelligence and deep learning to identify and extract these biomarkers are also summarized. These quantifiable biomarkers and automated approaches open new frontiers of personalized medicine, in which treatments are tailored to patient-specific, longitudinally trackable biomarkers and response monitoring can be achieved with a high degree of accuracy.
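As a loose illustration of what "objective quantification of features" can look like in practice, the sketch below computes a few simple intensity and geometry statistics from a hypothetical binary lesion mask over an OCT B-scan; the function name, feature names, and the pixel area are assumptions for illustration, not values or methods taken from the review.

```python
import numpy as np

def simple_lesion_features(bscan, lesion_mask, pixel_area_um2=15.0):
    """Toy quantitative features from one B-scan and a binary lesion mask."""
    pixels = lesion_mask.astype(bool)
    if not pixels.any():
        return {"area_um2": 0.0, "mean_intensity": np.nan, "intensity_std": np.nan}
    values = bscan[pixels]
    return {
        "area_um2": float(pixels.sum() * pixel_area_um2),  # lesion area
        "mean_intensity": float(values.mean()),            # average reflectivity
        "intensity_std": float(values.std()),              # crude texture proxy
    }

# Toy usage with random data standing in for a B-scan and its segmentation.
rng = np.random.default_rng(0)
bscan = rng.random((496, 512))
mask = np.zeros_like(bscan, dtype=bool)
mask[200:240, 100:180] = True
print(simple_lesion_features(bscan, mask))
```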
Affiliation(s)
- Gagan Kalra: Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA; Tony and Leona Campane Center for Excellence in Image-Guided Surgery & Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Sudeshna Sil Kar: Tony and Leona Campane Center for Excellence in Image-Guided Surgery & Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA; Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Duriye Damla Sevgi: Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA; Tony and Leona Campane Center for Excellence in Image-Guided Surgery & Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH 44106, USA
- Sunil K. Srivastava: Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA; Tony and Leona Campane Center for Excellence in Image-Guided Surgery & Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Justis P. Ehlers: Cole Eye Institute, Cleveland Clinic, Cleveland, OH 44195, USA; Tony and Leona Campane Center for Excellence in Image-Guided Surgery & Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
23
Mantel I, Mosinska A, Bergin C, Polito MS, Guidotti J, Apostolopoulos S, Ciller C, De Zanet S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning. Transl Vis Sci Technol 2021; 10:17. PMID: 34003996; PMCID: PMC8083067; DOI: 10.1167/tvst.10.4.17.
Abstract
Purpose To develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach. Methods One hundred seven spectral-domain optical coherence tomography (OCT) cube volumes were extracted from nAMD eyes. Manual annotation of IRF, SRF, and PED was performed. Ninety-two OCT volumes served as the training and validation set, and 15 OCT volumes from different patients served as the test set. The performance of the fluid segmentation method was quantified by means of pixel-wise metrics and volume correlations and compared to other methods. Repeatability was tested on 42 other eyes with five OCT volume scans acquired on the same day. Results The fully automated algorithm achieved good performance for the detection of IRF, SRF, and PED. The area under the curve for detection, sensitivity, and specificity was 0.97, 0.95, and 0.99, respectively. The correlation coefficients for the fluid volumes were 0.99, 0.99, and 0.91, respectively. The Dice score was 0.73, 0.67, and 0.82, respectively. For the largest volume quartiles, Dice scores were >0.90. Including retinal layer segmentation contributed positively to the performance. The repeatability of volume prediction showed standard deviations of 4.0 nL, 3.5 nL, and 20.0 nL for IRF, SRF, and PED, respectively. Conclusions The deep-learning algorithm can simultaneously achieve a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and squeeze-and-excite blocks in the network architecture were shown to boost performance. Translational Relevance Potential applications include measurements of specific fluid compartments with high reproducibility, assistance in treatment decisions, and the diagnostic or scientific evaluation of relevant subgroups.
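Since the abstract reports Dice scores and fluid volumes in nanolitres, here is a minimal, generic sketch of how such metrics are commonly computed from binary masks. It is not the authors' evaluation code, and the voxel volume used to convert voxel counts to nanolitres is a made-up placeholder.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice overlap between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

def fluid_volume_nl(mask, voxel_volume_nl=2.4e-4):
    """Fluid volume in nanolitres from a binary voxel mask (voxel size assumed)."""
    return mask.sum() * voxel_volume_nl

# Toy usage: random volumes standing in for predicted and manually annotated IRF masks.
rng = np.random.default_rng(1)
pred = rng.random((49, 128, 128)) > 0.7
manual = rng.random((49, 128, 128)) > 0.7
print(f"Dice: {dice_score(pred, manual):.3f}, volume: {fluid_volume_nl(pred):.1f} nL")
```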
Affiliation(s)
- Irmela Mantel: Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Ciara Bergin: Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Maria Sole Polito: Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jacopo Guidotti: Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
24
Schmidt-Erfurth U, Reiter GS, Riedl S, Seeböck P, Vogl WD, Blodi BA, Domalpally A, Fawzi A, Jia Y, Sarraf D, Bogunović H. AI-based monitoring of retinal fluid in disease activity and under therapy. Prog Retin Eye Res 2021; 86:100972. PMID: 34166808; DOI: 10.1016/j.preteyeres.2021.100972.
Abstract
Retinal fluid, the major biomarker in exudative macular disease, is accurately visualized by high-resolution three-dimensional optical coherence tomography (OCT), which is used worldwide as a diagnostic gold standard, largely replacing clinical examination. Artificial intelligence (AI), with its capability to objectively identify, localize, and quantify fluid, introduces fully automated tools into OCT imaging for personalized disease management. Deep learning performance has already proven superior to human experts, including physicians and certified readers, in terms of accuracy and speed. Reproducible measurement of retinal fluid relies on precise AI-based segmentation methods that assign a label to each OCT voxel denoting its fluid type, such as intraretinal fluid (IRF), subretinal fluid (SRF), or pigment epithelial detachment (PED), and its location within the central 1-, 3-, and 6-mm macular area. Such reliable analysis is most relevant for reflecting differences in pathophysiological mechanisms, impacts on retinal function, and the dynamics of fluid resolution during therapy with different regimens and substances. Yet an in-depth understanding of the mode of action of supervised and unsupervised learning, the functionality of a convolutional neural network (CNN), and various network architectures is needed. Greater insight into adequate methods for performance and validation assessment, and into device- and scanning-pattern-dependent variation, is necessary to empower ophthalmologists to become qualified AI users. Fluid/function correlation can lead to a better definition of valid fluid variables relevant for optimal outcomes on an individual and a population level. AI-based fluid analysis opens the way for precision medicine in the real-world practice of the leading retinal diseases of modern times.
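To make the voxel-labelling idea concrete, the sketch below aggregates a hypothetical per-voxel fluid segmentation (integer labels for IRF, SRF, and PED) into volumes inside central 1-, 3-, and 6-mm circles around an assumed fovea position. The label codes, scan geometry, pixel spacing, and voxel volume are all assumptions made for illustration, not values from the review.

```python
import numpy as np

# Assumed label codes in the segmentation volume, ordered (B-scans, A-scans, depth);
# only the en-face position of a voxel decides which ring it belongs to.
IRF, SRF, PED = 1, 2, 3

def fluid_volumes_by_ring(labels, fovea_yx, mm_per_px=(0.12, 0.023),
                          voxel_volume_nl=2.4e-4, diameters_mm=(1.0, 3.0, 6.0)):
    """Sum fluid voxels per label inside central rings of given diameters (toy geometry)."""
    n_bscans, n_ascans, _ = labels.shape
    y = (np.arange(n_bscans) - fovea_yx[0]) * mm_per_px[0]
    x = (np.arange(n_ascans) - fovea_yx[1]) * mm_per_px[1]
    dist = np.sqrt(y[:, None] ** 2 + x[None, :] ** 2)   # en-face distance to fovea (mm)
    out = {}
    for d in diameters_mm:
        inside = dist <= d / 2.0                         # (B-scan, A-scan) membership mask
        for name, code in (("IRF", IRF), ("SRF", SRF), ("PED", PED)):
            count = ((labels == code) & inside[:, :, None]).sum()
            out[f"{name}_{d:g}mm_nl"] = count * voxel_volume_nl
    return out

# Toy usage with a random label volume (49 B-scans x 256 A-scans x 64 depth samples).
rng = np.random.default_rng(2)
labels = rng.integers(0, 4, size=(49, 256, 64))
print(fluid_volumes_by_ring(labels, fovea_yx=(24, 128)))
```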
Affiliation(s)
- Ursula Schmidt-Erfurth: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Gregor S Reiter: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Sophie Riedl: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Wolf-Dieter Vogl: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Barbara A Blodi: Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Amitha Domalpally: Fundus Photograph Reading Center, Department of Ophthalmology and Visual Sciences, University of Wisconsin, Madison, WI, USA
- Amani Fawzi: Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Yali Jia: Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, OR, USA
- David Sarraf: Stein Eye Institute, University of California Los Angeles, Los Angeles, CA, USA
- Hrvoje Bogunović: Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
25
Szeskin A, Yehuda R, Shmueli O, Levy J, Joskowicz L. A column-based deep learning method for the detection and quantification of atrophy associated with AMD in OCT scans. Med Image Anal 2021; 72:102130. PMID: 34198041; DOI: 10.1016/j.media.2021.102130.
Abstract
The objective quantification of retinal atrophy associated with age-related macular degeneration (AMD) is required for clinical diagnosis, follow-up, treatment efficacy evaluation, and clinical research. Spectral-domain optical coherence tomography (OCT) has become an essential imaging technology for evaluating the macula. This paper describes a novel automatic method for the identification and quantification of atrophy associated with AMD in OCT scans and its visualization in the corresponding infrared (IR) image. The method uses a custom column-based convolutional neural network (CNN) to classify the light-scattering patterns of vertical, pixel-wide columns (A-scans) in the OCT slices (B-scans) in which atrophy appears. The network classifies individual columns from 3D column patches formed by adjacent columns of the volumetric OCT scan. Consecutive atrophy columns form atrophy segments, which are projected onto the IR image and used to identify and segment atrophy lesions and to measure their areas and distances from the fovea. Experimental results on 106 clinical OCT scans (5,207 slices), in which cRORA atrophy (the end point of advanced dry AMD) was identified in 2,952 atrophy segments and 1,046 atrophy lesions, yield a mean F1 score of 0.78 (SD 0.06) and an AUC of 0.937, both close to the observer variability. Automated detection and quantification of atrophy associated with AMD using column-based CNN classification in OCT scans can be performed at expert level and may be a useful clinical decision support and research tool for the diagnosis, follow-up, and treatment of retinal degenerations and dystrophies.
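The column-patch idea (classifying each A-scan together with a few neighbouring columns from adjacent B-scans as context) can be sketched roughly as below; the patch width, neighbourhood size, and array layout are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def extract_column_patch(volume, b_idx, a_idx, half_width=2, half_depth=1):
    """Return a small 3D patch of neighbouring A-scan columns around one column.

    The volume is assumed to be ordered (B-scans, A-scans, depth); the patch keeps
    the full depth of each column and pads with edge values at the volume border."""
    padded = np.pad(volume,
                    ((half_depth, half_depth), (half_width, half_width), (0, 0)),
                    mode="edge")
    b0, a0 = b_idx + half_depth, a_idx + half_width
    return padded[b0 - half_depth:b0 + half_depth + 1,
                  a0 - half_width:a0 + half_width + 1, :]

# Toy usage: one patch of shape (3, 5, 496) around the column at B-scan 10, A-scan 200.
rng = np.random.default_rng(3)
oct_volume = rng.random((97, 512, 496))
patch = extract_column_patch(oct_volume, b_idx=10, a_idx=200)
print(patch.shape)  # (3, 5, 496)
```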
Affiliation(s)
- Adi Szeskin: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
- Roei Yehuda: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
- Or Shmueli: Department of Ophthalmology, Hadassah Medical Center, Jerusalem, Israel
- Jaime Levy: Department of Ophthalmology, Hadassah Medical Center, Jerusalem, Israel
- Leo Joskowicz: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
26
Hsiao T, Ho Y, Chen M, Lee S, Sun C. Disease activation maps for subgingival dental calculus identification based on intelligent dental optical coherence tomography. Transl Biophotonics 2021. DOI: 10.1002/tbio.202100001.
Affiliation(s)
- Tien-Yu Hsiao: Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
- Yi-Ching Ho: School of Dentistry, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC; Department of Stomatology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Mei-Ru Chen: Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
- Shyh-Yuan Lee: School of Dentistry, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC; Department of Stomatology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Department of Dentistry, Yangming Branch of Taipei City Hospital, Taipei, Taiwan, ROC
- Chia-Wei Sun: Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu City, Taiwan, ROC
27
Gallardo M, Munk MR, Kurmann T, De Zanet S, Mosinska A, Karagoz IK, Zinkernagel MS, Wolf S, Sznitman R. Machine learning can predict anti-VEGF treatment demand in a Treat-and-Extend regimen for patients with nAMD, DME and RVO associated ME. Ophthalmol Retina 2021; 5:604-624. DOI: 10.1016/j.oret.2021.05.002.
28
Assessment of patient specific information in the wild on fundus photography and optical coherence tomography. Sci Rep 2021; 11:8621. PMID: 33883573; PMCID: PMC8060417; DOI: 10.1038/s41598-021-86577-5.
Abstract
In this paper we analyse the performance of machine learning methods in predicting patient information such as age or sex solely from retinal imaging modalities in a heterogeneous clinical population. Our dataset consists of N = 135,667 fundus images and N = 85,536 volumetric OCT scans. Deep learning models were trained to predict the patient's age and sex from fundus images, OCT cross sections, and OCT volumes. For sex prediction, an ROC AUC of 0.80 was achieved for fundus images, 0.84 for OCT cross sections, and 0.90 for OCT volumes. Age prediction mean absolute errors of 6.328 years for fundus images, 5.625 years for OCT cross sections, and 4.541 years for OCT volumes were observed. We assess performance on OCT scans containing different biomarkers and note a peak performance of AUC = 0.88 for OCT cross sections and 0.95 for volumes when no pathology is present on the scans. Performance drops when drusen, fibrovascular pigment epithelium detachment, or geographic atrophy are present. We conclude that deep learning-based methods are capable of predicting a patient's sex and age from color fundus photography and OCT for a broad spectrum of patients, irrespective of underlying disease or image quality. Non-random sex prediction from fundus images appears possible only if the fovea and optic disc are visible.
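For the reported metrics (ROC AUC for sex classification and mean absolute error in years for age regression), a generic evaluation sketch with scikit-learn looks like the following; the predictions and labels are random placeholders and do not reproduce the study's results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, mean_absolute_error

rng = np.random.default_rng(4)

# Placeholder model outputs for 1,000 scans: a sex probability and a predicted age.
sex_true = rng.integers(0, 2, size=1000)
sex_prob = np.clip(sex_true * 0.6 + rng.normal(0.2, 0.2, size=1000), 0, 1)
age_true = rng.uniform(40, 90, size=1000)
age_pred = age_true + rng.normal(0, 5.0, size=1000)

print(f"Sex ROC AUC: {roc_auc_score(sex_true, sex_prob):.2f}")
print(f"Age MAE: {mean_absolute_error(age_true, age_pred):.2f} years")
```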
29
Apostolopoulos S, Salas J, Ordóñez JLP, Tan SS, Ciller C, Ebneter A, Zinkernagel M, Sznitman R, Wolf S, De Zanet S, Munk MR. Automatically Enhanced OCT Scans of the Retina: A proof of concept study. Sci Rep 2020; 10:7819. PMID: 32385371; PMCID: PMC7210925; DOI: 10.1038/s41598-020-64724-8.
Abstract
In this work we evaluated customized, automatic post-processing software for retinal OCT B-scan enhancement, providing noise reduction, contrast enhancement, and improved depth quality, applicable to Heidelberg Engineering Spectralis OCT devices. A trained deep neural network was used to process images from an OCT dataset with ground-truth biomarker gradings. Performance was assessed by two expert graders, who rated image quality per B-scan and showed a clear preference for enhanced over original images. Objective measures such as SNR and noise estimation showed a significant improvement in quality. Presence grading of seven biomarkers (IRF, SRF, ERM, drusen, RPD, GA, and iRORA) resulted in similar intergrader agreement. Intergrader agreement improved for IRF and RPD, while disagreement remained for high-variance biomarkers such as GA and iRORA.
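The abstract mentions objective quality measures such as SNR and noise estimation; a crude, illustrative way to compare an original and an enhanced B-scan is shown below. It uses the top image rows as an assumed background (noise-only) region and a high intensity quantile as the signal level, which is not necessarily how the study computed SNR.

```python
import numpy as np

def rough_snr_db(bscan, background_rows=30):
    """Crude SNR estimate: a high-intensity quantile (signal proxy) over the standard
    deviation of an assumed background strip at the top of the B-scan, in decibels."""
    background = bscan[:background_rows, :]
    signal = np.quantile(bscan, 0.95)          # proxy for the tissue signal level
    noise = background.std() + 1e-8
    return 20.0 * np.log10(signal / noise)

# Toy usage: the "enhanced" scan is simply a smoothed copy of the original.
rng = np.random.default_rng(5)
original = rng.normal(0.3, 0.1, size=(496, 512)).clip(0, 1)
enhanced = (original + np.roll(original, 1, axis=1)) / 2.0   # naive denoising stand-in
print(f"original: {rough_snr_db(original):.1f} dB, enhanced: {rough_snr_db(enhanced):.1f} dB")
```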
Affiliation(s)
- Jazmín Salas: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- José L P Ordóñez: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Andreas Ebneter: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Martin Zinkernagel: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Sebastian Wolf: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
- Marion R Munk: Department of Ophthalmology, Inselspital, University Hospital, University of Bern, Bern, Switzerland
30
Berens P, Waldstein SM, Ayhan MS, Kümmerle L, Agostini H, Stahl A, Ziemssen F. Potenzial von Methoden der künstlichen Intelligenz für die Qualitätssicherung [Potential of artificial intelligence methods for quality assurance]. Ophthalmologe 2020; 117:320-325. DOI: 10.1007/s00347-020-01063-z.
31
Yu S, Rückert R, Munk MR. Treat-and-extend regimens with anti-vascular endothelial growth factor agents in age-related macular degeneration. Expert Rev Ophthalmol 2019. DOI: 10.1080/17469899.2019.1698948.
Affiliation(s)
- Siqing Yu: Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Marion R. Munk: Department of Ophthalmology and Department of Clinical Research, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Feinberg School of Medicine, Northwestern University, Chicago, IL, USA