1. Bell J, Whitney J, Cetin H, Le T, Cardwell N, Srivastava SK, Ehlers JP. Validation of Inter-Reader Agreement/Consistency for Quantification of Ellipsoid Zone Integrity and Sub-RPE Compartmental Features Across Retinal Diseases. Diagnostics (Basel). 2024;14:2395. [PMID: 39518363; PMCID: PMC11545794; DOI: 10.3390/diagnostics14212395]
Abstract
BACKGROUND An unmet need exists when clinically assessing retinal and layer-based features of retinal diseases. Therefore, quantification of retinal layer thicknesses and fluid volumes using deep-learning-augmented platforms that reproduce human-obtained clinical measurements is needed. METHODS In this analysis, 210 spectral-domain optical coherence tomography (SD-OCT) scans (30 without pathology, 60 dry age-related macular degeneration [AMD], 60 wet AMD, and 60 diabetic macular edema [total 23,625 B-scans]) were included. A fully automated segmentation platform segmented four retinal layers for compartmental assessment (internal limiting membrane, ellipsoid zone [EZ], retinal pigment epithelium [RPE], and Bruch's membrane). Two certified OCT readers independently completed manual segmentation and B-scan-level validation of the automated segmentation, with segmentation correction when needed (semi-automated). Certified reader metrics were compared to gold-standard metrics using intraclass correlation coefficients (ICCs) to assess overall agreement. Across different diseases, several metrics generated from automated segmentations approached or matched human reader performance. RESULTS Absolute ICCs for retinal mean thickness measurements showed excellent agreement (range 0.980-0.999) across all four cohorts. EZ-RPE thickness values and sub-RPE compartment ICCs demonstrated excellent agreement (ranges of 0.953-0.987 and 0.944-0.997, respectively) for the full dataset, dry-AMD, and wet-AMD cohorts. CONCLUSIONS These analyses demonstrated high reliability and consistency of segmentation of outer retinal compartmental features using either a completely human/manual approach or a semi-automated approach to segmentation. These results support the critical role that measuring features such as photoreceptor preservation through EZ integrity may play in future clinical trials and in optimizing clinical care.
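The agreement analysis above hinges on the intraclass correlation coefficient. As a rough illustration (not the authors' analysis code), the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_targets, k_raters) array of measurements, e.g. one row per
    scan and one column per certified reader.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Two-way ANOVA sum-of-squares decomposition
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-targets mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement between two readers yields an ICC of 1.0; systematic or random disagreement pulls the absolute-agreement form below 1.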
Affiliation(s)
- Jordan Bell: Cleveland Clinic Lerner College of Medicine Program, Case Western Reserve University, Cleveland, OH 44106, USA; The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Jon Whitney: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Hasan Cetin: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Thuy Le: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Nicole Cardwell: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA
- Sunil K. Srivastava: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA; Vitreoretinal Service, Cole Eye Institute, Cleveland, OH 44195, USA
- Justis P. Ehlers: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH 44195, USA; Vitreoretinal Service, Cole Eye Institute, Cleveland, OH 44195, USA
2. Tonti E, Tonti S, Mancini F, Bonini C, Spadea L, D’Esposito F, Gagliano C, Musa M, Zeppieri M. Artificial Intelligence and Advanced Technology in Glaucoma: A Review. J Pers Med. 2024;14:1062. [PMID: 39452568; PMCID: PMC11508556; DOI: 10.3390/jpm14101062]
Abstract
BACKGROUND Glaucoma is a leading cause of irreversible blindness worldwide, necessitating precise management strategies tailored to individual patient characteristics. Artificial intelligence (AI) holds promise in revolutionizing the approach to glaucoma care by providing personalized interventions. AIM This review explores the current landscape of AI applications in the personalized management of glaucoma patients, highlighting advancements, challenges, and future directions. METHODS A systematic search of electronic databases, including PubMed, Scopus, and Web of Science, was conducted to identify relevant studies published up to 2024. Studies exploring the use of AI techniques in personalized management strategies for glaucoma patients were included. RESULTS The review identified diverse AI applications in glaucoma management, ranging from early detection and diagnosis to treatment optimization and prognosis prediction. Machine learning algorithms, particularly deep learning models, demonstrated high accuracy in diagnosing glaucoma from various imaging modalities such as optical coherence tomography (OCT) and visual field tests. AI-driven risk stratification tools facilitated personalized treatment decisions by integrating patient-specific data with predictive analytics, enhancing therapeutic outcomes while minimizing adverse effects. Moreover, AI-based teleophthalmology platforms enabled remote monitoring and timely intervention, improving patient access to specialized care. CONCLUSIONS Integrating AI technologies in the personalized management of glaucoma patients holds immense potential for optimizing clinical decision-making, enhancing treatment efficacy, and mitigating disease progression. However, challenges such as data heterogeneity, model interpretability, and regulatory concerns warrant further investigation. 
Future research should focus on refining AI algorithms, validating their clinical utility through large-scale prospective studies, and ensuring seamless integration into routine clinical practice to realize the full benefits of personalized glaucoma care.
Affiliation(s)
- Emanuele Tonti: UOC Ophthalmology, Sant’Eugenio Hospital, 00144 Rome, Italy
- Sofia Tonti: Biomedical Engineering, Politecnico di Torino, 10129 Turin, Italy
- Flavia Mancini: Eye Clinic, Policlinico Umberto I University Hospital, 00142 Rome, Italy
- Chiara Bonini: Eye Clinic, Policlinico Umberto I University Hospital, 00142 Rome, Italy
- Leopoldo Spadea: Eye Clinic, Policlinico Umberto I University Hospital, 00142 Rome, Italy
- Fabiana D’Esposito: Imperial College Ophthalmic Research Group (ICORG) Unit, Imperial College, 153-173 Marylebone Rd, London NW1 5QH, UK; Department of Neurosciences, Reproductive Sciences and Dentistry, University of Naples Federico II, Via Pansini 5, 80131 Napoli, Italy
- Caterina Gagliano: Department of Medicine and Surgery, University of Enna “Kore”, Piazza dell’Università, 94100 Enna, Italy; “G.B. Morgagni” Mediterranean Foundation, 95125 Catania, Italy
- Mutali Musa: Department of Optometry, University of Benin, Benin 300238, Nigeria
- Marco Zeppieri: Department of Ophthalmology, University Hospital of Udine, 33100 Udine, Italy
3. Verma PK, Kaur J. Systematic Review of Retinal Blood Vessels Segmentation Based on AI-driven Technique. J Imaging Inform Med. 2024;37:1783-1799. [PMID: 38438695; PMCID: PMC11300804; DOI: 10.1007/s10278-024-01010-3]
Abstract
Image segmentation is a crucial task in computer vision and image processing, with numerous segmentation algorithms found in the literature. It has important applications in scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others. The widespread popularity of deep learning (DL) and machine learning (ML) has inspired new methods for segmenting images using DL and ML models. We offer a thorough analysis of this recent literature, encompassing the range of ground-breaking initiatives in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid-based methods, recurrent networks, visual attention models, and generative models in adversarial settings. We study the connections, benefits, and importance of various DL- and ML-based segmentation models; examine the most popular datasets; and evaluate the results reported in this literature.
Affiliation(s)
- Prem Kumari Verma: Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India
- Jagdeep Kaur: Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, 144008, Punjab, India
4. Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed. 2024;250:108200. [PMID: 38677080; DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes appearances to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but adoption of machine and deep learning has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. Across all the modalities covered by this review, image harmonization improved AI performance, with gains of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
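Grayscale normalization, the simplest of the harmonization techniques surveyed above, can be sketched as a per-image z-score followed by min-max rescaling. This is an illustrative recipe, not a method taken from any specific reviewed paper:

```python
import numpy as np

def zscore_harmonize(image, eps=1e-8):
    """Per-image grayscale harmonization.

    Removes scanner-specific brightness (mean) and contrast (std) offsets,
    then rescales to [0, 1] so images from different devices are comparable.
    """
    z = (image - image.mean()) / (image.std() + eps)
    return (z - z.min()) / (z.max() - z.min() + eps)
```

Two scans of the same anatomy that differ only by a linear intensity shift (a common device effect) map to the same harmonized image under this transform.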
Affiliation(s)
- Silvia Seoni: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
5. Roubelat FP, Soler V, Varenne F, Gualino V. Real-world artificial intelligence-based interpretation of fundus imaging as part of an eyewear prescription renewal protocol. J Fr Ophtalmol. 2024;47:104130. [PMID: 38461084; DOI: 10.1016/j.jfo.2024.104130]
Abstract
OBJECTIVE To evaluate, in real-world conditions, the diagnostic accuracy of the Opthai® software for artificial intelligence-based detection of fundus image abnormalities in the context of the French eyewear prescription renewal protocol (RNO). METHODS A single-center, retrospective review of the sensitivity and specificity of the software in detecting fundus abnormalities among consecutive patients seen in our ophthalmology center under the RNO protocol from July 28 through October 22, 2021. We compared abnormalities detected by the software operated by ophthalmic technicians (index test) with diagnoses confirmed by the ophthalmologist following additional examinations and/or consultation (reference test). RESULTS The study included 2056 eyes/fundus images of 1028 patients aged 6-50 years. The software detected fundus abnormalities in 149 (7.2%) eyes, corresponding to 107 (10.4%) patients. After examining the same fundus images, the ophthalmologist detected abnormalities in 35 (1.7%) eyes, corresponding to 20 (1.9%) patients. The ophthalmologist did not detect abnormalities in any fundus image deemed normal by the software. The most frequent diagnoses made by the ophthalmologist were glaucoma suspect (0.5% of eyes), peripapillary atrophy (0.44% of eyes), and drusen (0.39% of eyes). The software showed an overall sensitivity of 100% (95% CI 0.879-1.00) and an overall specificity of 94.4% (95% CI 0.933-0.953). The majority of the false-positive software detections (5.6% of eyes) were glaucoma suspect, with the differential diagnosis of large physiological optic cups. Immediate OCT imaging by the technician allowed diagnosis by the ophthalmologist without a separate consultation for 43/53 (81%) patients. CONCLUSION Ophthalmic technicians can use this software for highly sensitive screening for fundus abnormalities that require evaluation by an ophthalmologist.
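From the counts reported above (2056 eyes, 149 flagged by the software, 35 confirmed abnormal, no missed cases), sensitivity and specificity follow directly. The sketch below uses a Wilson score interval, which is not necessarily the exact-CI method the authors used:

```python
import math

def sens_spec(tp, fp, tn, fn):
    """Sensitivity and specificity from a 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, total, z=1.96):
    """Approximate 95% Wilson score interval for a proportion."""
    p = successes / total
    denom = 1 + z ** 2 / total
    center = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return center - half, center + half

# Counts reconstructed from the abstract: 35 true positives, 0 false negatives,
# 149 - 35 = 114 false positives, 2056 - 149 = 1907 true negatives.
sens, spec = sens_spec(tp=35, fp=114, tn=1907, fn=0)
```

With these reconstructed counts, specificity works out to 1907/2021 ≈ 0.944, matching the reported 94.4%.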
Affiliation(s)
- F-P Roubelat: Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Soler: Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- F Varenne: Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital, Toulouse, France
- V Gualino: Ophthalmology Department, Clinique Honoré-Cave, Montauban, France
6. Kulyabin M, Zhdanov A, Nikiforova A, Stepichev A, Kuznetsova A, Ronkin M, Borisov V, Bogachev A, Korotkich S, Constable PA, Maier A. OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods. Sci Data. 2024;11:365. [PMID: 38605088; PMCID: PMC11009408; DOI: 10.1038/s41597-024-03182-7]
Abstract
Optical coherence tomography (OCT) is a non-invasive imaging technique with extensive clinical applications in ophthalmology. OCT enables the visualization of the retinal layers, playing a vital role in the early detection and monitoring of retinal diseases. OCT uses the principle of light wave interference to create detailed images of the retinal microstructures, making it a valuable tool for diagnosing ocular conditions. This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology. The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The images were acquired with an Optovue Avanti RTVue XR using raster scanning protocols with dynamic scan length and image resolution. Each retinal B-scan was acquired by centering on the fovea and was interpreted and cataloged by an experienced retinal specialist. In this work, we applied deep learning classification techniques to this new open-access dataset.
Affiliation(s)
- Mikhail Kulyabin: Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
- Aleksei Zhdanov: Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Anastasia Nikiforova: Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia; Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Andrey Stepichev: Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Anna Kuznetsova: Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Mikhail Ronkin: Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Vasilii Borisov: Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Alexander Bogachev: Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia; Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Sergey Korotkich: Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia; Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Paul A Constable: Flinders University, College of Nursing and Health Sciences, Caring Futures Institute, Adelaide, SA 5042, Australia
- Andreas Maier: Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
7. Seeböck P, Orlando JI, Michl M, Mai J, Schmidt-Erfurth U, Bogunović H. Anomaly guided segmentation: Introducing semantic context for lesion segmentation in retinal OCT using weak context supervision from anomaly detection. Med Image Anal. 2024;93:103104. [PMID: 38350222; DOI: 10.1016/j.media.2024.103104]
Abstract
Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring, and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in real-world practice, e.g., due to variability of lesion phenotypes, image quality, or disease appearance. While several techniques have been proposed to improve these models, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we show experimentally that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding the resulting output masks as an extra class for training the segmentation model. This provides additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach as an extra tool for improving lesion detection models.
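The "extra class" step described above amounts to relabeling pixels that the anomaly detector flags but no manual annotation covers. A minimal numpy sketch of that label-map merge (illustrative, not the authors' code):

```python
import numpy as np

def add_anomaly_class(lesion_mask, anomaly_mask):
    """Merge a binary anomaly mask into a lesion label map as an extra class.

    lesion_mask:  int array, 0 = background, 1..K = manually labeled lesions
    anomaly_mask: binary array produced by a separate anomaly detection model
    Returns a label map where anomalous-but-unlabeled pixels receive class
    K + 1, giving the segmentation model weak semantic context for free.
    """
    out = lesion_mask.copy()
    extra_class = lesion_mask.max() + 1
    out[(anomaly_mask > 0) & (lesion_mask == 0)] = extra_class
    return out
```

The merged map is then used as the training target; manually labeled lesion classes are left untouched, so the weak labels never override expert annotation.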
Affiliation(s)
- Philipp Seeböck: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Austria
- José Ignacio Orlando: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Yatiris Group at PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Martin Michl: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Julia Mai: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Ursula Schmidt-Erfurth: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Hrvoje Bogunović: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
8. Suciu CI, Marginean A, Suciu VI, Muntean GA, Nicoară SD. Diabetic Macular Edema Optical Coherence Tomography Biomarkers Detected with EfficientNetV2B1 and ConvNeXt. Diagnostics (Basel). 2023;14:76. [PMID: 38201384; PMCID: PMC10795694; DOI: 10.3390/diagnostics14010076]
Abstract
(1) Background: Diabetes mellitus (DM) is a growing challenge for patients and physicians alike in controlling its impact on health and preventing complications. Millions of patients with diabetes require medical attention, which strains the limited time available for screening and creates access difficulties for consultation and management. As a result, screening programs for vision-threatening complications of DM will have to become more efficient in order to cope with such a great healthcare burden. Diabetic macular edema (DME) is a severe complication of DM that can be prevented if it is screened in time with the help of optical coherence tomography (OCT) devices. Newly developed state-of-the-art artificial intelligence (AI) algorithms can assist physicians in analyzing large datasets and flagging potential risks. By using AI algorithms to process OCT images of large populations, screening capacity and speed can be increased so that patients can be treated promptly. This quick response gives physicians a chance to intervene and prevent disability. (2) Methods: This study evaluated ConvNeXt and EfficientNet architectures for correctly identifying DME patterns on real-life OCT images for screening purposes. (3) Results: First, we obtained models that differentiate between diabetic retinopathy (DR) and healthy scans with an accuracy of 0.98. Second, we obtained a model that can indicate the presence of edema, detachment of the subfoveolar neurosensory retina, and hyperreflective foci (HF) without using pixel-level annotation. Lastly, we analyzed the extent to which weights pretrained on natural images "understand" OCT scans. (4) Conclusions: Pretrained networks such as ConvNeXt or EfficientNet correctly identify features relevant to the differentiation between healthy retinas and DR, even though they were pretrained on natural images. Another important finding is that biomarker differentiation and localization can be obtained even without pixel-level annotation. The "three biomarkers model" is able to identify obvious subfoveal neurosensory detachments, retinal edema, and hyperreflective foci, as well as very small subfoveal detachments. In conclusion, our study points out the possible usefulness of AI-assisted diagnosis of DME for lowering healthcare costs, increasing the quality of life of patients with diabetes, and reducing the waiting time until an appropriate ophthalmological consultation and treatment can be performed.
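Localization from image-level labels alone, as reported above, is commonly achieved with class activation mapping (CAM). The abstract does not specify the authors' exact mechanism, so the following numpy sketch is purely illustrative of the general technique:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Coarse biomarker localization from image-level supervision (CAM).

    features:   (C, H, W) feature maps from the last convolutional layer
    fc_weights: (num_classes, C) weights of a global-average-pool classifier
    Returns an (H, W) map in [0, 1] highlighting regions that drive the
    prediction for `class_idx`.
    """
    # Weighted sum of feature maps using the classifier weights for the class
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)      # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()       # normalize for overlay on the B-scan
    return cam
```

In practice the low-resolution map is upsampled to the B-scan size and thresholded to mark candidate biomarker regions, without any pixel-level labels at training time.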
Affiliation(s)
- Corina Iuliana Suciu: Department of Ophthalmology, “Iuliu Haţieganu” University of Medicine and Pharmacy, 400012 Cluj-Napoca, Romania
- Anca Marginean: Department of Computer Science, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Vlad-Ioan Suciu: Department of Neuroscience, “Iuliu Haţieganu” University of Medicine and Pharmacy, 400012 Cluj-Napoca, Romania
- George Adrian Muntean: Department of Ophthalmology, “Iuliu Haţieganu” University of Medicine and Pharmacy, 400012 Cluj-Napoca, Romania
- Simona Delia Nicoară: Department of Ophthalmology, “Iuliu Haţieganu” University of Medicine and Pharmacy, 400012 Cluj-Napoca, Romania; Department of Ophthalmology, Emergency County Hospital, 400006 Cluj-Napoca, Romania
9. Zhou H, Sun C, Huang H, Fan M, Yang X, Zhou L. Feature-guided attention network for medical image segmentation. Med Phys. 2023;50:4871-4886. [PMID: 36746870; DOI: 10.1002/mp.16253]
Abstract
BACKGROUND U-Net and its variations have achieved remarkable performances in medical image segmentation. However, they have two limitations. First, the shallow-layer feature of the encoder always contains background noise. Second, semantic gaps exist between the features of the encoder and the decoder. Skip-connections directly connect the encoder to the decoder, which leads to the fusion of semantically dissimilar feature maps. PURPOSE To overcome these two limitations, this paper proposes a novel medical image segmentation algorithm, called feature-guided attention network, which consists of U-Net, the cross-level attention filtering module (CAFM), and the attention-guided upsampling module (AUM). METHODS In the proposed method, the AUM and the CAFM were introduced into the U-Net, where the CAFM learns to filter the background noise in the low-level feature map of the encoder and the AUM tries to eliminate the semantic gap between the encoder and the decoder. Specifically, the CAFM adopts a top-down pathway that uses the high-level feature map to filter the background noise in the low-level feature map of the encoder. The AUM uses the encoder features to guide the upsampling of the corresponding decoder features, thus eliminating the semantic gap between them. Four medical image segmentation tasks, including coronary atherosclerotic plaque segmentation (Dataset A), retinal vessel segmentation (Dataset B), skin lesion segmentation (Dataset C), and multiclass retinal edema lesion segmentation (Dataset D), were used to validate the proposed method. RESULTS For Dataset A, the proposed method achieved higher Intersection over Union (IoU) (67.91 ± 3.82%), Dice (79.39 ± 3.37%), accuracy (98.39 ± 0.34%), and sensitivity (85.10 ± 3.74%) than the previous best method, CA-Net. For Dataset B, the proposed method achieved higher sensitivity (83.50%) and accuracy (97.55%) than the previous best method, SCS-Net. For Dataset C, the proposed method had a higher IoU (83.47 ± 0.41%) and Dice (90.81 ± 0.34%) than all previously compared methods. For Dataset D, the proposed method had the highest Dice (average: 81.53%; retinal edema area [REA]: 83.78%; pigment epithelial detachment [PED]: 77.13%), sensitivity (REA: 89.01%; SRF: 85.50%), specificity (REA: 99.35%; PED: 100.00%), and accuracy (98.73%) among all compared networks. In addition, the number of parameters of the proposed method was 2.43 M, which is less than that of CA-Net (3.21 M) and CPF-Net (3.07 M). CONCLUSIONS The proposed method demonstrated state-of-the-art performance, outperforming other top-notch medical image segmentation algorithms. The CAFM filtered the background noise in the low-level feature map of the encoder, while the AUM eliminated the semantic gap between the encoder and the decoder. Furthermore, the proposed method was of high computational efficiency.
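The IoU and Dice figures quoted above are the standard overlap metrics for binary segmentation masks; a compact numpy sketch of both:

```python
import numpy as np

def iou_dice(pred, target):
    """Intersection over Union and Dice coefficient for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0    # empty masks agree perfectly
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is why the Dice percentages in the abstract exceed the IoU percentages on every dataset.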
Affiliation(s)
- Hao Zhou: National Key Laboratory of Science and Technology of Underwater Vehicle, Harbin Engineering University, Harbin, China
- Chaoyu Sun: Fourth Affiliated Hospital, Harbin Medical University, Harbin, China
- Hai Huang: National Key Laboratory of Science and Technology of Underwater Vehicle, Harbin Engineering University, Harbin, China
- Mingyu Fan: College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, China
- Xu Yang: State Key Laboratory of Management and Control for Complex System, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Linxiao Zhou: Fourth Affiliated Hospital, Harbin Medical University, Harbin, China
10. Tripathi A, Kumar P, Tulsani A, Chakrapani PK, Maiya G, Bhandary SV, Mayya V, Pathan S, Achar R, Acharya UR. Fuzzy Logic-Based System for Identifying the Severity of Diabetic Macular Edema from OCT B-Scan Images Using DRIL, HRF, and Cystoids. Diagnostics (Basel). 2023;13:2550. [PMID: 37568913; PMCID: PMC10416860; DOI: 10.3390/diagnostics13152550]
Abstract
Diabetic Macular Edema (DME) is a severe ocular complication commonly found in patients with diabetes. The condition can precipitate a significant drop in visual acuity and, in extreme cases, may result in irreversible vision loss. Optical Coherence Tomography (OCT), a technique that yields high-resolution retinal images, is often employed by clinicians to assess the extent of DME in patients. However, the manual interpretation of OCT B-scan images for DME identification and severity grading can be error-prone, with false negatives potentially resulting in serious repercussions. In this paper, we investigate an Artificial Intelligence (AI)-driven system that offers an end-to-end automated model designed to accurately determine DME severity using OCT B-scan images. This model operates by extracting specific biomarkers, such as Disorganization of Retinal Inner Layers (DRIL), Hyperreflective Foci (HRF), and cystoids, from the OCT image, which are then utilized to ascertain DME severity. The rules guiding the fuzzy logic engine are derived from contemporary research on DME and its association with the various biomarkers evident in the OCT image. The proposed model demonstrates high efficacy, identifying images with DRIL with 93.3% accuracy and segmenting HRF and cystoids from OCT images with Dice similarity coefficients of 91.30% and 95.07%, respectively. This study presents a comprehensive system capable of accurately grading DME severity using OCT B-scan images, serving as a potentially invaluable tool in the clinical assessment and treatment of DME.
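The paper's actual rule base is not reproduced in the abstract, but the general fuzzy-logic pattern it describes is: fuzzify each biomarker with a membership function, combine rule activations, and defuzzify to a severity grade. The sketch below follows that pattern with thresholds and weights that are entirely invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grade_dme(dril_frac, hrf_count, cyst_frac):
    """Map three OCT biomarkers to a DME severity label and a score in [0, 1].
    All thresholds and weights below are hypothetical, not from the paper."""
    high_dril = tri(dril_frac, 0.1, 0.6, 1.01)   # fraction of B-scan showing DRIL
    high_hrf = tri(hrf_count, 2, 15, 40)         # count of hyperreflective foci
    high_cyst = tri(cyst_frac, 0.02, 0.2, 0.51)  # cystoid area fraction
    # Weighted-mean defuzzification of the "biomarker is high" activations.
    score = 0.4 * high_dril + 0.3 * high_hrf + 0.3 * high_cyst
    label = "severe" if score > 0.6 else "moderate" if score > 0.25 else "mild"
    return label, round(score, 3)
```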
Affiliation(s)
- Aditya Tripathi
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Preetham Kumar
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Akshat Tulsani
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Pavithra Kodiyalbail Chakrapani
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Geetha Maiya
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sulatha V. Bhandary
- Department of Ophthalmology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Veena Mayya
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sameena Pathan
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Raghavendra Achar
- Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U. Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield Central, QLD 4300, Australia
11
Darooei R, Nazari M, Kafieh R, Rabbani H. Optimal Deep Learning Architecture for Automated Segmentation of Cysts in OCT Images Using X-Let Transforms. Diagnostics (Basel) 2023; 13:1994. [PMID: 37370889 PMCID: PMC10297540 DOI: 10.3390/diagnostics13121994] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/22/2023] [Accepted: 06/02/2023] [Indexed: 06/29/2023] Open
Abstract
The retina is a thin, light-sensitive, multilayered membrane at the back of the eyeball, and it is subject to many disorders; the two most prevalent retinal illnesses are Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). Optical Coherence Tomography (OCT) is a vital retinal imaging technology. X-lets (such as the curvelet, DTCWT, and contourlet) have several benefits in image processing and analysis, as they can capture both local and non-local features of an image simultaneously. The aim of this paper is to propose an optimal deep learning architecture based on sparse basis functions for the automated segmentation of cystic areas in OCT images. Different X-let transforms were used to produce different network inputs, including the curvelet, Dual-Tree Complex Wavelet Transform (DTCWT), circlet, and contourlet. Additionally, three combinations of these transforms are suggested to achieve more accurate segmentation results. Various metrics, including the Dice coefficient, sensitivity, false positive ratio, Jaccard index, and qualitative results, were evaluated to find the optimal networks and combinations of X-let sub-bands. The proposed network was tested on both original and noisy datasets. The results show that (1) the contourlet achieves the optimal results among the different combinations; (2) the five-channel decomposition using high-pass sub-bands of the contourlet transform achieves the best performance; and (3) this five-channel high-pass sub-band decomposition outperforms the state-of-the-art methods, especially on the noisy dataset. The proposed method has the potential to improve the accuracy and speed of the segmentation process in clinical settings, facilitating the diagnosis and treatment of retinal diseases.
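As a stand-in for the curvelet/DTCWT/contourlet transforms named above (which require dedicated libraries), a single-level 2-D Haar decomposition illustrates the core idea: decompose the image into sub-bands and feed selected sub-bands to the network as extra input channels. This is an assumption-laden sketch, not the paper's transform:

```python
import numpy as np

def haar_subbands(img):
    """Single-level 2-D Haar decomposition -> (LL, LH, HL, HH) sub-bands.
    Assumes even height and width."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4  # low-pass approximation
    lh = (a + b - c - d) / 4  # horizontal detail
    hl = (a - b + c - d) / 4  # vertical detail
    hh = (a - b - c + d) / 4  # diagonal detail
    return ll, lh, hl, hh

def network_input(img):
    """Stack the high-pass sub-bands with the original image as channels."""
    ll, lh, hl, hh = haar_subbands(img)
    up = lambda s: np.kron(s, np.ones((2, 2)))  # naive upsample back to input size
    return np.stack([img.astype(float), up(lh), up(hl), up(hh)], axis=0)
```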
Affiliation(s)
- Reza Darooei
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
- Milad Nazari
- Department of Molecular Biology and Genetics, Aarhus University, 8200 Aarhus, Denmark
- The Danish Research Institute of Translational Neuroscience (DANDRITE), Aarhus University, 8200 Aarhus, Denmark
- Rahele Kafieh
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
- Department of Engineering, Durham University, South Road, Durham DH1 3RW, UK
- Hossein Rabbani
- Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan 8174673461, Iran
12
Feng H, Chen J, Zhang Z, Lou Y, Zhang S, Yang W. A bibliometric analysis of artificial intelligence applications in macular edema: exploring research hotspots and Frontiers. Front Cell Dev Biol 2023; 11:1174936. [PMID: 37255600 PMCID: PMC10225517 DOI: 10.3389/fcell.2023.1174936] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 05/02/2023] [Indexed: 06/01/2023] Open
Abstract
Background: Artificial intelligence (AI) is used in ophthalmological disease screening and diagnostics, medical image diagnostics, and predicting late-disease progression rates. We reviewed all AI publications associated with macular edema (ME) research between 2011 and 2022 and performed modeling, quantitative, and qualitative investigations. Methods: On 1 February 2023, we screened the Web of Science Core Collection for AI applications related to ME, from which 297 studies were identified and analyzed (2011-2022). We collected information on publications, institutions, country/region, keywords, journal name, references, and research hotspots. Literature clustering networks and frontier knowledge bases were investigated using the bibliometrix-BiblioShiny, VOSviewer, and CiteSpace bibliometric platforms. We used the R "bibliometrix" package to synopsize our observations, enumerate keywords, visualize collaboration networks between countries/regions, and generate a topic trends plot. VOSviewer was used to examine cooperation between institutions and identify citation relationships between journals. We used CiteSpace to identify clustering keywords over the timeline and the keywords with the strongest citation bursts. Results: In total, 47 countries published AI studies related to ME; the United States had the highest H-index and thus the greatest influence, and China and the United States cooperated most closely of all countries. In addition, 613 institutions generated publications; the Medical University of Vienna had the highest number of studies, and this publication record and H-index made it the most influential institution in the ME field.
Reference clusters were also categorized into 10 headings: retinal Optical Coherence Tomography (OCT) fluid detection, convolutional network models, deep learning (DL)-based single-shot predictions, retinal vascular disease, diabetic retinopathy (DR), convolutional neural networks (CNNs), automated macular pathology diagnosis, dry age-related macular degeneration (DARMD), class weight, and advanced DL architecture systems. Frontier keywords were represented by diabetic macular edema (DME) (2021-2022). Conclusion: Our review of the AI-related ME literature was comprehensive, systematic, and objective, and identified future trends and current hotspots. With increased DL outputs, the ME research focus has gradually shifted from manual ME examinations to automatic ME detection and associated symptoms. In this review, we present a comprehensive and dynamic overview of AI in ME and identify future research areas.
Affiliation(s)
- Haiwen Feng
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Jiaqi Chen
- Department of Software Engineering, School of Software, Shenyang University of Technology, Shenyang, Liaoning, China
- Zhichang Zhang
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Yan Lou
- Department of Computer, School of Intelligent Medicine, China Medical University, Shenyang, Liaoning, China
- Shaochong Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
13
Wang J, Qu A, Wang Q, Zhao Q, Liu J, Wu Q. TT-Net: Tensorized Transformer Network for 3D medical image segmentation. Comput Med Imaging Graph 2023; 107:102234. [PMID: 37075619 DOI: 10.1016/j.compmedimag.2023.102234] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 02/09/2023] [Accepted: 03/24/2023] [Indexed: 04/21/2023]
Abstract
Accurate segmentation of organs, tissues, and lesions is essential for computer-assisted diagnosis. Previous works have achieved success in the field of automatic segmentation, but two limitations remain. (1) They are still challenged by complex conditions, such as segmentation targets that vary in location, size, and shape, especially across different imaging modalities. (2) Existing transformer-based networks suffer from high parametric complexity. To address these limitations, we propose a new Tensorized Transformer Network (TT-Net). In this paper, (1) a multi-scale transformer with layer fusion is proposed to faithfully capture context interaction information; (2) a Cross Shared Attention (CSA) module based on pHash similarity fusion (pSF) is designed to extract global multi-variate dependency features; and (3) a Tensorized Self-Attention (TSA) module is proposed to deal with the large number of parameters, and it can also be easily embedded into other models. In addition, TT-Net gains good explainability through visualization of the transformer layers. The proposed method is evaluated on three widely accepted public datasets and one clinical dataset, which span different imaging modalities. Comprehensive results show that TT-Net outperforms other state-of-the-art methods on the four segmentation tasks. Besides, the compression module, which can be easily embedded into other transformer-based methods, achieves lower computation with comparable segmentation performance.
Affiliation(s)
- Jing Wang
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Aixi Qu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China
- Qing Wang
- QiLu Hospital of Shandong University, Radiology Department, Jinan 250012, China
- Qibin Zhao
- RIKEN Center for Advanced Intelligence Project, Japan
- Ju Liu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China
- Qiang Wu
- Shandong University, School of Information Science and Engineering, Qingdao 266237, China; Shandong University, Institute of Brain and Brain-Inspired Science, Jinan 250012, China
14
Karn PK, Abdulla WH. On Machine Learning in Clinical Interpretation of Retinal Diseases Using OCT Images. Bioengineering (Basel) 2023; 10:bioengineering10040407. [PMID: 37106594 PMCID: PMC10135895 DOI: 10.3390/bioengineering10040407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/21/2023] [Accepted: 03/22/2023] [Indexed: 03/29/2023] Open
Abstract
Optical coherence tomography (OCT) is a noninvasive imaging technique that provides high-resolution cross-sectional retina images, enabling ophthalmologists to gather crucial information for diagnosing various retinal diseases. Despite its benefits, manual analysis of OCT images is time-consuming and heavily dependent on the personal experience of the analyst. This paper focuses on using machine learning to analyse OCT images in the clinical interpretation of retinal diseases; the complexity of understanding the biomarkers present in OCT images has been a challenge for many researchers, particularly those from nonclinical disciplines. The paper provides an overview of the current state-of-the-art OCT image processing techniques, including image denoising and layer segmentation, and highlights the potential of machine learning algorithms to automate OCT image analysis, reducing time consumption and improving diagnostic accuracy. Using machine learning in OCT image analysis can mitigate the limitations of manual analysis methods and provide a more reliable and objective approach to diagnosing retinal diseases. This review will be of interest to ophthalmologists, researchers, and data scientists working in retinal disease diagnosis and machine learning, and by presenting the latest advancements in OCT image analysis using machine learning it contributes to the ongoing efforts to improve the diagnostic accuracy of retinal diseases.
15
Mousavi N, Monemian M, Ghaderi Daneshmand P, Mirmohammadsadeghi M, Zekri M, Rabbani H. Cyst identification in retinal optical coherence tomography images using hidden Markov model. Sci Rep 2023; 13:12. [PMID: 36593300 PMCID: PMC9807649 DOI: 10.1038/s41598-022-27243-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Accepted: 12/28/2022] [Indexed: 01/03/2023] Open
Abstract
Optical Coherence Tomography (OCT) is a useful imaging modality for capturing the retinal layers. In several retinal diseases, cysts form within the retinal layers, so identifying cysts in these layers is of great importance. In this paper, a new method is proposed for the rapid detection of cystic OCT B-scans. In the proposed method, a Hidden Markov Model (HMM) is used to mathematically model the existence of cysts: the presence of a cyst in the image can be considered a hidden state. Since the existence of a cyst in an OCT B-scan depends on its existence in the previous B-scans, an HMM is an appropriate tool for modelling this process. In the first phase, a number of features are extracted: Harris, KAZE, HOG, SURF, FAST, and Min-Eigen features, as well as features extracted by a deep AlexNet. It is shown that the features with the best discriminating power are those extracted by AlexNet. The features extracted in the first phase are used as observation vectors to estimate the HMM parameters. The evaluation results show the improved performance of the HMM in terms of accuracy.
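The sequential dependence the authors exploit (cyst presence in one B-scan predicting the next) is what Viterbi decoding over a two-state HMM captures. A minimal numpy sketch, with made-up emission log-likelihoods standing in for the AlexNet feature scores:

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_init):
    """Most likely hidden-state sequence (0 = no cyst, 1 = cyst) over B-scans.
    obs_loglik: (T, S) log-likelihood of each B-scan's features under each state.
    log_trans:  (S, S) log transition matrix, log_trans[i, j] = log P(j | i).
    log_init:   (S,)   log initial state distribution."""
    T, S = obs_loglik.shape
    delta = log_init + obs_loglik[0]        # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers
    for t in range(1, T):
        cand = delta[:, None] + log_trans   # cand[prev, cur]
        back[t] = np.argmax(cand, axis=0)
        delta = cand[back[t], np.arange(S)] + obs_loglik[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):           # trace backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With "sticky" transitions (staying in a state is likelier than switching), isolated noisy per-scan scores get smoothed into coherent runs of cystic/non-cystic B-scans.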
Affiliation(s)
- Niloofarsadat Mousavi
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Maryam Monemian
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Parisa Ghaderi Daneshmand
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Maryam Zekri
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Hossein Rabbani
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
16
Pigment epithelial detachment composition indices (PEDCI) in neovascular age-related macular degeneration. Sci Rep 2023; 13:68. [PMID: 36593323 PMCID: PMC9807558 DOI: 10.1038/s41598-022-27078-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 12/26/2022] [Indexed: 01/03/2023] Open
Abstract
We provide an automated analysis of pigment epithelial detachments (PEDs) in neovascular age-related macular degeneration (nAMD) and estimate the areas of serous, neovascular, and fibrous tissues within PEDs. A retrospective analysis was performed on high-definition spectral-domain OCT B-scans from 43 eyes of 37 patients with nAMD and fibrovascular PED. PEDs were manually segmented and then filtered using 2D kernels to classify pixels within the PED as serous, neovascular, or fibrous. A set of PED composition indices was calculated on a per-image basis from the relative PED area of serous (PEDCI-S), neovascular (PEDCI-N), and fibrous (PEDCI-F) tissue. Accuracy of segmentation and classification within the PED was graded in masked fashion. Mean overall intra-observer repeatability and inter-observer reproducibility were 0.86 ± 0.07 and 0.86 ± 0.03, respectively, using intraclass correlations. The mean graded scores were 96.99 ± 8.18, 92.12 ± 7.97, 91.48 ± 8.93, and 92.29 ± 8.97 for segmentation, serous, neovascular, and fibrous, respectively. Mean (range) PEDCI-S, PEDCI-N, and PEDCI-F were 0.253 (0-0.952), 0.554 (0-1), and 0.193 (0-0.693). A kernel-based image processing approach demonstrates potential for approximating PED composition. Evaluating follow-up changes in PEDCI during nAMD treatment would be useful for further clinical applications.
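Once each pixel inside the PED has been classified, the per-image composition indices reduce to relative pixel counts. A sketch assuming an integer label map (the kernel filtering that produces the labels is not shown):

```python
import numpy as np

def ped_composition_indices(label_map):
    """Composition indices from a per-pixel classification inside the PED.
    Assumed label codes: 0 outside PED, 1 serous, 2 neovascular, 3 fibrous."""
    inside = label_map > 0
    total = inside.sum()
    if total == 0:  # no PED in this image
        return {"PEDCI-S": 0.0, "PEDCI-N": 0.0, "PEDCI-F": 0.0}
    return {
        "PEDCI-S": float((label_map == 1).sum() / total),
        "PEDCI-N": float((label_map == 2).sum() / total),
        "PEDCI-F": float((label_map == 3).sum() / total),
    }
```

By construction the three indices sum to 1 whenever a PED is present, matching their interpretation as relative areas.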
17
Du J, Huang M, Liu L. AI-Aided Disease Prediction in Visualized Medicine. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2023; 1199:107-126. [PMID: 37460729 DOI: 10.1007/978-981-32-9902-3_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Artificial intelligence (AI) is playing a vitally important role in driving the revolution of future technology. Healthcare is one of the promising applications of AI, covering medical imaging, diagnosis, robotics, disease prediction, pharmacy, health management, and hospital management. Numerous achievements in these fields are overturning every aspect of the traditional healthcare system. Therefore, to convey the state of the art of AI in healthcare, as well as the opportunities and obstacles in its development, this chapter discusses the applications of AI in disease detection and prognosis and the future trends of AI-aided disease prediction.
Affiliation(s)
- Juan Du
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Mengen Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Lin Liu
- Tianjin Key Laboratory of Retinal Functions and Diseases, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
18
Zhang Z, Wang Y, Zhang H, Samusak A, Rao H, Xiao C, Abula M, Cao Q, Dai Q. Artificial intelligence-assisted diagnosis of ocular surface diseases. Front Cell Dev Biol 2023; 11:1133680. [PMID: 36875760 PMCID: PMC9981656 DOI: 10.3389/fcell.2023.1133680] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 02/08/2023] [Indexed: 02/19/2023] Open
Abstract
With the rapid development of computer technology, the application of artificial intelligence (AI) in ophthalmology research has gained prominence in modern medicine. Artificial intelligence-related research in ophthalmology previously focused on the screening and diagnosis of fundus diseases, particularly diabetic retinopathy, age-related macular degeneration, and glaucoma. Since fundus images are relatively fixed, their standards are easy to unify. Artificial intelligence research related to ocular surface diseases has also increased. The main issue with research on ocular surface diseases is that the images involved are complex, with many modalities. Therefore, this review aims to summarize current artificial intelligence research and technologies used to diagnose ocular surface diseases such as pterygium, keratoconus, infectious keratitis, and dry eye to identify mature artificial intelligence models that are suitable for research of ocular surface diseases and potential algorithms that may be used in the future.
Affiliation(s)
- Zuhui Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ying Wang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Hongzhen Zhang
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Arzigul Samusak
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Huimin Rao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Chun Xiao
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Muhetaer Abula
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- Qixin Cao
- Huzhou Traditional Chinese Medicine Hospital Affiliated to Zhejiang University of Traditional Chinese Medicine, Huzhou, China
- Qi Dai
- The First People's Hospital of Aksu District in Xinjiang, Aksu City, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
19
20
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546 DOI: 10.1016/j.preteyeres.2021.101034] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 12/03/2021] [Accepted: 12/06/2021] [Indexed: 01/14/2023]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI in closing that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder: there is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
21
Bogunović H, Mares V, Reiter GS, Schmidt-Erfurth U. Predicting treat-and-extend outcomes and treatment intervals in neovascular age-related macular degeneration from retinal optical coherence tomography using artificial intelligence. Front Med (Lausanne) 2022; 9:958469. [PMID: 36017006 PMCID: PMC9396241 DOI: 10.3389/fmed.2022.958469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 07/05/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose: To predict visual outcomes and treatment needs in a treat-and-extend (T&E) regimen in neovascular age-related macular degeneration (nAMD) using a machine learning model based on quantitative optical coherence tomography (OCT) imaging biomarkers. Materials and methods: Study eyes of 270 treatment-naïve subjects, randomized to receiving ranibizumab therapy in the T&E arm of a randomized clinical trial, were considered. OCT volume scans were processed at baseline and at the first follow-up visit 4 weeks later. Automated image segmentation was performed, where intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), hyperreflective foci, and the photoreceptor layer were delineated using a convolutional neural network (CNN). A set of quantitative imaging biomarkers was computed across an Early Treatment Diabetic Retinopathy Study (ETDRS) grid to describe the retinal pathomorphology spatially and its change after the first injection. Lastly, using the computed set of OCT features and the available clinical and demographic information, predictive models of outcomes and retreatment intervals were built using machine learning, and their performance was evaluated with 10-fold cross-validation. Results: Data of 228 evaluable patients were included, as some had missing scans or were lost to follow-up. Of those patients, 55% reached and maintained long (8, 10, 12 weeks) treatment intervals and the other 45% stayed at short (4, 6 weeks) intervals, providing further evidence of high disease activity in a major proportion of patients. The model predicted the extendable treatment interval group with an AUROC of 0.71, and the visual outcome with an AUROC of up to 0.87 when utilizing both clinical and imaging features. The volumes of SRF and IRF remaining at the first follow-up visit were found to be the most important predictive markers for treatment intervals and visual outcomes, respectively, supporting the important role of quantitative fluid parameters on OCT. Conclusion: The proposed artificial intelligence (AI) methodology was able to predict visual outcomes and retreatment intervals of a T&E regimen from a single injection. This study is an urgently needed step toward AI-supported management of patients with active and progressive nAMD.
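The AUROC values reported here can be computed without constructing the ROC curve at all, via the Mann-Whitney rank-sum identity. A self-contained sketch (the scores and labels are toy data, not the study's):

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U identity:
    AUROC = P(score of a random positive > score of a random negative),
    with ties handled through average ranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over tied scores
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```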
Affiliation(s)
- Hrvoje Bogunović
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Virginia Mares
- Department of Ophthalmology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Gregor S. Reiter
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
- Ursula Schmidt-Erfurth
- Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
22
Research on Semantic Segmentation Method of Macular Edema in Retinal OCT Images Based on Improved Swin-Unet. ELECTRONICS 2022. [DOI: 10.3390/electronics11152294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Optical coherence tomography (OCT), a relatively new tomographic imaging technology, is non-invasive, provides real-time imaging, and offers high sensitivity, and it is currently an important medical imaging tool assisting ophthalmologists in the screening, diagnosis, and follow-up treatment of patients with macular disease. Because diabetic macular edema (DME) occurs in irregular areas and forms multi-scale, multi-region clusters, the edema area is often segmented inaccurately; to address this, an improved Swin-Unet model was proposed for the automatic semantic segmentation of macular edema lesion areas in OCT images. First, in the deep bottleneck of the Swin-Unet network, ResNet layers were added to strengthen feature extraction. Second, Swin Transformer blocks and skip connections were used for global and local learning, and the semantically segmented regions were smoothed and post-processed with morphological operations. Finally, the proposed method was evaluated on the publicly available macular edema patient dataset from Duke University and compared with previous segmentation methods. The experimental results show that the proposed method not only improves the overall semantic segmentation accuracy for retinal macular edema, but also further improves segmentation of multi-scale and multi-region edema areas.
23
Tang W, Ye Y, Chen X, Shi F, Xiang D, Chen Z, Zhu W. Multi-class retinal fluid joint segmentation based on cascaded convolutional neural networks. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 05/25/2022] [Indexed: 11/12/2022]
Abstract
Objective. Retinal fluid mainly includes intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance to the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shape encoder-decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network (ICAF-Net), based on a context shrinkage encode module and a multi-scale, multi-category semantic supervision module, is proposed to jointly segment IRF, SRF, and PED. Main results. The proposed framework was evaluated on the RETOUCH challenge dataset. The average Dice similarity coefficient, intersection over union, and accuracy (Acc) reached 76.39%, 64.03%, and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms several state-of-the-art segmentation networks.
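The Dice similarity coefficient and intersection over union quoted above are both set overlaps between predicted and ground-truth pixel sets: Dice = 2|A∩B|/(|A|+|B|), IoU = |A∩B|/|A∪B|. A minimal illustration on toy pixel coordinates (not the RETOUCH data):

```python
def dice_iou(pred, gt):
    """Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B| for pixel-index sets.
    Both empty means a perfect (vacuous) match."""
    inter = len(pred & gt)
    dice = 2 * inter / (len(pred) + len(gt)) if pred or gt else 1.0
    iou = inter / len(pred | gt) if pred | gt else 1.0
    return dice, iou

# Toy per-class evaluation: (row, col) pixels labeled IRF by the model
# versus the ground-truth annotation.
pred_irf = {(0, 1), (0, 2), (1, 1)}
gt_irf = {(0, 2), (1, 1), (1, 2)}
d, j = dice_iou(pred_irf, gt_irf)
print(round(d, 3), round(j, 3))  # 0.667 0.5
```

Per-class scores like the 76.39% average Dice above are typically computed this way for each fluid type and then averaged.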
24
Xing G, Chen L, Wang H, Zhang J, Sun D, Xu F, Lei J, Xu X. Multi-Scale Pathological Fluid Segmentation in OCT With a Novel Curvature Loss in Convolutional Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1547-1559. [PMID: 35015634 DOI: 10.1109/tmi.2022.3142048] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The segmentation of pathological fluid lesions in optical coherence tomography (OCT), including intraretinal fluid, subretinal fluid, and pigment epithelial detachment, is of great importance for the diagnosis and treatment of various eye diseases such as neovascular age-related macular degeneration and diabetic macular edema. Although significant progress has been achieved with the rapid development of fully convolutional networks (FCNs) in recent years, some important issues remain unsolved. First, pathological fluid lesions in OCT show large variations in location, size, and shape, imposing challenges on the design of FCN architectures. Second, fluid lesions should be continuous regions without holes inside, but current architectures lack the capability to preserve this shape prior. In this study, we introduce an FCN architecture for the simultaneous segmentation of three types of pathological fluid lesions in OCT. First, attention gate and spatial pyramid pooling modules are employed to improve the network's ability to extract multi-scale objects. Then, we introduce a novel curvature regularization term in the loss function to incorporate shape prior information. The proposed method was extensively evaluated on public and clinical datasets, with significantly improved performance compared with state-of-the-art methods.
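A curvature regularization term of the kind described above penalizes jagged, hole-prone lesion boundaries. One simple discrete proxy (an illustrative sketch, not the paper's exact loss) is the squared second difference along the boundary contour, which is zero for a straight boundary and grows with oscillation:

```python
def curvature_penalty(contour):
    """Discrete curvature proxy: sum of squared second differences
    p[i-1] - 2*p[i] + p[i+1] along an open polyline contour.
    Smooth boundaries score low; jagged ones score high."""
    total = 0.0
    for i in range(1, len(contour) - 1):
        (xa, ya), (xb, yb), (xc, yc) = contour[i - 1], contour[i], contour[i + 1]
        ddx = xa - 2 * xb + xc
        ddy = ya - 2 * yb + yc
        total += ddx * ddx + ddy * ddy
    return total

smooth = [(0, 0), (1, 0), (2, 0), (3, 0)]  # straight lesion boundary
jagged = [(0, 0), (1, 1), (2, 0), (3, 1)]  # oscillating boundary
print(curvature_penalty(smooth), curvature_penalty(jagged))  # 0.0 8.0
```

In a training loop such a term would be added to the segmentation loss with a weighting coefficient, trading boundary smoothness against pixel-wise fidelity.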
25
López-Varela E, Vidal PL, Pascual NO, Novo J, Ortega M. Fully-Automatic 3D Intuitive Visualization of Age-Related Macular Degeneration Fluid Accumulations in OCT Cubes. J Digit Imaging 2022; 35:1271-1282. [PMID: 35513586 PMCID: PMC9582110 DOI: 10.1007/s10278-022-00643-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 04/06/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022] Open
Abstract
Age-related macular degeneration (AMD) is the leading cause of vision loss in developed countries, and wet-type AMD requires rapid diagnosis and urgent treatment because it causes rapid, irreversible vision loss. Currently, AMD diagnosis is mainly carried out using images obtained by optical coherence tomography (OCT). This diagnostic process is performed by human clinicians, so human error may occur in some cases; fully automatic methodologies that add a layer of robustness to the diagnosis are therefore highly desirable. In this work, a novel computer-aided diagnosis and visualization methodology is proposed for the rapid identification and visualization of wet AMD. We adapted a convolutional neural network trained for segmentation in a similar medical imaging domain to the problem of wet AMD segmentation, taking advantage of transfer learning, which allows us to work with a reduced number of samples. We generate a 3D visualization in which the existence, position, and severity of the fluid are represented clearly and intuitively to facilitate clinical analysis. The 3D visualization is robust and accurate, obtaining satisfactory Dice coefficients of 0.949 and 0.960 in the different evaluated OCT cube configurations, allowing clinicians to quickly assess the presence and extension of the fluid associated with wet AMD.
Affiliation(s)
- Emilio López-Varela
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Plácido L. Vidal
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Nuria Olivier Pascual
- Servizo de Oftalmoloxía, Complexo Hospitalario Universitario de Ferrol, CHUF, Av. da Residencia, S/N, Ferrol, 15405 Spain
- Jorge Novo
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
- Marcos Ortega
- Grupo VARPA, Instituto de investigación Biomédica de A Coruña (INIBIC), Xubias de Arriba, 84, A Coruña, 15006 Spain
- Centro de investigación CITIC, Universidade da Coruña, Campus de Elviña, s/n, A Coruña, 15071 Spain
26
Recent Advanced Deep Learning Architectures for Retinal Fluid Segmentation on Optical Coherence Tomography Images. SENSORS 2022; 22:s22083055. [PMID: 35459040 PMCID: PMC9029682 DOI: 10.3390/s22083055] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 04/10/2022] [Accepted: 04/13/2022] [Indexed: 11/16/2022]
Abstract
With its non-invasive and high-resolution properties, optical coherence tomography (OCT) has been widely used as a retinal imaging modality for the effective diagnosis of ophthalmic diseases. Retinal fluid is often segmented by medical experts as a pivotal biomarker to assist in the clinical diagnosis of age-related macular degeneration, diabetic macular edema, and retinal vein occlusion. In recent years, advanced machine learning methods, such as deep learning paradigms, have attracted increasing attention from academia for retinal fluid segmentation applications. Automatic retinal fluid segmentation based on deep learning can improve the accuracy and efficiency of macular change analysis, with potential clinical implications for ophthalmic pathology detection. This article summarizes several deep learning paradigms reported in the recent literature for retinal fluid segmentation in OCT images. The architectures covered include convolutional neural network (CNN) backbones, fully convolutional networks (FCN), U-shape networks (U-Net), and other hybrid computational methods. The article also surveys the prevailing OCT image datasets used in recent retinal segmentation investigations. Future perspectives and potential directions for retinal segmentation are discussed in the concluding section.
27
Directional analysis of intensity changes for determining the existence of cyst in optical coherence tomography images. Sci Rep 2022; 12:2105. [PMID: 35136133 PMCID: PMC8825816 DOI: 10.1038/s41598-022-06099-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Accepted: 01/24/2022] [Indexed: 11/23/2022] Open
Abstract
Diabetic retinopathy (DR) is an important cause of blindness in people with a long history of diabetes. DR is caused by damage to the blood vessels in the retina, and one of its most important manifestations is the formation of fluid-filled regions between retinal layers. The disease stage and the effect of prescribed drugs can be evaluated through the analysis of retinal optical coherence tomography (OCT) images, so the detection of cysts in OCT images is of considerable importance. In this paper, a fast method is proposed to classify OCT images as cystic or non-cystic. The method consists of three phases: pre-processing, boundary-pixel determination, and post-processing. After applying noise reduction in the pre-processing step, the method finds pixels lying on cyst boundaries. This is done by finding significant intensity changes in the vertical direction and considering rectangular patches around the candidate pixels; each patch is then verified to contain enough pixels exhibiting considerable diagonal intensity changes. A shadow-omission method is then proposed in the post-processing phase to extract shadow regions that could be mistaken for cystic areas, and candidate pixels near the shadow regions are removed to prevent false positives. The performance of the proposed method is evaluated in terms of sensitivity and specificity on real datasets. The experimental results show that the proposed method produces outstanding results in terms of both accuracy and speed.
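The candidate-pixel step described above, finding significant vertical intensity changes, can be sketched as a simple column-wise difference scan (the image values and threshold below are toy numbers, not the paper's parameters):

```python
def boundary_candidates(img, threshold):
    """Flag pixels whose vertical (column-wise) intensity change from the
    pixel above meets a threshold. A stand-in for the candidate-pixel
    detection step; real pipelines would add patch verification after this."""
    rows, cols = len(img), len(img[0])
    hits = []
    for r in range(1, rows):
        for c in range(cols):
            if abs(img[r][c] - img[r - 1][c]) >= threshold:
                hits.append((r, c))
    return hits

# Toy B-scan: one dark (fluid-like) pixel inside a bright background
# produces a strong vertical change entering and leaving it.
scan = [
    [200, 200, 200],
    [200,  40, 200],
    [200, 200, 200],
]
print(boundary_candidates(scan, 100))  # [(1, 1), (2, 1)]
```

The paper's subsequent patch check and shadow-omission step would then prune these candidates to suppress false positives.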
28
Fully-automated atrophy segmentation in dry age-related macular degeneration in optical coherence tomography. Sci Rep 2021; 11:21893. [PMID: 34751189 PMCID: PMC8575929 DOI: 10.1038/s41598-021-01227-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 09/23/2021] [Indexed: 11/09/2022] Open
Abstract
Age-related macular degeneration (AMD) is a progressive retinal disease causing vision loss. A more detailed characterization of its atrophic form became possible thanks to the introduction of optical coherence tomography (OCT). However, manual atrophy quantification in 3D retinal scans is a tedious task and prevents taking full advantage of the accurate retina depiction. In this study we developed a fully automated algorithm segmenting Retinal Pigment Epithelial and Outer Retinal Atrophy (RORA) in dry AMD on macular OCT. 62 SD-OCT scans from eyes with atrophic AMD (57 patients) were collected and split into train and test sets. The training set was used to develop a convolutional neural network (CNN). The performance of the algorithm was established by cross-validation and comparison to the test set, with ground truth annotated by two graders. Additionally, the effect of using retinal layer segmentation during training was investigated. The algorithm achieved mean Dice scores of 0.881 and 0.844, sensitivity of 0.850 and 0.915, and precision of 0.928 and 0.799 in comparison with Expert 1 and Expert 2, respectively. Using retinal layer segmentation improved model performance. The proposed model identified RORA with performance matching human experts and has the potential to rapidly identify atrophy with high consistency.
29
Ma D, Lu D, Chen S, Heisler M, Dabiri S, Lee S, Lee H, Ding GW, Sarunic MV, Beg MF. LF-UNet - A novel anatomical-aware dual-branch cascaded deep neural network for segmentation of retinal layers and fluid from optical coherence tomography images. Comput Med Imaging Graph 2021; 94:101988. [PMID: 34717264 DOI: 10.1016/j.compmedimag.2021.101988] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 08/31/2021] [Accepted: 09/11/2021] [Indexed: 11/17/2022]
Abstract
Computer-assisted diagnosis of retinal disease relies heavily on the accurate detection of retinal boundaries and other pathological features such as fluid accumulation. Optical coherence tomography (OCT) is a non-invasive ophthalmological imaging technique that has become a standard modality in the field due to its ability to detect cross-sectional retinal pathologies at the micrometer level. In this work, we present a novel framework for simultaneous retinal layer and fluid segmentation. We propose a dual-branch deep neural network, termed LF-UNet, which combines the expansion path of the U-Net and the original fully convolutional network with a dilated network. In addition, we introduce a cascaded network framework to include the anatomical awareness embedded in the volumetric image. Cross-validation experiments showed that the proposed LF-UNet has superior performance compared to state-of-the-art methods, and that incorporating the relative positional map as structural prior information could further improve performance regardless of the network. The generalizability of the proposed network was demonstrated on independent datasets acquired from the same type of device with a different field of view, or from a different device.
Affiliation(s)
- Da Ma
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Donghuan Lu
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada; Tencent Jarvis Lab, Shenzhen, China
- Shuo Chen
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Morgan Heisler
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Setareh Dabiri
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Sieun Lee
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Hyunwoo Lee
- Division of Neurology, Department of Medicine, University of British Columbia, Canada
- Gavin Weiguang Ding
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Marinko V Sarunic
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
- Mirza Faisal Beg
- Simon Fraser University, School of Engineering Science, Burnaby V5A 1S6, Canada
30
Zéboulon P, Ghazal W, Gatinel D. Corneal Edema Visualization With Optical Coherence Tomography Using Deep Learning: Proof of Concept. Cornea 2021; 40:1267-1275. [PMID: 33410639 DOI: 10.1097/ico.0000000000002640] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Accepted: 11/09/2020] [Indexed: 12/23/2022]
Abstract
PURPOSE Optical coherence tomography (OCT) is essential for the diagnosis and follow-up of corneal edema, but assessment can be challenging in minimal or localized edema. The objective was to develop and validate a novel automated tool to detect and visualize corneal edema with OCT. METHODS We trained a convolutional neural network to classify each pixel in the corneal OCT images as "normal" or "edema" and to generate colored heat maps of the result. The development set included 199 OCT images of normal and edematous corneas. We validated the model's performance on 607 images of normal and edematous corneas of various conditions. The main outcome measure was the edema fraction (EF), defined as the ratio between the number of pixels labeled as edema and those representing the cornea for each scan. Overall accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve were determined to evaluate the model's performance. RESULTS Mean EF was 0.0087 ± 0.01 in the normal scans and 0.805 ± 0.26 in the edema scans (P < 0.0001). Area under the receiver operating characteristic curve for EF in the diagnosis of corneal edema in individual scans was 0.994. The optimal threshold for distinguishing normal from edematous corneas was 6.8%, with an accuracy of 98.7%, sensitivity of 96.4%, and specificity of 100%. CONCLUSIONS The model accurately detected corneal edema and distinguished between normal and edematous cornea OCT scans while providing colored heat maps of edema presence.
Affiliation(s)
- Pierre Zéboulon
- Department of Ophthalmology, Rothschild Foundation, Paris, France
- Wassim Ghazal
- Department of Ophthalmology, Rothschild Foundation, Paris, France
- Damien Gatinel
- Department of Ophthalmology, Rothschild Foundation, Paris, France
- CEROC (Center of Expertise and Research in Optics for Clinicians)
31
Liu X, Wang S, Zhang Y, Liu D, Hu W. Automatic fluid segmentation in retinal optical coherence tomography images using attention based deep learning. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.07.143] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
32
Hassan B, Qin S, Ahmed R, Hassan T, Taguri AH, Hashmi S, Werghi N. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy. Comput Biol Med 2021; 136:104727. [PMID: 34385089 DOI: 10.1016/j.compbiomed.2021.104727] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 07/31/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022]
Abstract
BACKGROUND In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for the activity prescription and intravitreal dose. This study proposes an end-to-end deep-learning-based retinal fluid segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability. METHOD The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model was trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis) collected from three publicly available datasets. The first dataset, consisting of OCT volumes from 112 subjects (a total of 11,334 B-scans), was used for both training and evaluation; the remaining two datasets, containing a total of 1572 OCT B-scans from 1255 subjects, were used only for evaluation, to check the trained RFS-Net's generalizability on unseen OCT scans. The performance of the proposed RFS-Net model was assessed through various evaluation metrics. RESULTS The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with automated segmentation of the three retinal manifestations, RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF.
CONCLUSIONS The proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement, and standardization of dosimetry is envisaged as a result.
Affiliation(s)
- Bilal Hassan
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China
- Shiyin Qin
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China; School of Electrical Engineering and Intelligentization, Dongguan University of Technology, Dongguan, 523808, China
- Ramsha Ahmed
- School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing, 100083, China
- Taimur Hassan
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
- Abdel Hakeem Taguri
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Shahrukh Hashmi
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Naoufel Werghi
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
33
de Moura J, Samagaio G, Novo J, Almuina P, Fernández MI, Ortega M. Joint Diabetic Macular Edema Segmentation and Characterization in OCT Images. J Digit Imaging 2021; 33:1335-1351. [PMID: 32562127 DOI: 10.1007/s10278-020-00360-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
The automatic identification and segmentation of edemas associated with diabetic macular edema (DME) constitutes a crucial ophthalmological issue, as they provide useful information for the evaluation of disease severity. According to clinical knowledge, the DME disorder can be categorized into three main pathological types: serous retinal detachment (SRD), cystoid macular edema (CME), and diffuse retinal thickening (DRT). Computational systems for their automatic extraction and characterization may help clinicians in daily practice, improving diagnosis and therapy and, consequently, patients' quality of life. In this context, this paper proposes a fully automatic system for the identification, segmentation, and characterization of the three DME types using optical coherence tomography (OCT) images. For SRD and CME edemas, different approaches were implemented, adapting graph cuts and active contours for their identification and precise delimitation. For DRT edemas, whose fuzzy regional appearance requires a complex extraction process, an exhaustive analysis using a learning strategy was designed, exploiting intensity, texture, and clinically based information. The different steps of this methodology were validated with a heterogeneous set of 262 OCT images, using manual labeling provided by an expert clinician. In general terms, the system provided satisfactory results, reaching Dice coefficient scores of 0.8768, 0.7475, and 0.8913 for the segmentation of SRD, CME, and DRT edemas, respectively.
Affiliation(s)
- Joaquim de Moura
- Department of Computer Science and Information Technology, University of A Coruña, 15071, A Coruña, Spain; CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071, A Coruña, Spain
- Gabriela Samagaio
- Faculty of Engineering, University of Porto, 4200-465, Porto, Portugal
- Jorge Novo
- Department of Computer Science and Information Technology, University of A Coruña, 15071, A Coruña, Spain; CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071, A Coruña, Spain
- Pablo Almuina
- Department of Ophthalmology, Complejo Hospitalario Universitario de Santiago, 15706, Santiago de Compostela, Spain
- María Isabel Fernández
- Department of Ophthalmology, Complejo Hospitalario Universitario de Santiago, 15706, Santiago de Compostela, Spain; Instituto Oftalmológico Gómez-Ulla, 15706, Santiago de Compostela, Spain; University of Santiago de Compostela, 15705, Santiago de Compostela, Spain
- Marcos Ortega
- Department of Computer Science and Information Technology, University of A Coruña, 15071, A Coruña, Spain; CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071, A Coruña, Spain
34
Mantel I, Mosinska A, Bergin C, Polito MS, Guidotti J, Apostolopoulos S, Ciller C, De Zanet S. Automated Quantification of Pathological Fluids in Neovascular Age-Related Macular Degeneration, and Its Repeatability Using Deep Learning. Transl Vis Sci Technol 2021; 10:17. [PMID: 34003996 PMCID: PMC8083067 DOI: 10.1167/tvst.10.4.17] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
Purpose To develop a reliable algorithm for the automated identification, localization, and volume measurement of exudative manifestations in neovascular age-related macular degeneration (nAMD), including intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelium detachment (PED), using a deep-learning approach. Methods One hundred seven spectral-domain optical coherence tomography (OCT) cube volumes were extracted from nAMD eyes. Manual annotation of IRF, SRF, and PED was performed. Ninety-two OCT volumes served as the training and validation set, and 15 OCT volumes from different patients as the test set. The performance of our fluid segmentation method was quantified by means of pixel-wise metrics and volume correlations and compared to other methods. Repeatability was tested on 42 other eyes with five OCT volume scans acquired on the same day. Results The fully automated algorithm achieved good performance for the detection of IRF, SRF, and PED. The area under the curve for detection, sensitivity, and specificity was 0.97, 0.95, and 0.99, respectively. The correlation coefficients for the fluid volumes were 0.99, 0.99, and 0.91, respectively. The Dice scores were 0.73, 0.67, and 0.82, respectively; for the largest volume quartiles the Dice scores were >0.90. Including retinal layer segmentation contributed positively to the performance. The repeatability of volume prediction showed standard deviations of 4.0 nL, 3.5 nL, and 20.0 nL for IRF, SRF, and PED, respectively. Conclusions The deep-learning algorithm can simultaneously achieve a high level of performance for the identification and volume measurement of IRF, SRF, and PED in nAMD, providing accurate and repeatable predictions. Including layer segmentation during training and a squeeze-excite block in the network architecture were shown to boost performance.
Translational Relevance Potential applications include measurements of specific fluid compartments with high reproducibility, assistance in treatment decisions, and the diagnostic or scientific evaluation of relevant subgroups.
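The fluid volumes above are reported in nanolitres; converting a segmented voxel count to nL is a unit exercise, since 1 mm³ equals 1 microlitre (1000 nL). A minimal sketch (the voxel spacing below is hypothetical, chosen for illustration rather than taken from the paper):

```python
def lesion_volume_nl(n_voxels, dx_mm, dy_mm, dz_mm):
    """Convert a segmented voxel count to nanolitres:
    volume = count * voxel volume (mm^3); 1 mm^3 == 1 uL == 1000 nL."""
    return n_voxels * dx_mm * dy_mm * dz_mm * 1000.0

# Hypothetical OCT voxel spacing: 12 x 12 micron laterally, 4 micron axially.
print(round(lesion_volume_nl(10000, 0.012, 0.012, 0.004), 3))  # 5.76
```

Repeatability standard deviations like the 4.0 nL figure above would then be computed over such per-scan volumes from the five same-day acquisitions.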
Affiliation(s)
- Irmela Mantel
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Ciara Bergin
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Maria Sole Polito
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jacopo Guidotti
- Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
35
Liefers B, Taylor P, Alsaedi A, Bailey C, Balaskas K, Dhingra N, Egan CA, Rodrigues FG, Gonzalo CG, Heeren TF, Lotery A, Müller PL, Olvera-Barrios A, Paul B, Schwartz R, Thomas DS, Warwick AN, Tufail A, Sánchez CI. Quantification of Key Retinal Features in Early and Late Age-Related Macular Degeneration Using Deep Learning. Am J Ophthalmol 2021; 226:1-12. [PMID: 33422464 DOI: 10.1016/j.ajo.2020.12.034] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 12/28/2020] [Accepted: 12/28/2020] [Indexed: 02/01/2023]
Abstract
PURPOSE We sought to develop and validate a deep learning model for segmentation of 13 features associated with neovascular and atrophic age-related macular degeneration (AMD). DESIGN Development and validation of a deep-learning model for feature segmentation. METHODS Data for model development were obtained from 307 optical coherence tomography volumes. Eight experienced graders manually delineated all abnormalities in 2712 B-scans. A deep neural network was trained with these data to perform voxel-level segmentation of the 13 most common abnormalities (features). For evaluation, 112 B-scans from 112 patients with a diagnosis of neovascular AMD were annotated by 4 independent observers. The main outcome measures were Dice score, intraclass correlation coefficient, and free-response receiver operating characteristic curve. RESULTS On 11 of 13 features, the model obtained a mean Dice score of 0.63 ± 0.15, compared with 0.61 ± 0.17 for the observers. The mean intraclass correlation coefficient for the model was 0.66 ± 0.22, compared with 0.62 ± 0.21 for the observers. Two features were not evaluated quantitatively because of a lack of data. Free-response receiver operating characteristic analysis demonstrated that the model scored similar or higher sensitivity per false positives compared with the observers. CONCLUSIONS The quality of the automatic segmentation matches that of experienced graders for most features, exceeding human performance for some features. The quantified parameters provided by the model can be used in the current clinical routine and open possibilities for further research into treatment response outside clinical trials.
36
Elsawy A, Eleiwa T, Chase C, Ozcan E, Tolba M, Feuer W, Abdel-Mottaleb M, Abou Shousha M. Multidisease Deep Learning Neural Network for the Diagnosis of Corneal Diseases. Am J Ophthalmol 2021; 226:252-261. [PMID: 33529589 DOI: 10.1016/j.ajo.2021.01.018] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Revised: 01/19/2021] [Accepted: 01/25/2021] [Indexed: 01/12/2023]
Abstract
PURPOSE To report a multidisease deep learning diagnostic network (MDDN) of common corneal diseases: dry eye syndrome (DES), Fuchs endothelial dystrophy (FED), and keratoconus (KCN) using anterior segment optical coherence tomography (AS-OCT) images. STUDY DESIGN Development of a deep learning neural network diagnosis algorithm. METHODS A total of 158,220 AS-OCT images from 879 eyes of 478 subjects were used to develop and validate a classification deep network. After a quality check, the network was trained and validated using 134,460 images. We tested the network using a test set of consecutive patients involving 23,760 AS-OCT images of 132 eyes of 69 patients. The area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), and the F1 score, with 95% confidence intervals (CIs), were computed. RESULTS The MDDN achieved eye-level AUROCs >0.99 (95% CI: 0.90, 1.0), AUPRCs >0.96 (95% CI: 0.90, 1.0), and F1 scores >0.90 (95% CI: 0.81, 1.0) for DES, FED, and KCN, respectively. CONCLUSIONS MDDN is a novel diagnostic tool for corneal diseases that can be used to automatically diagnose KCN, FED, and DES using only AS-OCT images.
Affiliation(s)
- Amr Elsawy: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami; Electrical and Computer Engineering, University of Miami, Coral Gables
- Taher Eleiwa: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami; Department of Ophthalmology, Faculty of Medicine, Benha University, Egypt
- Collin Chase: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami
- Eyup Ozcan: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami; Net Eye Medical Center, Gaziantep, Turkey
- Mohamed Tolba: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami
- William Feuer: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami
- Mohamed Abou Shousha: Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami; Electrical and Computer Engineering, University of Miami, Coral Gables; Biomedical Engineering, University of Miami, Coral Gables, Florida, USA.
37
Sappa LB, Okuwobi IP, Li M, Zhang Y, Xie S, Yuan S, Chen Q. RetFluidNet: Retinal Fluid Segmentation for SD-OCT Images Using Convolutional Neural Network. J Digit Imaging 2021; 34:691-704. [PMID: 34080105 PMCID: PMC8329142 DOI: 10.1007/s10278-021-00459-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Revised: 12/03/2020] [Accepted: 04/29/2021] [Indexed: 11/25/2022] Open
Abstract
Age-related macular degeneration (AMD) is one of the leading causes of irreversible blindness and is characterized by fluid-related accumulations such as intra-retinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). Spectral-domain optical coherence tomography (SD-OCT) is the primary modality used to diagnose AMD, yet it lacks algorithms that directly detect and quantify fluid. This work presents an improved convolutional neural network (CNN)-based architecture called RetFluidNet to segment three types of fluid abnormalities from SD-OCT images. The model assimilates different skip-connect operations and atrous spatial pyramid pooling (ASPP) to integrate multi-scale contextual information, thus achieving the best performance. This work also distinguishes consequential from comparatively inconsequential hyperparameters and skip-connect techniques for fluid segmentation from SD-OCT images, indicating a starting choice for future related research. RetFluidNet was trained and tested on SD-OCT images from 124 patients and achieved an accuracy of 80.05%, 92.74%, and 95.53% for IRF, PED, and SRF, respectively. RetFluidNet showed significant improvement over competing methods, with accuracy and time efficiency suitable for clinical application. RetFluidNet is a fully automated method that can support early detection and follow-up of AMD.
Affiliation(s)
- Loza Bekalo Sappa: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Idowu Paul Okuwobi: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Mingchao Li: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Yuhan Zhang: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Sha Xie: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital With Nanjing Medical University, 300 Guangzhou Road, Nanjing, 210029, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei, Nanjing, 210094, China.
38
Soltanian-Zadeh S, Kurokawa K, Liu Z, Zhang F, Saeedi O, Hammer DX, Miller DT, Farsiu S. Weakly supervised individual ganglion cell segmentation from adaptive optics OCT images for glaucomatous damage assessment. OPTICA 2021; 8:642-651. [PMID: 35174258 PMCID: PMC8846574 DOI: 10.1364/optica.418274] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Cell-level quantitative features of retinal ganglion cells (GCs) are potentially important biomarkers for improved diagnosis and treatment monitoring of neurodegenerative diseases such as glaucoma, Parkinson's disease, and Alzheimer's disease. Yet, due to limited resolution, individual GCs cannot be visualized by commonly used ophthalmic imaging systems, including optical coherence tomography (OCT), and assessment is limited to gross layer thickness analysis. Adaptive optics OCT (AO-OCT) enables in vivo imaging of individual retinal GCs. We present an automated segmentation of GC layer (GCL) somas from AO-OCT volumes based on weakly supervised deep learning (named WeakGCSeg), which effectively utilizes weak annotations in the training process. Experimental results show that WeakGCSeg is on par with or superior to human experts and is superior to other state-of-the-art networks. The automated quantitative features of individual GCs show an increase in structure-function correlation in glaucoma subjects compared with using thickness measures from OCT images. Our results suggest that, by automatic quantification of GC morphology, WeakGCSeg can potentially alleviate a major bottleneck in using AO-OCT for vision research.
Affiliation(s)
- Kazuhiro Kurokawa: School of Optometry, Indiana University, Bloomington, Indiana 47405, USA
- Zhuolin Liu: Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Furu Zhang: Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Osamah Saeedi: Department of Ophthalmology and Visual Sciences, University of Maryland Medical Center, Baltimore, Maryland 21201, USA
- Daniel X. Hammer: Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Donald T. Miller: School of Optometry, Indiana University, Bloomington, Indiana 47405, USA
- Sina Farsiu (corresponding author): Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina 27710, USA
39
Luo Y, Xu Q, Jin R, Wu M, Liu L. Automatic detection of retinopathy with optical coherence tomography images via a semi-supervised deep learning method. BIOMEDICAL OPTICS EXPRESS 2021; 12:2684-2702. [PMID: 34123497 PMCID: PMC8176801 DOI: 10.1364/boe.418364] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/27/2021] [Accepted: 04/02/2021] [Indexed: 05/03/2023]
Abstract
Automatic detection of retinopathy via computer vision techniques is of great importance for clinical applications. However, traditional deep-learning-based methods in computer vision require a large amount of labeled data, which are expensive and may not be available in clinical applications. To mitigate this issue, in this paper, we propose a semi-supervised deep learning method built upon pre-trained VGG-16 and virtual adversarial training (VAT) for the detection of retinopathy with optical coherence tomography (OCT) images. It requires only a few labeled OCT images and a number of unlabeled ones for model training. In experiments, we evaluated the proposed method on two popular datasets. With only 80 labeled OCT images, the proposed method achieves classification accuracies of 0.942 and 0.936, sensitivities of 0.942 and 0.936, specificities of 0.971 and 0.979, and AUCs (areas under the ROC curve) of 0.997 and 0.993 on the two datasets, respectively. When compared with human experts, it achieves expert-level performance with 80 labeled OCT images and outperforms four out of six experts with 200 labeled OCT images. Furthermore, we also adopt the Gradient Class Activation Map (Grad-CAM) method to visualize the key regions that the proposed method focuses on when making predictions. It shows that the proposed method can accurately recognize the key patterns of the input OCT images when predicting retinopathy.
Affiliation(s)
- Yuemei Luo: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Qing Xu: Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
- Ruibing Jin: Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
- Min Wu: Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore, 138632, Singapore
- Linbo Liu: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, 639798, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, 637459, Singapore
40
Wang C, Gan M. Tissue self-attention network for the segmentation of optical coherence tomography images on the esophagus. BIOMEDICAL OPTICS EXPRESS 2021; 12:2631-2646. [PMID: 34123493 PMCID: PMC8176794 DOI: 10.1364/boe.419809] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 04/01/2021] [Accepted: 04/01/2021] [Indexed: 05/06/2023]
Abstract
Automatic segmentation of layered tissue is the key to esophageal optical coherence tomography (OCT) image processing. With the advent of deep learning techniques, frameworks based on fully convolutional networks have proved effective in classifying pixels in images. However, due to speckle noise and unfavorable imaging conditions, the esophageal tissue relevant to the diagnosis is not always easy to identify. An effective approach to address this problem is extracting more powerful feature maps, which have similar expressions for pixels in the same tissue and show discriminability from those of different tissues. In this study, we propose a novel framework, called the tissue self-attention network (TSA-Net), which introduces the self-attention mechanism for esophageal OCT image segmentation. The self-attention module in the network is able to capture long-range context dependencies from the image and analyzes the input image in a global view, which helps to cluster pixels in the same tissue and reveal differences between layers, thus achieving more powerful feature maps for segmentation. Experiments visually illustrate the effectiveness of the self-attention map, and its advantages over other deep networks are also discussed.
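The self-attention mechanism TSA-Net builds on can be sketched generically: each position attends to every other position via scaled dot products, so pixels with similar features reinforce each other. This toy sketch uses the features themselves as queries, keys, and values (real networks use learned projections; this is not the TSA-Net implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(feats):
    """feats: list of N per-pixel feature vectors of length d.
    Returns context-aware features: softmax(Q K^T / sqrt(d)) V with Q=K=V=feats."""
    d = len(feats[0])
    out = []
    for q in feats:
        # similarity of this position to every position
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in feats]
        w = softmax(scores)
        # weighted sum of value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, feats)) for j in range(d)])
    return out

# Two similar "tissue A" pixels and one "tissue B" pixel (toy values)
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
ctx = self_attention(feats)
```

In the output, the two similar pixels end up with similar context vectors while the dissimilar one stays apart, mirroring how the attention map clusters pixels of the same tissue layer.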
Affiliation(s)
- Cong Wang: Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Meng Gan: Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
41
Keenan TDL, Chakravarthy U, Loewenstein A, Chew EY, Schmidt-Erfurth U. Automated Quantitative Assessment of Retinal Fluid Volumes as Important Biomarkers in Neovascular Age-Related Macular Degeneration. Am J Ophthalmol 2021; 224:267-281. [PMID: 33359681 PMCID: PMC8058226 DOI: 10.1016/j.ajo.2020.12.012] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 12/09/2020] [Accepted: 12/10/2020] [Indexed: 02/07/2023]
Abstract
PURPOSE To evaluate retinal fluid volume data extracted from optical coherence tomography (OCT) scans by artificial intelligence algorithms in the treatment of neovascular age-related macular degeneration (NV-AMD). DESIGN Perspective. METHODS A review was performed of retinal image repository datasets from diverse clinical settings. SETTINGS Clinical trial (HARBOR) and trial follow-on (Age-Related Eye Disease Study 2 10-year Follow-On); real-world (Belfast and Tel-Aviv tertiary centers). PATIENTS 24,362 scans of 1,095 eyes (HARBOR); 4,673 of 880 (Belfast); 1,470 of 132 (Tel-Aviv); 511 of 511 (Age-Related Eye Disease Study 2 10-year Follow-On). OBSERVATION PROCEDURES Vienna Fluid Monitor or Notal OCT Analyzer applied to macular cube scans. OUTCOME MEASURES Intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) volumes. RESULTS The fluid volumes measured in neovascular AMD were expressed efficiently in nanoliters. Large ranges that differed by population were observed at the treatment-naïve stage: 0-3,435 nL (IRF), 0-5,018 nL (SRF), and 0-10,022 nL (PED). Mean volumes decreased rapidly and consistently with anti-vascular endothelial growth factor therapy. During maintenance therapy, mean IRF volumes were highest in Tel-Aviv (100 nL), lower in Belfast and HARBOR-Pro Re Nata, and lowest in HARBOR-monthly (21 nL). Mean SRF volumes were low in all: 30 nL (HARBOR-monthly) and 48-49 nL (others). CONCLUSIONS Quantitative measures of IRF, SRF, and PED are important biomarkers in NV-AMD. Accurate volumes can be extracted efficiently from OCT scans by artificial intelligence algorithms to guide the treatment of exudative macular diseases. Automated fluid monitoring identifies fluid characteristics in different NV-AMD populations at baseline and during follow-up. For consistency between studies, we propose the nanoliter as a convenient unit. We explore the advantages of using these quantitative metrics in clinical practice and research.
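The nanoliter unit proposed here follows directly from voxel geometry: a segmented voxel count times the voxel size in mm³, converted via 1 mm³ = 1 µL = 1000 nL. A sketch with hypothetical scan dimensions (illustrative values, not the specs of any device used in the study):

```python
def fluid_volume_nl(voxel_count, dx_mm, dy_mm, dz_mm):
    """Convert a segmented fluid voxel count to nanoliters.
    1 mm^3 = 1 microliter = 1000 nL."""
    voxel_mm3 = dx_mm * dy_mm * dz_mm
    return voxel_count * voxel_mm3 * 1000.0

# Hypothetical cube-scan geometry: 512 x 128 A-scans over a 6 x 6 mm area,
# ~1.96 um axial sampling (toy numbers for illustration only).
dx, dy, dz = 6.0 / 512, 6.0 / 128, 0.00196
vol_nl = fluid_volume_nl(50_000, dx, dy, dz)  # ~54 nL with these dimensions
```

Expressing fluid in nL rather than raw voxel counts makes volumes comparable across devices with different sampling densities, which is the point of the proposed convention.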
Affiliation(s)
- Tiarnan D L Keenan: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland, USA
- Usha Chakravarthy: Centre for Experimental Medicine, Dentistry and Biomedical Sciences, Queen's University of Belfast, Belfast, United Kingdom
- Anat Loewenstein: Tel Aviv Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Emily Y Chew: Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland, USA
- Ursula Schmidt-Erfurth: Department of Ophthalmology and Optometry, Christian Doppler Laboratory for Ophthalmic Image Analyses (OPTIMA), Medical University of Vienna, Vienna, Austria
42
Yang R, Yu Y. Artificial Convolutional Neural Network in Object Detection and Semantic Segmentation for Medical Imaging Analysis. Front Oncol 2021; 11:638182. [PMID: 33768000 PMCID: PMC7986719 DOI: 10.3389/fonc.2021.638182] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2020] [Accepted: 02/11/2021] [Indexed: 12/18/2022] Open
Abstract
In the era of digital medicine, a vast number of medical images are produced every day, creating great demand for intelligent equipment that assists medical doctors across different disciplines with adjuvant diagnosis. With the development of artificial intelligence, convolutional neural network (CNN) algorithms have progressed rapidly. CNN and its extension algorithms play important roles in medical imaging classification, object detection, and semantic segmentation. While medical imaging classification has been widely reported, object detection and semantic segmentation in imaging are rarely described. In this review article, we introduce the progression of object detection and semantic segmentation in medical imaging studies and discuss how to accurately define the location and boundary of diseases.
Affiliation(s)
- Yingyan Yu: Department of General Surgery of Ruijin Hospital, Shanghai Institute of Digestive Surgery and Shanghai Key Laboratory for Gastric Neoplasms, Shanghai Jiao Tong University School of Medicine, Shanghai, China
43
Müller PL, Liefers B, Treis T, Rodrigues FG, Olvera-Barrios A, Paul B, Dhingra N, Lotery A, Bailey C, Taylor P, Sánchez CI, Tufail A. Reliability of Retinal Pathology Quantification in Age-Related Macular Degeneration: Implications for Clinical Trials and Machine Learning Applications. Transl Vis Sci Technol 2021; 10:4. [PMID: 34003938 PMCID: PMC7938003 DOI: 10.1167/tvst.10.3.4] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 12/22/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose To investigate the interreader agreement for grading of retinal alterations in age-related macular degeneration (AMD) using a reading center setting. Methods In this cross-sectional case series, spectral-domain optical coherence tomography (OCT; Topcon 3D OCT, Tokyo, Japan) scans of 112 eyes of 112 patients with neovascular AMD (56 treatment naive, 56 after three anti-vascular endothelial growth factor injections) were analyzed by four independent readers. Imaging features specific for AMD were annotated using a novel custom-built annotation platform. Dice score, Bland-Altman plots, coefficients of repeatability, coefficients of variation, and intraclass correlation coefficients were assessed. Results Loss of ellipsoid zone, pigment epithelium detachment, subretinal fluid, and drusen were the most abundant features in our cohort. Subretinal fluid, intraretinal fluid, hypertransmission, descent of the outer plexiform layer, and pigment epithelium detachment showed highest interreader agreement, while detection and measures of loss of ellipsoid zone and retinal pigment epithelium were more variable. The agreement on the size and location of the respective annotation was more consistent throughout all features. Conclusions The interreader agreement depended on the respective OCT-based feature. A selection of reliable features might provide suitable surrogate markers for disease progression and possible treatment effects focusing on different disease stages. Translational Relevance This might give opportunities for a more time- and cost-effective patient assessment and improved decision making as well as have implications for clinical trials and training machine learning algorithms.
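Several of the agreement statistics reported in this study are simple to compute from paired reader measurements. A sketch of the Bland-Altman bias, 95% limits of agreement, and a coefficient of repeatability taken as 1.96 × SD of the differences (conventions vary; toy numbers, not the study's data):

```python
import statistics

def bland_altman(reader1, reader2):
    """Bias (mean difference), 95% limits of agreement, and coefficient of
    repeatability (here 1.96 * SD of differences) for paired measurements."""
    diffs = [a - b for a, b in zip(reader1, reader2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, loa, 1.96 * sd

# Toy paired feature measurements from two readers
r1 = [102, 250, 75, 310, 180]
r2 = [ 98, 262, 70, 301, 185]
bias, loa, cor = bland_altman(r1, r2)
```

A bias near zero with narrow limits of agreement corresponds to the high interreader agreement reported for features such as subretinal fluid, while variable features like ellipsoid zone loss would show wider limits.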
Affiliation(s)
- Philipp L. Müller: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK; Department of Ophthalmology, University of Bonn, Bonn, Germany
- Bart Liefers: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands
- Tim Treis: BioQuant, University of Heidelberg, Heidelberg, Germany
- Filipa Gomes Rodrigues: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Abraham Olvera-Barrios: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Bobby Paul: Barking, Havering and Redbridge University Hospitals NHS Trust, Romford, UK
- Andrew Lotery: University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Clare Bailey: University Hospitals Bristol NHS Foundation Trust, Bristol, UK
- Paul Taylor: Institute of Health Informatics, University College London, London, UK
- Clarisa I. Sánchez: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, The Netherlands; Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- Adnan Tufail: Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
44
Ehlers JP, Clark J, Uchida A, Figueiredo N, Babiuch A, Talcott KE, Lunasco L, Le TK, Meng X, Hu M, Reese J, Srivastava SK. Longitudinal Higher-Order OCT Assessment of Quantitative Fluid Dynamics and the Total Retinal Fluid Index in Neovascular AMD. Transl Vis Sci Technol 2021; 10:29. [PMID: 34003963 PMCID: PMC7995350 DOI: 10.1167/tvst.10.3.29] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2020] [Accepted: 01/17/2021] [Indexed: 11/24/2022] Open
Abstract
Purpose The purpose of this study was to evaluate the feasibility of assessing quantitative longitudinal fluid dynamics and total retinal fluid indices (TRFIs) with higher-order optical coherence tomography (OCT) for neovascular age-related macular degeneration (nAMD). Methods A post hoc image analysis study was performed using the phase II OSPREY clinical trial comparing brolucizumab and aflibercept in nAMD. Higher-order OCT analysis using a machine learning-enabled fluid feature extraction platform was used to segment intraretinal fluid (IRF) and subretinal fluid (SRF) volumetric components. TRFI, the proportion of fluid volume against total retinal volume, was calculated. Longitudinal fluid metrics were evaluated for the following groups: all subjects (i.e., treatment agnostic), brolucizumab, and aflibercept. Results Mean IRF and SRF volumes were significantly reduced from baseline at each timepoint for all groups. Fluid feature extraction allowed high-resolution assessment of quantitative fluid burden. A greater proportion of brolucizumab participants achieved true zero and minimal fluid (total fluid volume between 0.0001 and 0.001 mm3) versus aflibercept participants at week 40. True zero fluid during q12 brolucizumab dosing was achieved in 36.6% to 38.5% of participants, similar to the 25.6% to 38.5% during the corresponding q8 aflibercept cycles. TRFI was significantly reduced from baseline in all groups. Conclusions Higher-order OCT analysis demonstrates the feasibility of fluid feature extraction and longitudinal characterization of volumetric fluid burden and TRFI in nAMD, supporting a unique opportunity for assessing fluid burden and its impact on outcomes. Translational Relevance Detection and characterization of disease activity is vital for optimal treatment of nAMD. Longitudinal assessment of fluid dynamics and the TRFI provides important proof of concept for future automated tools in characterizing disease activity.
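The TRFI defined here, the proportion of fluid volume against total retinal volume, reduces to a single ratio. A minimal sketch with toy volumes (not the trial's data):

```python
def total_retinal_fluid_index(irf_mm3, srf_mm3, retinal_volume_mm3):
    """TRFI: total segmented fluid volume as a proportion of total retinal volume."""
    if retinal_volume_mm3 <= 0:
        raise ValueError("retinal volume must be positive")
    return (irf_mm3 + srf_mm3) / retinal_volume_mm3

# Toy example: 0.02 mm3 IRF + 0.03 mm3 SRF within a 10 mm3 retinal volume
trfi = total_retinal_fluid_index(0.02, 0.03, 10.0)  # 0.005, i.e. 0.5%
```

Because it is a proportion rather than an absolute volume, the TRFI normalizes fluid burden across eyes and scan areas of different sizes, which is what makes it useful as a longitudinal metric.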
Affiliation(s)
- Justis P. Ehlers: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Julie Clark: Formerly Novartis Pharmaceuticals, East Hanover, NJ, USA
- Atsuro Uchida: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Natalia Figueiredo: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Amy Babiuch: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Katherine E. Talcott: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Leina Lunasco: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Thuy K. Le: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Ming Hu: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH, USA
- Jamie Reese: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Sunil K. Srivastava: The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cleveland Clinic, Cleveland, OH, USA; Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
45
Cao J, You K, Jin K, Lou L, Wang Y, Chen M, Pan X, Shao J, Su Z, Wu J, Ye J. Prediction of response to anti-vascular endothelial growth factor treatment in diabetic macular oedema using an optical coherence tomography-based machine learning method. Acta Ophthalmol 2021; 99:e19-e27. [PMID: 32573116 DOI: 10.1111/aos.14514] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2020] [Accepted: 05/24/2020] [Indexed: 12/24/2022]
Abstract
PURPOSE To predict the anti-vascular endothelial growth factor (VEGF) therapeutic response of diabetic macular oedema (DME) patients from optical coherence tomography (OCT) at the initiation stage of treatment using a machine learning-based self-explainable system. METHODS A total of 712 DME patients were included and classified into poor and good responder groups according to central macular thickness decrease after three consecutive injections. Machine learning models were constructed to make predictions based on related features extracted automatically using deep learning algorithms from OCT scans at baseline. Five-fold cross-validation was applied to optimize and evaluate the models. The model with the best performance was then compared with two ophthalmologists. Feature importance was further investigated, and a Wilcoxon rank-sum test was performed to assess the difference of a single feature between two groups. RESULTS Of 712 patients, 294 were poor responders and 418 were good responders. The best performance for the prediction task was achieved by random forest (RF), with sensitivity, specificity and area under the receiver operating characteristic curve of 0.900, 0.851 and 0.923. Ophthalmologist 1 and ophthalmologist 2 reached sensitivity of 0.775 and 0.750, and specificity of 0.716 and 0.821, respectively. The sum of hyperreflective dots was found to be the most relevant feature for prediction. CONCLUSION An RF classifier was constructed to predict the treatment response of anti-VEGF from OCT images of DME patients with high accuracy. The algorithm contributes to predicting treatment requirements in advance and provides an optimal individualized therapeutic regimen.
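The five-fold cross-validation used to optimize and evaluate the models can be sketched as an index-splitting helper; the classifier itself is omitted and the numbers are illustrative only:

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation
    over n samples, after a seeded shuffle."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# 712 patients as in the study; each patient appears in exactly one validation fold
for train, val in k_fold_indices(712, k=5):
    pass  # fit the classifier on `train`, evaluate on `val`
```

Each sample is validated exactly once across the k iterations, so the averaged fold metrics estimate out-of-sample performance without a separate holdout set.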
Affiliation(s)
- Jing Cao: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Kun You: Hangzhou Truth Medical Technology Ltd, Hangzhou, China
- Kai Jin: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lixia Lou: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Yao Wang: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Menglu Chen: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Xiangji Pan: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Ji Shao: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Zhaoan Su: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Jian Wu: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Juan Ye: Department of Ophthalmology, College of Medicine, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
46
Keenan TDL, Clemons TE, Domalpally A, Elman MJ, Havilio M, Agrón E, Benyamini G, Chew EY. Retinal Specialist versus Artificial Intelligence Detection of Retinal Fluid from OCT: Age-Related Eye Disease Study 2: 10-Year Follow-On Study. Ophthalmology 2021; 128:100-109. [PMID: 32598950 PMCID: PMC8371700 DOI: 10.1016/j.ophtha.2020.06.038] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Revised: 06/16/2020] [Accepted: 06/18/2020] [Indexed: 01/14/2023] Open
Abstract
PURPOSE To evaluate the performance of retinal specialists in detecting retinal fluid presence in spectral domain OCT (SD-OCT) scans from eyes with age-related macular degeneration (AMD) and compare performance with an artificial intelligence algorithm. DESIGN Prospective comparison of retinal fluid grades from human retinal specialists and the Notal OCT Analyzer (NOA) on SD-OCT scans from 2 common devices. PARTICIPANTS A total of 1127 eyes of 651 Age-Related Eye Disease Study 2 10-year Follow-On Study (AREDS2-10Y) participants with SD-OCT scans graded by reading center graders (as the ground truth). METHODS The AREDS2-10Y investigators graded each SD-OCT scan for the presence/absence of intraretinal and subretinal fluid. Separately, the same scans were graded by the NOA. MAIN OUTCOME MEASURES Accuracy (primary), sensitivity, specificity, precision, and F1-score. RESULTS Of the 1127 eyes, retinal fluid was present in 32.8%. For detecting retinal fluid, the investigators had an accuracy of 0.805 (95% confidence interval [CI], 0.780-0.828), a sensitivity of 0.468 (95% CI, 0.416-0.520), and a specificity of 0.970 (95% CI, 0.955-0.981); the NOA metrics were 0.851 (95% CI, 0.829-0.871), 0.822 (95% CI, 0.779-0.859), and 0.865 (95% CI, 0.839-0.889), respectively. For detecting intraretinal fluid, the investigator metrics were 0.815 (95% CI, 0.792-0.837), 0.403 (95% CI, 0.349-0.459), and 0.978 (95% CI, 0.966-0.987); the NOA metrics were 0.877 (95% CI, 0.857-0.896), 0.763 (95% CI, 0.713-0.808), and 0.922 (95% CI, 0.902-0.940), respectively. For detecting subretinal fluid, the investigator metrics were 0.946 (95% CI, 0.931-0.958), 0.583 (95% CI, 0.471-0.690), and 0.973 (95% CI, 0.962-0.982); the NOA metrics were 0.863 (95% CI, 0.842-0.882), 0.940 (95% CI, 0.867-0.980), and 0.857 (95% CI, 0.835-0.877), respectively.
CONCLUSIONS In this large and challenging sample of SD-OCT scans obtained with 2 common devices, retinal specialists had imperfect accuracy and low sensitivity in detecting retinal fluid. This was particularly true for intraretinal fluid and difficult cases (with lower fluid volumes appearing on fewer B-scans). Artificial intelligence-based detection achieved a higher level of accuracy. This software tool could assist physicians in detecting retinal fluid, which is important for diagnostic, re-treatment, and prognostic tasks.
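The accuracy, sensitivity, and specificity figures reported above follow the standard confusion-matrix definitions. As a minimal sketch with hypothetical counts (illustrative only, not the AREDS2-10Y data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard binary detection metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall: fluid-positive eyes correctly flagged
    specificity = tn / (tn + fp)   # fluid-free eyes correctly cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Hypothetical counts for illustration only:
acc, sens, spec, prec, f1 = detection_metrics(tp=173, fp=33, tn=734, fn=187)
print(round(acc, 3), round(sens, 3), round(spec, 3))
```

Note how a grader can combine high specificity with low sensitivity, as the investigators did here: conservative grading clears almost all fluid-free eyes while missing many true fluid cases.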
Affiliation(s)
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland.
- Amitha Domalpally
- Fundus Photograph Reading Center, University of Wisconsin, Madison, Wisconsin
- Elvira Agrón
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
47
Song Z, Xu L, Wang J, Rasti R, Sastry A, Li JD, Raynor W, Izatt JA, Toth CA, Vajzovic L, Deng B, Farsiu S. Lightweight Learning-Based Automatic Segmentation of Subretinal Blebs on Microscope-Integrated Optical Coherence Tomography Images. Am J Ophthalmol 2021; 221:154-168. [PMID: 32707207 PMCID: PMC8120705 DOI: 10.1016/j.ajo.2020.07.020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Revised: 07/08/2020] [Accepted: 07/09/2020] [Indexed: 10/23/2022]
Abstract
PURPOSE Subretinal injections of therapeutics are commonly used to treat ocular diseases. Accurate dosing of therapeutics at target locations is crucial but difficult to achieve using subretinal injections due to leakage, and there is no method available to measure the volume of therapeutics successfully administered to the subretinal location during surgery. Here, we introduce the first automatic method for quantifying the volume of subretinal blebs, using porcine eyes injected with Ringer's lactate solution as samples. DESIGN Ex vivo animal study. METHODS Microscope-integrated optical coherence tomography was used to obtain 3D visualization of subretinal blebs in porcine eyes at Duke Eye Center. Two different injection phases were imaged and analyzed in 15 eyes (30 volumes), selected from a total of 37 eyes. The inclusion/exclusion criteria were set independently of the algorithm-development and testing team. A novel lightweight, deep learning-based algorithm was designed to segment subretinal bleb boundaries. A cross-validation method was used to avoid selection bias. An ensemble-classifier strategy was applied to generate final results for the test dataset. RESULTS The algorithm performs notably better than 4 other state-of-the-art deep learning-based segmentation methods, achieving an F1 score of 93.86 ± 1.17% and 96.90 ± 0.59% on the independent test data for entry and full blebs, respectively. CONCLUSION The proposed algorithm accurately segmented the volumetric boundaries of Ringer's lactate solution delivered into the subretinal space of porcine eyes with robust performance and real-time speed. This is the first step for future applications in computer-guided delivery of therapeutics into the subretinal space in human subjects.
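The F1 score used above to grade segmentation overlap is, on binary pixel masks, equivalent to the Dice similarity coefficient. A minimal sketch on toy masks (illustrative, not the study's images):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists):
    2|A ∩ B| / (|A| + |B|), equivalent to the pixel-wise F1 score."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * intersection / total if total else 1.0  # two empty masks agree

# Toy 1-D masks standing in for flattened B-scan segmentations:
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*3 / (4 + 4) = 0.75
```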
Affiliation(s)
- Zhenxi Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China; Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Liangyu Xu
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Reza Rasti
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Ananth Sastry
- Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Jianwei D Li
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- William Raynor
- Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Joseph A Izatt
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Cynthia A Toth
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Lejla Vajzovic
- Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
- Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, North Carolina, USA
48
Sarhan MH, Nasseri MA, Zapp D, Maier M, Lohmann CP, Navab N, Eslami A. Machine Learning Techniques for Ophthalmic Data Processing: A Review. IEEE J Biomed Health Inform 2020; 24:3338-3350. [PMID: 32750971 DOI: 10.1109/jbhi.2020.3012134] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey, namely diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, recent machine learning approaches for retinal vessel segmentation and methods for retinal layer and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey, namely color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included. Finally, the authors provide their views on and expectations for these techniques in clinical practice, along with their limitations.
49
Loo J, Cai CX, Choong J, Chew EY, Friedlander M, Jaffe GJ, Farsiu S. Deep learning-based classification and segmentation of retinal cavitations on optical coherence tomography images of macular telangiectasia type 2. Br J Ophthalmol 2020; 106:396-402. [PMID: 33229343 DOI: 10.1136/bjophthalmol-2020-317131] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 10/15/2020] [Accepted: 10/30/2020] [Indexed: 11/04/2022]
Abstract
AIM To develop a fully automatic algorithm to segment retinal cavitations on optical coherence tomography (OCT) images of macular telangiectasia type 2 (MacTel2). METHODS The dataset consisted of 99 eyes from 67 participants enrolled in an international, multicentre, phase 2 MacTel2 clinical trial (NCT01949324). Each eye was imaged with spectral-domain OCT at three time points over 2 years. Retinal cavitations were manually segmented by a trained Reader and the retinal cavitation volume was calculated. Two convolutional neural networks (CNNs) were developed that operated in sequential stages. In the first stage, CNN1 classified whether a B-scan contained any retinal cavitations. In the second stage, CNN2 segmented the retinal cavitations in a B-scan. We evaluated the performance of the proposed method against alternative methods using several performance metrics and manual segmentations as the gold standard. RESULTS The proposed method was computationally efficient and accurately classified and segmented retinal cavitations on OCT images, with a sensitivity of 0.94, specificity of 0.80 and average Dice similarity coefficient of 0.94±0.07 across all time points. The proposed method produced measurements that were highly correlated with the manual measurements of retinal cavitation volume and change in retinal cavitation volume over time. CONCLUSION The proposed method will be useful to help clinicians quantify retinal cavitations, assess changes over time and further investigate the clinical significance of these early structural changes observed in MacTel2.
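The sequential two-stage design described above (CNN1 gates each B-scan, CNN2 segments only the positives) can be sketched as follows; `classify_bscan` and `segment_bscan` are hypothetical stand-ins for the trained networks, not the paper's models:

```python
def cascade_segment(bscans, classify_bscan, segment_bscan):
    """Stage 1: gate each B-scan with a binary classifier (CNN1's role).
    Stage 2: segment cavitations only on B-scans flagged positive (CNN2's role).
    Returns one mask per B-scan; an empty mask when the gate finds no cavitation."""
    results = []
    for scan in bscans:
        if classify_bscan(scan):                 # stage 1: any cavitation present?
            results.append(segment_bscan(scan))  # stage 2: delineate it
        else:
            results.append([0] * len(scan))      # skip the costlier segmenter
    return results

# Toy stand-ins: a "cavitation" is any pixel above an intensity threshold.
has_cavity = lambda scan: max(scan) > 5
mark_cavity = lambda scan: [1 if px > 5 else 0 for px in scan]

masks = cascade_segment([[1, 2, 9], [1, 1, 1]], has_cavity, mark_cavity)
print(masks)  # [[0, 0, 1], [0, 0, 0]]
```

The gating stage is what makes such pipelines computationally efficient: most B-scans in a volume contain no lesion, so the segmenter runs only where it is needed.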
Affiliation(s)
- Jessica Loo
- Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Cindy X Cai
- Ophthalmology, Duke Medicine, Durham, North Carolina, USA
- John Choong
- Ophthalmology, Duke Medicine, Durham, North Carolina, USA
- Emily Y Chew
- Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland, USA
- Martin Friedlander
- The Lowy Medical Research Institute, La Jolla, California, USA; Molecular Medicine, The Scripps Research Institute, La Jolla, California, USA
- Glenn J Jaffe
- Ophthalmology, Duke Medicine, Durham, North Carolina, USA
- Sina Farsiu
- Biomedical Engineering, Duke University, Durham, North Carolina, USA; Ophthalmology, Duke Medicine, Durham, North Carolina, USA
50
Zhong P, Wang J, Guo Y, Fu X, Wang R. Multiclass retinal disease classification and lesion segmentation in OCT B-scan images using cascaded convolutional networks. Appl Opt 2020; 59:10312-10320. [PMID: 33361962 DOI: 10.1364/ao.409414] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Accepted: 10/24/2020] [Indexed: 06/12/2023]
Abstract
Disease classification and lesion segmentation of retinal optical coherence tomography images play important roles in ophthalmic computer-aided diagnosis. However, existing methods perform the two tasks separately, which is insufficient for clinical application and ignores the internal relation between disease and lesion features. In this paper, a framework of cascaded convolutional networks is proposed to jointly classify retinal diseases and segment lesions. First, an auxiliary binary classification network identifies normal and abnormal images. Then BDA-Net, a novel (to the best of the authors' knowledge) U-shaped multi-task network combining a bidirectional decoder with a self-attention mechanism, further analyzes the abnormal images. Experimental results show that the proposed method reaches an accuracy of 0.9913 in classification and improves Dice by around 3% over the baseline U-shaped model in segmentation.