1
Moraes G, Struyven R, Wagner SK, Liu T, Chong D, Abbas A, Chopra R, Patel PJ, Balaskas K, Keenan TD, Keane PA. Quantifying Changes on OCT in Eyes Receiving Treatment for Neovascular Age-Related Macular Degeneration. Ophthalmology Science 2024; 4:100570. PMID: 39224530; PMCID: PMC11367487; DOI: 10.1016/j.xops.2024.100570
Abstract
Purpose: To apply artificial intelligence (AI) to macular OCT scans to segment and quantify volumetric change in anatomical and pathological features during intravitreal treatment for neovascular age-related macular degeneration (AMD). Design: Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database. Participants: A total of 2115 eyes from 1801 patients starting anti-VEGF treatment between June 1, 2012, and June 30, 2017. Methods: The Moorfields Eye Hospital neovascular AMD database was queried for first- and second-treated eyes that received anti-VEGF treatment and had an OCT scan at both baseline and 12 months. Follow-up scans were input into the AI system, and the volumes of the OCT variables were studied at different time points and compared with baseline volume groups. Cross-sectional comparisons between time points were conducted using the Mann-Whitney U test. Main Outcome Measures: Volume outputs of the following variables: intraretinal fluid, subretinal fluid, pigment epithelial detachment (PED), subretinal hyperreflective material (SHRM), hyperreflective foci, neurosensory retina, and retinal pigment epithelium. Results: Mean volumes of the analyzed features decreased significantly from baseline to both 4 and 12 months, in both first- and second-treated eyes. Pathological features reflecting exudation, including pure fluid components (intraretinal fluid and subretinal fluid) and those combining fluid with fibrovascular tissue (PED and SHRM), displayed similar responses to treatment over 12 months. Mean PED and SHRM volumes showed less pronounced but still substantial decreases over the first 2 months, reaching a plateau after the loading phase with minimal change to 12 months. Neurosensory retina and retinal pigment epithelium volumes showed gradual reductions over time that were less substantial than those of the exudative features.
Conclusions: We report a quantitative analysis of change in segmented retinal features over time, enabled by an AI segmentation system. Cross-sectional analysis at multiple time points demonstrated significant associations between baseline OCT-derived segmented features and the volume of biomarkers at follow-up. Demonstrating how certain OCT biomarkers progress with treatment, and how pretreatment retinal morphology affects different structural volumes, may provide novel insights into disease mechanisms and aid the personalization of care. Data will be made public for future studies. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
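The central quantity in this study, a per-feature volume derived from an AI segmentation, reduces to counting foreground voxels and scaling by the single-voxel volume. A minimal sketch of that computation (function name and voxel dimensions are illustrative, not the study's actual scan spacing):

```python
# Hypothetical sketch: deriving a feature volume from a binary
# segmentation mask, as a system like the one described might do.

def feature_volume_mm3(mask, voxel_dims_mm):
    """Volume of a segmented feature: voxel count x single-voxel volume.

    mask          -- nested 3-D list of 0/1 voxel labels
    voxel_dims_mm -- (depth, height, width) of one voxel in millimetres
    """
    dz, dy, dx = voxel_dims_mm
    voxel_volume = dz * dy * dx
    count = sum(v for plane in mask for row in plane for v in row)
    return count * voxel_volume

# Toy 2x2x2 volume with 3 foreground voxels
mask = [[[1, 0], [1, 0]], [[0, 0], [1, 0]]]
print(feature_volume_mm3(mask, (0.1, 0.01, 0.01)))  # 3 voxels * 1e-05 mm3
```

The per-eye volumes produced this way at each visit are the inputs to the cross-sectional Mann-Whitney U comparisons the abstract describes.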
Affiliation(s)
- Gabriella Moraes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Robbert Struyven
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Timing Liu
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- David Chong
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Abdallah Abbas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Reena Chopra
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Praveen J. Patel
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Tiarnan D.L. Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
2
Xu J, Zhou F, Shen J, Yan Z, Wan C, Yao J. Automatic height measurement of central serous chorioretinopathy lesion using a deep learning and adaptive gradient threshold based cascading strategy. Comput Biol Med 2024; 177:108610. PMID: 38820776; DOI: 10.1016/j.compbiomed.2024.108610
Abstract
Accurately quantifying the height of a central serous chorioretinopathy (CSCR) lesion is of great significance for assisting ophthalmologists in diagnosing CSCR and evaluating treatment efficacy. Manual measurements in clinical practice, dominated by a single optical coherence tomography (OCT) B-scan image, suffer from weak reference, poor reproducibility, and dependence on examiner experience. In this context, this paper constructs two schemes. Scheme I draws on the idea of ensemble learning: multiple models are integrated at inference to locate the starting key point in the height direction of the lesion, which appropriately improves on the performance of a single model. Scheme II designs an adaptive gradient threshold (AGT) technique and builds a cascading strategy on it: the starting key point is first located coarsely by deep learning and then precisely adjusted by the AGT. This strategy not only locates the starting key point effectively but also significantly reduces the deep learning model's large appetite for training samples. The AGT also plays a crucial role in locating the terminal key point in the height direction of the lesion, further demonstrating its feasibility and effectiveness. Quantitative and qualitative key-point location experiments on 1152 samples, together with the final height measurements, consistently demonstrate the superiority of the constructed schemes, especially the cascading strategy, providing another potential tool for the comprehensive analysis of CSCR.
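The cascading idea above can be sketched in a few lines: an ensemble averages several models' key-point predictions, and a gradient threshold set adaptively from the local intensity profile refines the result. All names, the profile, and the 0.5 fraction below are invented for illustration; the paper's actual AGT formulation is not reproduced here.

```python
# Illustrative sketch of the two-stage (cascading) strategy described:
# coarse location by an ensemble, fine adjustment by an adaptive
# gradient threshold along the axial intensity profile.

def ensemble_keypoint(predictions):
    """Average the per-model predicted row indices of the starting key point."""
    return round(sum(predictions) / len(predictions))

def adaptive_gradient_keypoint(profile, start, frac=0.5):
    """From 'start', walk down an axial intensity profile and return the
    first index whose gradient magnitude exceeds frac * max gradient."""
    grads = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    threshold = frac * max(grads)  # adapts to this profile's contrast
    for i in range(start, len(grads)):
        if grads[i] >= threshold:
            return i
    return start

profile = [10, 11, 10, 12, 40, 80, 82, 81]  # sharp rise marks the boundary
start = ensemble_keypoint([2, 3, 2])        # coarse estimate -> 2
print(adaptive_gradient_keypoint(profile, start))  # 3: the 12 -> 40 jump
```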
Affiliation(s)
- Jianguo Xu
- College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics &Astronautics, 210016, Nanjing, PR China.
| | - Fen Zhou
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
| | - Jianxin Shen
- College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics &Astronautics, 210016, Nanjing, PR China
| | - Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
| | - Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, PR China
| | - Jin Yao
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China.
| |
3
Jin Y, Yong S, Ke S, Zhang C, Liu Y, Wang J, Lu T, Sun Y, Wang H, Zhang J. Deep learning assisted fluid volume calculation for assessing anti-vascular endothelial growth factor effect in diabetic macular edema. Heliyon 2024; 10:e29775. PMID: 38699726; PMCID: PMC11063453; DOI: 10.1016/j.heliyon.2024.e29775
Abstract
Objective: To develop an algorithm using deep learning methods to calculate the volume of intraretinal and subretinal fluid in optical coherence tomography (OCT) images for assessing condition changes in diabetic macular edema (DME) patients. Design: Cross-sectional study. Participants: Treatment-naive patients diagnosed with DME, recruited from April 2020 to November 2021. Methods: A deep learning network built for autonomous segmentation, an encoder-decoder network based on the U-Net architecture, was used to calculate the volumes of intraretinal fluid (IRF) and subretinal fluid (SRF). Alterations in retinal vessel density and thickness, and the correlation between best-corrected visual acuity (BCVA) and OCT parameters, were analyzed. Results: In total, 2955 OCT images of 14 eyes from DME patients with IRF and SRF who received anti-vascular endothelial growth factor (VEGF) agents were obtained. The area under the receiver operating characteristic (ROC) curve of the algorithm was 0.993 for IRF and 0.998 for SRF. The volumes of IRF and SRF decreased significantly from 1.93 ± 0.58 and 1.14 ± 0.25 mm3 at baseline to 0.26 ± 0.13 and 0.26 ± 0.18 mm3 post-injection, respectively (p = 0.0170 for IRF, p = 0.0004 for SRF). Spearman correlation showed that the reduction in IRF volume was negatively correlated with age (coefficient = -0.698, p = 0.006). Conclusion: We developed a deep-learning-assisted fluid volume calculation algorithm with high sensitivity and specificity for assessing the volumes of IRF and SRF in DME patients. Key words: deep learning; diabetic macular edema; optical coherence tomography.
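The AUC figures quoted above have a simple rank-based reading: the probability that a randomly chosen positive voxel scores higher than a randomly chosen negative one (ties count half). A minimal sketch on toy scores, not the paper's evaluation code:

```python
# Rank-based AUC: fraction of positive/negative score pairs the
# classifier orders correctly, with ties counted as half a win.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 1, 0, 0]
print(auc(scores, labels))  # 1.0: every positive outranks every negative
```

An AUC near 0.993-0.998, as reported, means almost every fluid voxel is ranked above almost every background voxel.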
Affiliation(s)
- Yixiao Jin
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Shuanghao Yong
- School of Electrical Engineering and Automation, Anhui University, Hefei, China
- Shi Ke
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Chaoyang Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yan Liu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Jingyi Wang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Ting Lu
- Department of Ophthalmology, Jiading Branch of Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yong Sun
- Department of Ophthalmology, Jiading Branch of Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haiyan Wang
- Department of Ocular Fundus, Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), Xi'an, Shaanxi, China
- Jingfa Zhang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai Clinical Research Center for Eye Diseases, Shanghai Key Clinical Specialty, Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
4
Kulyabin M, Zhdanov A, Nikiforova A, Stepichev A, Kuznetsova A, Ronkin M, Borisov V, Bogachev A, Korotkich S, Constable PA, Maier A. OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods. Sci Data 2024; 11:365. PMID: 38605088; PMCID: PMC11009408; DOI: 10.1038/s41597-024-03182-7
Abstract
Optical coherence tomography (OCT) is a non-invasive imaging technique with extensive clinical applications in ophthalmology. OCT enables the visualization of the retinal layers, playing a vital role in the early detection and monitoring of retinal diseases. OCT uses the principle of light wave interference to create detailed images of the retinal microstructures, making it a valuable tool for diagnosing ocular conditions. This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology. The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The images were acquired with an Optovue Avanti RTVue XR using raster scanning protocols with dynamic scan length and image resolution. Each retinal B-scan was acquired by centering on the fovea and was interpreted and cataloged by an experienced retinal specialist. In this work, we applied deep learning classification techniques to this new open-access dataset.
Affiliation(s)
- Mikhail Kulyabin
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
- Aleksei Zhdanov
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Anastasia Nikiforova
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Andrey Stepichev
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Anna Kuznetsova
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Mikhail Ronkin
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Vasilii Borisov
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University Named after the First President of Russia B. N. Yeltsin, Mira, 32, Yekaterinburg, 620078, Russia
- Alexander Bogachev
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Sergey Korotkich
- Ophthalmosurgery Clinic "Professorskaya Plus", Vostochnaya, 30, Yekaterinburg, 620075, Russia
- Ural State Medical University, Repina, 3, Yekaterinburg, 620028, Russia
- Paul A Constable
- Flinders University, College of Nursing and Health Sciences, Caring Futures Institute, Adelaide, SA 5042, Australia
- Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Martensstr. 3, 91058, Erlangen, Germany
5
Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. DOI: 10.1016/j.bbe.2022.12.005
6
Potapenko I, Thiesson B, Kristensen M, Hajari JN, Ilginis T, Fuchs J, Hamann S, la Cour M. Automated artificial intelligence-based system for clinical follow-up of patients with age-related macular degeneration. Acta Ophthalmol 2022; 100:927-936. PMID: 35322564; PMCID: PMC9790353; DOI: 10.1111/aos.15133
Abstract
PURPOSE In this study, we investigate the potential of a novel artificial intelligence-based system for autonomous follow-up of patients treated for neovascular age-related macular degeneration (AMD). METHODS A temporal deep learning model was trained on a data set of 84 489 optical coherence tomography scans from AMD patients to recognize disease activity, and its performance was compared with a published non-temporal model trained on the same data (Acta Ophthalmol, 2021). An autonomous follow-up system was created by augmenting the AI model with deterministic logic to suggest treatment according to the observe-and-plan regimen. To validate the AI-based system, a data set comprising clinical decisions and imaging data from 200 follow-up consultations was collected prospectively. In each case, both the autonomous AI decision and original clinical decision were compared with an expert panel consensus. RESULTS The temporal AI model proved superior at detecting disease activity compared with the model without temporal input (area under the curve 0.900 (95% CI 0.894-0.906) and 0.857 (95% CI 0.846-0.867) respectively). The AI-based follow-up system could make an autonomous decision in 73% of the cases, 91.8% of which were in agreement with expert consensus. This was on par with the 87.7% agreement rate between decisions made in the clinic and expert consensus (p = 0.33). CONCLUSIONS The proposed autonomous follow-up system was shown to be safe and compliant with expert consensus on par with clinical practice. The system could in the future ease the pressure on public ophthalmology services from an increasing number of AMD patients.
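The "deterministic logic" layered on the AI model can be pictured as a small rule table in the spirit of observe-and-plan dosing: shorten the interval on detected activity, extend it when quiet, and defer to a clinician otherwise. The thresholds, intervals, and function name below are invented for illustration and are not the published system's parameters.

```python
# Hedged sketch of observe-and-plan style follow-up logic driven by a
# model's disease-activity probability. All numbers are hypothetical.

def plan_interval(activity_prob, current_weeks,
                  active_thr=0.6, quiet_thr=0.2,
                  min_weeks=4, max_weeks=12, step=2):
    """Return (next_interval_weeks, deferred_to_clinician)."""
    if activity_prob >= active_thr:
        # Disease active: treat sooner, bounded below.
        return max(min_weeks, current_weeks - step), False
    if activity_prob <= quiet_thr:
        # Quiet: extend the interval, bounded above.
        return min(max_weeks, current_weeks + step), False
    # Uncertain band: no autonomous decision, as in the ~27% of cases
    # the abstract says the system could not decide.
    return current_weeks, True

print(plan_interval(0.9, 8))  # (6, False)  shorten
print(plan_interval(0.1, 8))  # (10, False) extend
print(plan_interval(0.4, 8))  # (8, True)   defer
```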
Affiliation(s)
- Ivan Potapenko
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Bo Thiesson
- Enversion A/S, Aarhus, Denmark; Department of Engineering, Aarhus University, Aarhus, Denmark
- Tomas Ilginis
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Josefine Fuchs
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
- Steffen Hamann
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Morten la Cour
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
8
Elizar E, Zulkifley MA, Muharar R, Zaman MHM, Mustaza SM. A Review on Multiscale-Deep-Learning Applications. Sensors (Basel) 2022; 22:7384. PMID: 36236483; PMCID: PMC9573412; DOI: 10.3390/s22197384
Abstract
In general, most of the existing convolutional neural network (CNN)-based deep-learning models suffer from spatial-information loss and inadequate feature-representation issues. This is due to their inability to capture multiscale-context information and the exclusion of semantic information throughout the pooling operations. In the early layers of a CNN, the network encodes simple semantic representations, such as edges and corners, while, in the latter part of the CNN, the network encodes more complex semantic features, such as complex geometric shapes. Theoretically, it is better for a CNN to extract features from different levels of semantic representation because tasks such as classification and segmentation work better when both simple and complex feature maps are utilized. Hence, it is also crucial to embed multiscale capability throughout the network so that the various scales of the features can be optimally captured to represent the intended task. Multiscale representation enables the network to fuse low-level and high-level features from a restricted receptive field to enhance the deep-model performance. The main novelty of this review is the comprehensive novel taxonomy of multiscale-deep-learning methods, which includes details of several architectures and their strengths that have been implemented in the existing works. Predominantly, multiscale approaches in deep-learning networks can be classed into two categories: multiscale feature learning and multiscale feature fusion. Multiscale feature learning refers to the method of deriving feature maps by examining kernels over several sizes to collect a larger range of relevant features and predict the input images' spatial mapping. Multiscale feature fusion uses features with different resolutions to find patterns over short and long distances, without a deep network. Additionally, several examples of the techniques are also discussed according to their applications in satellite imagery, medical imaging, agriculture, and industrial and manufacturing systems.
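The definition of multiscale feature learning above (the same input examined with kernels of several sizes, the resulting maps then fused) can be illustrated with a toy 1-D example; the helper names and kernels are invented, and real networks would of course learn the kernels and operate on 2-D images.

```python
# Toy 1-D illustration of multiscale feature learning: one feature map
# per kernel size; "fusion" here is simply channel concatenation.

def conv1d_same(x, kernel):
    """'Same'-padded correlation of a 1-D signal with a kernel."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * (k - 1 - pad)
    return [sum(kernel[j] * xp[i + j] for j in range(k))
            for i in range(len(x))]

def multiscale_features(x, kernels):
    """Examine x at every kernel size and return the stacked feature maps."""
    return [conv1d_same(x, k) for k in kernels]

x = [1.0, 2.0, 3.0, 4.0]
small = [1.0]                # scale 1: preserves local detail
large = [1/3, 1/3, 1/3]      # scale 3: smoothing, wider context
maps = multiscale_features(x, [small, large])
print(maps[0])  # local-detail map equals the input
print(maps[1])  # smoothed map, approx [1.0, 2.0, 3.0, 2.33]
```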
Affiliation(s)
- Elizar Elizar
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Department of Electrical and Computer Engineering, Faculty of Engineering, Universitas Syiah Kuala, Kopelma Darussalam 23111, Indonesia
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Rusdha Muharar
- Department of Electrical and Computer Engineering, Faculty of Engineering, Universitas Syiah Kuala, Kopelma Darussalam 23111, Indonesia
- Mohd Hairi Mohd Zaman
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Seri Mastura Mustaza
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
9
Tang W, Ye Y, Chen X, Shi F, Xiang D, Chen Z, Zhu W. Multi-class retinal fluid joint segmentation based on cascaded convolutional neural networks. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac7378
Abstract
Objective. Retinal fluid mainly includes intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), whose accurate segmentation in optical coherence tomography (OCT) images is of great importance for the diagnosis and treatment of the related fundus diseases. Approach. In this paper, a novel two-stage multi-class retinal fluid joint segmentation framework based on cascaded convolutional neural networks is proposed. In the pre-segmentation stage, a U-shape encoder–decoder network is adopted to acquire the retinal mask and generate a retinal relative distance map, which provides spatial prior information for the subsequent fluid segmentation. In the fluid segmentation stage, an improved context attention and fusion network (ICAF-Net), built on a context shrinkage encode module and a multi-scale, multi-category semantic supervision module, is proposed to jointly segment IRF, SRF, and PED. Main results. The proposed segmentation framework was evaluated on the dataset of the RETOUCH challenge. The average Dice similarity coefficient, intersection over union, and accuracy reach 76.39%, 64.03%, and 99.32%, respectively. Significance. The proposed framework achieves good performance in the joint segmentation of multi-class fluid in retinal OCT images and outperforms some state-of-the-art segmentation networks.
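The two overlap metrics quoted above, the Dice similarity coefficient and intersection over union, are standard and easy to state exactly; a minimal sketch on flat binary masks (toy data, not the RETOUCH evaluation code):

```python
# Dice = 2|A∩B| / (|A|+|B|);  IoU = |A∩B| / |A∪B|, on 0/1 masks.

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(pred) + sum(truth) - inter
    return inter / union

pred  = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
print(dice(pred, truth))  # 0.5
print(iou(pred, truth))   # 0.333...
```

Note that Dice is always at least as large as IoU for the same masks, which is why the 76.39% Dice and 64.03% IoU above are consistent with each other.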
10
Xing G, Chen L, Wang H, Zhang J, Sun D, Xu F, Lei J, Xu X. Multi-Scale Pathological Fluid Segmentation in OCT With a Novel Curvature Loss in Convolutional Neural Network. IEEE Transactions on Medical Imaging 2022; 41:1547-1559. PMID: 35015634; DOI: 10.1109/tmi.2022.3142048
Abstract
The segmentation of pathological fluid lesions in optical coherence tomography (OCT), including intraretinal fluid, subretinal fluid, and pigment epithelial detachment, is of great importance for the diagnosis and treatment of various eye diseases such as neovascular age-related macular degeneration and diabetic macular edema. Although significant progress has been achieved with the rapid development of fully convolutional neural networks (FCN) in recent years, some important issues remain unsolved. First, pathological fluid lesions in OCT show large variations in location, size, and shape, imposing challenges on the design of FCN architecture. Second, fluid lesions should be continuous regions without holes inside. But the current architectures lack the capability to preserve the shape prior information. In this study, we introduce an FCN architecture for the simultaneous segmentation of three types of pathological fluid lesions in OCT. First, attention gate and spatial pyramid pooling modules are employed to improve the ability of the network to extract multi-scale objects. Then, we introduce a novel curvature regularization term in the loss function to incorporate shape prior information. The proposed method was extensively evaluated on public and clinical datasets with significantly improved performance compared with the state-of-the-art methods.
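The shape-prior idea above (a curvature term in the loss that discourages jagged, hole-prone lesion outlines) can be sketched with a discrete second-difference penalty on a boundary curve. This is a loose illustration of the general idea, not the paper's exact curvature regularization term:

```python
# Curvature-style regularizer sketch: squared discrete second
# derivatives along a boundary curve. Smooth outlines score low,
# oscillating (jagged) outlines score high.

def curvature_penalty(boundary):
    """Sum of squared second differences along a boundary height curve."""
    return sum((boundary[i - 1] - 2 * boundary[i] + boundary[i + 1]) ** 2
               for i in range(1, len(boundary) - 1))

smooth = [2, 3, 4, 5, 6]   # straight line: zero curvature penalty
jagged = [2, 6, 2, 6, 2]   # oscillating outline: heavily penalized
print(curvature_penalty(smooth))  # 0
print(curvature_penalty(jagged))  # 192
```

Added to a segmentation loss with a small weight, such a term pushes the network toward the continuous, hole-free fluid regions the abstract describes.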
11
Recent Advanced Deep Learning Architectures for Retinal Fluid Segmentation on Optical Coherence Tomography Images. Sensors 2022; 22:s22083055. PMID: 35459040; PMCID: PMC9029682; DOI: 10.3390/s22083055
Abstract
With its non-invasive and high-resolution properties, optical coherence tomography (OCT) has been widely used as a retinal imaging modality for the effective diagnosis of ophthalmic diseases. Retinal fluid is often segmented by medical experts as a pivotal biomarker to assist in the clinical diagnosis of age-related macular diseases, diabetic macular edema, and retinal vein occlusion. In recent years, advanced machine learning methods, such as deep learning paradigms, have attracted increasing attention from academia for retinal fluid segmentation applications. Automatic retinal fluid segmentation based on deep learning can improve the accuracy and efficiency of semantic segmentation for macular change analysis, with potential clinical implications for ophthalmic pathology detection. This article summarizes several deep learning paradigms reported in the up-to-date literature for retinal fluid segmentation in OCT images. The architectures include the backbone of the convolutional neural network (CNN), the fully convolutional network (FCN), the U-shape network (U-Net), and other hybrid computational methods. The article also surveys the prevailing OCT image datasets used in recent retinal segmentation investigations. Future perspectives and some potential retinal segmentation directions are discussed in the conclusion.
12
OCT Retinal and Choroidal Layer Instance Segmentation Using Mask R-CNN. Sensors 2022; 22:s22052016. PMID: 35271165; PMCID: PMC8914986; DOI: 10.3390/s22052016
Abstract
Optical coherence tomography (OCT) of the posterior segment of the eye provides high-resolution cross-sectional images that allow visualization of individual layers of the posterior eye tissue (the retina and choroid), facilitating the diagnosis and monitoring of ocular diseases and abnormalities. The manual analysis of retinal OCT images is a time-consuming task; therefore, the development of automatic image analysis methods is important for both research and clinical applications. In recent years, deep learning methods have emerged as an alternative method to perform this segmentation task. A large number of the proposed segmentation methods in the literature focus on the use of encoder–decoder architectures, such as U-Net, while other architectural modalities have not received as much attention. In this study, the application of an instance segmentation method based on region proposal architecture, called the Mask R-CNN, is explored in depth in the context of retinal OCT image segmentation. The importance of adequate hyper-parameter selection is examined, and the performance is compared with commonly used techniques. The Mask R-CNN provides a suitable method for the segmentation of OCT images with low segmentation boundary errors and high Dice coefficients, with segmentation performance comparable with the commonly used U-Net method. The Mask R-CNN has the advantage of a simpler extraction of the boundary positions, especially avoiding the need for a time-consuming graph search method to extract boundaries, which reduces the inference time by 2.5 times compared to U-Net, while segmenting seven retinal layers.
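The "simpler extraction of the boundary positions" that the instance-mask approach permits, in contrast to a graph search over the whole B-scan, can be sketched as a per-column scan of the predicted layer mask; the helper name and toy mask below are illustrative:

```python
# Boundary extraction from a binary layer mask: for each image column,
# take the row index of the first foreground pixel. No graph search
# over the B-scan is required.

def top_boundary(mask):
    """Per-column row of the first 1 in a binary mask (None if the
    column contains no foreground)."""
    rows, cols = len(mask), len(mask[0])
    return [next((r for r in range(rows) if mask[r][c]), None)
            for c in range(cols)]

mask = [
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
]
print(top_boundary(mask))  # [1, 1, 0, None]
```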
|
13
|
Wang J, He Y, Fang W, Chen Y, Li W, Shi G. Unsupervised domain adaptation model for lesion detection in retinal OCT images. Phys Med Biol 2021; 66. [PMID: 34619675 DOI: 10.1088/1361-6560/ac2dd1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 10/07/2021] [Indexed: 11/12/2022]
Abstract
Background and objective. Optical coherence tomography (OCT) is one of the most widely used retinal imaging modalities in the clinic, as it can provide high-resolution anatomical images. The huge number of OCT images has significantly advanced the development of deep learning methods for automatic lesion detection to ease doctors' workload. However, deep neural network models have repeatedly been shown to have difficulty handling the domain discrepancies that widely exist among medical images captured from different devices. Many works have addressed the domain shift issue in deep learning tasks such as disease classification and lesion segmentation, but few have focused on lesion detection, especially for OCT images. Methods. In this work, we proposed a Faster R-CNN-based unsupervised domain adaptation model to address the lesion detection task in cross-device retinal OCT images. The domain shift is minimized by reducing the image-level shift and instance-level shift at the same time. We combined a domain classifier with a Wasserstein distance critic to align the shifts at each level. Results. The model was tested on two sets of OCT image data captured from different devices, obtained an average accuracy improvement of more than 8% over the method without domain adaptation, and outperformed other comparable domain adaptation methods. Conclusion. The results demonstrate that the proposed model is more effective in reducing the domain shift than advanced methods.
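The Wasserstein distance behind the paper's critic has a simple closed form in one dimension for equal-size empirical samples: sort both samples and average the absolute differences. A toy sketch of that quantity (the paper itself learns the critic with a neural network over deep features, which this deliberately does not reproduce; names and values are illustrative):

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein (earth mover's) distance between two
    equal-size samples: mean absolute difference of the sorted values."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

# Hypothetical scalar feature activations from two OCT devices (domains).
src = [0.1, 0.4, 0.5, 0.9]
tgt = [0.2, 0.5, 0.6, 1.0]
print(round(wasserstein_1d(src, tgt), 3))  # every sorted pair differs by 0.1 -> 0.1
```

Minimizing such a distance between source- and target-domain feature distributions is the intuition behind Wasserstein-based domain alignment.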
Affiliation(s)
- Jing Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, People's Republic of China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou 215163, People's Republic of China
- Yi He
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, People's Republic of China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou 215163, People's Republic of China
- Wangyi Fang
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, People's Republic of China
- Key Laboratory of Myopia of State Health Ministry, and Key Laboratory of Visual Impairment and Restoration of Shanghai, People's Republic of China
- Yiwei Chen
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, People's Republic of China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou 215163, People's Republic of China
- Wanyue Li
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, People's Republic of China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou 215163, People's Republic of China
- Guohua Shi
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, People's Republic of China
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, People's Republic of China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou 215163, People's Republic of China
|
14
|
Zheng B, Wu MN, Zhu SJ, Zhou HX, Hao XL, Fei FQ, Jia Y, Wu J, Yang WH, Pan XP. Attitudes of medical workers in China toward artificial intelligence in ophthalmology: a comparative survey. BMC Health Serv Res 2021; 21:1067. [PMID: 34627239 PMCID: PMC8501607 DOI: 10.1186/s12913-021-07044-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 09/17/2021] [Indexed: 12/20/2022] Open
Abstract
Background In the development of artificial intelligence (AI) in ophthalmology, recognition issues around ophthalmic AI are prominent, but there is a lack of research into people's familiarity with and attitudes toward it. This survey aims to assess medical workers' and other professional technicians' familiarity with, attitudes toward, and concerns about AI in ophthalmology. Methods This was a cross-sectional study. An electronic questionnaire was designed with the app Questionnaire Star and sent to respondents through WeChat, China's counterpart to Facebook or WhatsApp. Participation was voluntary and anonymous. The questionnaire consisted of four parts: the respondents' background, their basic understanding of AI, their attitudes toward AI, and their concerns about AI. A total of 562 questionnaires were returned, all of them valid; the results were tabulated in Excel 2003. Results In all, 291 medical workers and 271 other professional technicians completed the questionnaire. About one third of the respondents understood AI and ophthalmic AI; the proportions who understood ophthalmic AI were about 42.6% among medical workers and 15.6% among other professional technicians. About 66.0% of the respondents thought that AI in ophthalmology would partly replace doctors, and about 59.07% reported a relatively high acceptance level of ophthalmic AI. Meanwhile, among those with experience of AI applications in ophthalmology (30.6%), more than 70% held a fully accepting attitude toward AI in ophthalmology. The respondents expressed medical ethics concerns about AI in ophthalmology, and among those who understood AI in ophthalmology, almost all said that study of medical ethics issues in the ophthalmic AI field needed to be increased.
Conclusions The survey results revealed that medical workers had a better understanding of AI in ophthalmology than other professional technicians, making it necessary to popularize ophthalmic AI education among the latter. Most respondents had no experience with ophthalmic AI but generally reported a relatively high acceptance of AI in ophthalmology, and research into the associated medical ethics issues needs to be strengthened.
Affiliation(s)
- Bo Zheng
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
- Mao-Nian Wu
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China
- College of Computer and Information, Hehai University, Nanjing 210013, Jiangsu, China
- Shao-Jun Zhu
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
- Hong-Xia Zhou
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
- College of Computer and Information, Hehai University, Nanjing 210013, Jiangsu, China
- Xiu-Lan Hao
- School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang, China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang, China
- Fang-Qin Fei
- Department of Endocrinology, First Affiliated Hospital of Huzhou University, Huzhou 313000, Zhejiang, China
- Yun Jia
- School of Medicine, Huzhou University, Huzhou 313000, Zhejiang, China
- Jian Wu
- Zhejiang University Real Doctor AI Research Center, Hangzhou 310000, Zhejiang, P.R. China
- Wei-Hua Yang
- Affiliated Eye Hospital of Nanjing Medical University, No. 138 Hanzhong Road, Gulou District, Nanjing 210029, Jiangsu, China
- Xue-Ping Pan
- First People's Hospital of Huzhou, Huzhou 313000, Zhejiang, China
|
15
|
DGFAU-Net: Global feature attention upsampling network for medical image segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05908-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
16
|
Yang X, Zhang Y, Lo B, Wu D, Liao H, Zhang YT. DBAN: Adversarial Network With Multi-Scale Features for Cardiac MRI Segmentation. IEEE J Biomed Health Inform 2021; 25:2018-2028. [PMID: 33006934 DOI: 10.1109/jbhi.2020.3028463] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
With the development of medical artificial intelligence, automatic magnetic resonance image (MRI) segmentation methods are highly desirable. Inspired by the power of deep neural networks, a novel deep adversarial network, the dilated block adversarial network (DBAN), is proposed to perform left ventricle, right ventricle, and myocardium segmentation in short-axis cardiac MRI. DBAN contains a segmentor along with a discriminator. In the segmentor, the dilated block (DB) is proposed to capture and aggregate multi-scale features. The segmentor produces segmentation probability maps, while the discriminator differentiates between the segmentation probability map and the ground truth at the pixel level. In addition, confidence probability maps generated by the discriminator can guide the segmentor to refine its segmentation probability maps. Extensive experiments demonstrate that DBAN achieves state-of-the-art performance on the ACDC dataset. Quantitative analyses indicate that cardiac function indices derived from DBAN are similar to those from clinical experts. Therefore, DBAN is a potential candidate for short-axis cardiac MRI segmentation in clinical applications.
|
17
|
Pawan SJ, Sankar R, Jain A, Jain M, Darshan DV, Anoop BN, Kothari AR, Venkatesan M, Rajan J. Capsule Network-based architectures for the segmentation of sub-retinal serous fluid in optical coherence tomography images of central serous chorioretinopathy. Med Biol Eng Comput 2021; 59:1245-1259. [PMID: 33988817 DOI: 10.1007/s11517-021-02364-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 04/18/2021] [Indexed: 12/28/2022]
Abstract
Central serous chorioretinopathy (CSCR) is a chorioretinal disorder of the eye characterized by serous detachment of the neurosensory retina at the posterior pole of the eye. CSCR results from the accumulation of subretinal fluid (SRF) due to idiopathic defects at the level of the retinal pigment epithelium (RPE) that allow serous fluid from the choriocapillaris to diffuse into the subretinal space between the RPE and neurosensory retinal layers. This condition is presently investigated by clinicians using invasive angiography or non-invasive optical coherence tomography (OCT) imaging. OCT images provide a representation of the fluid underlying the retina, and in the absence of automated segmentation tools, currently only a qualitative assessment is used to follow the progression of the disease. Automated segmentation of the SRF can prove to be extremely useful for the assessment of progression and for the timely management of CSCR. In this paper, we adopt an existing architecture called SegCaps, based on the recently introduced Capsule Network concept, for the segmentation of SRF from CSCR OCT images. Furthermore, we propose an enhancement to SegCaps, termed DRIP-Caps, that utilizes dilation, residual connections, inception blocks, and capsule pooling to address the defined problem. The proposed model outperforms the benchmark UNet architecture while reducing the number of trainable parameters by 54.21%. Moreover, it reduces the computational complexity of SegCaps, cutting its trainable parameters by 37.85% with competitive performance. The experiments demonstrate the generalizability of the proposed model, as evidenced by its remarkable performance even with a limited number of training samples.
Affiliation(s)
- S J Pawan
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- Rahul Sankar
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- Anubhav Jain
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- Mahir Jain
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- D V Darshan
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- B N Anoop
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- M Venkatesan
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
- Jeny Rajan
- Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
|
18
|
Xing R, Niu S, Gao X, Liu T, Fan W, Chen Y. Weakly supervised serous retinal detachment segmentation in SD-OCT images by two-stage learning. BIOMEDICAL OPTICS EXPRESS 2021; 12:2312-2327. [PMID: 33996231 PMCID: PMC8086451 DOI: 10.1364/boe.416167] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 03/12/2021] [Accepted: 03/16/2021] [Indexed: 06/12/2023]
Abstract
Automated lesion segmentation is one of the important tasks for the quantitative assessment of retinal diseases in SD-OCT images. Recently, deep convolutional neural networks (CNNs) have shown promising advances in automated image segmentation, but they typically depend on large-scale datasets with high-quality pixel-wise annotations. Unfortunately, obtaining accurate annotations is expensive in both human effort and finance. In this paper, we propose a weakly supervised two-stage learning architecture to detect and further segment central serous chorioretinopathy (CSC) retinal detachment with only image-level annotations. Specifically, in the first stage, a Located-CNN is designed to detect the location of lesion regions in whole SD-OCT retinal images and highlight the distinguishing regions. To generate usable pseudo pixel-level labels, the conventional level set method is employed to refine the distinguishing regions. In the second stage, we customize the active-contour loss function in deep networks to achieve effective segmentation of the lesion area. A challenging dataset is used to evaluate the proposed method, and the results demonstrate that it consistently outperforms several current models trained with different levels of supervision and is competitive even with those relying on stronger supervision. To the best of our knowledge, we are the first to achieve CSC segmentation in SD-OCT images using weakly supervised learning, which can greatly reduce labeling effort.
Affiliation(s)
- Ruiwen Xing
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Sijie Niu
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Xizhan Gao
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
- Tingting Liu
- Shandong Eye Hospital, State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250014, China
- Wen Fan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Yuehui Chen
- School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Shandong Provincial Key Laboratory of Network-based Intelligent Computing, Jinan 250022, China
|
19
|
Optical coherence tomography-based deep-learning model for detecting central serous chorioretinopathy. Sci Rep 2020; 10:18852. [PMID: 33139813 PMCID: PMC7608618 DOI: 10.1038/s41598-020-75816-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 10/07/2020] [Indexed: 01/13/2023] Open
Abstract
Central serous chorioretinopathy (CSC) is a common condition characterized by serous detachment of the neurosensory retina at the posterior pole. We built a deep learning model to diagnose CSC and to distinguish chronic from acute CSC using spectral-domain optical coherence tomography (SD-OCT) images. SD-OCT images of patients with CSC and of a control group were analyzed with a convolutional neural network. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were used to evaluate the model. For CSC diagnosis, our model showed an accuracy, sensitivity, and specificity of 93.8%, 90.0%, and 99.1%, respectively; AUROC was 98.9% (95% CI, 0.983–0.995); and its diagnostic performance was comparable with VGG-16, ResNet-50, and the diagnoses of five different ophthalmologists. For distinguishing chronic from acute cases, the accuracy, sensitivity, and specificity were 97.6%, 100.0%, and 92.6%, respectively; AUROC was 99.4% (95% CI, 0.985–1.000); performance was better than VGG-16 and ResNet-50 and as good as that of the ophthalmologists. Our model performed well when diagnosing CSC and yielded highly accurate results when distinguishing between acute and chronic cases. Thus, automated deep learning algorithms could play a role independent of human experts in the diagnosis of CSC.
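The accuracy, sensitivity, and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch with hypothetical labels (1 = CSC, 0 = control; not the study's data):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from binary ground-truth and predicted labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_pred & y_true)    # sick, called sick
    tn = np.sum(~y_pred & ~y_true)  # healthy, called healthy
    fp = np.sum(y_pred & ~y_true)   # healthy, called sick
    fn = np.sum(~y_pred & y_true)   # sick, called healthy
    return {
        "accuracy": float((tp + tn) / len(y_true)),
        "sensitivity": float(tp / (tp + fn)),
        "specificity": float(tn / (tn + fp)),
    }

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]  # one missed case, one false alarm
m = diagnostic_metrics(y_true, y_pred)
print(m["accuracy"], m["sensitivity"], m["specificity"])  # 0.75 0.75 0.75
```

AUROC additionally requires the model's continuous scores rather than hard labels, so it is omitted here.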
|
20
|
Moraes G, Fu DJ, Wilson M, Khalid H, Wagner SK, Korot E, Ferraz D, Faes L, Kelly CJ, Spitz T, Patel PJ, Balaskas K, Keenan TDL, Keane PA, Chopra R. Quantitative Analysis of OCT for Neovascular Age-Related Macular Degeneration Using Deep Learning. Ophthalmology 2020; 128:693-705. [PMID: 32980396 PMCID: PMC8528155 DOI: 10.1016/j.ophtha.2020.09.025] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 08/25/2020] [Accepted: 09/21/2020] [Indexed: 12/12/2022] Open
Abstract
PURPOSE To apply a deep learning algorithm for automated, objective, and comprehensive quantification of OCT scans to a large real-world dataset of eyes with neovascular age-related macular degeneration (AMD) and make the raw segmentation output data openly available for further research. DESIGN Retrospective analysis of OCT images from the Moorfields Eye Hospital AMD Database. PARTICIPANTS A total of 2473 first-treated eyes and 493 second-treated eyes that commenced therapy for neovascular AMD between June 2012 and June 2017. METHODS A deep learning algorithm was used to segment all baseline OCT scans. Volumes were calculated for segmented features such as neurosensory retina (NSR), drusen, intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), retinal pigment epithelium (RPE), hyperreflective foci (HRF), fibrovascular pigment epithelium detachment (fvPED), and serous PED (sPED). Analyses included comparisons between first- and second-treated eyes by visual acuity (VA) and race/ethnicity and correlations between volumes. MAIN OUTCOME MEASURES Volumes of segmented features (mm3) and central subfield thickness (CST) (μm). RESULTS In first-treated eyes, the majority had both IRF and SRF (54.7%). First-treated eyes had greater volumes for all segmented tissues, with the exception of drusen, which was greater in second-treated eyes. In first-treated eyes, older age was associated with lower volumes for RPE, SRF, NSR, and sPED; in second-treated eyes, older age was associated with lower volumes of NSR, RPE, sPED, fvPED, and SRF. Eyes from Black individuals had higher SRF, RPE, and serous PED volumes compared with other ethnic groups. Greater volumes of the majority of features were associated with worse VA. CONCLUSIONS We report the results of large-scale automated quantification of a novel range of baseline features in neovascular AMD. 
Major differences between first- and second-treated eyes, with increasing age, and between ethnicities are highlighted. In the coming years, enhanced, automated OCT segmentation may assist personalization of real-world care and the detection of novel structure-function correlations. These data will be made publicly available for replication and future investigation by the AMD research community.
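The feature volumes (mm3) reported above reduce, at their core, to counting segmented voxels and multiplying by the volume of a single voxel. A minimal sketch (the voxel spacings below are hypothetical placeholders, not those of the study's scanner):

```python
import numpy as np

def feature_volume_mm3(mask: np.ndarray, voxel_mm=(0.003, 0.011, 0.047)) -> float:
    """Volume of a segmented OCT feature: voxel count x single-voxel volume.
    voxel_mm = (axial, lateral, inter-B-scan) spacing in mm; illustrative values."""
    return float(mask.sum()) * float(np.prod(voxel_mm))

# Toy binary segmentation: a 10 x 10 x 10 block of "subretinal fluid" voxels.
mask = np.ones((10, 10, 10), dtype=np.uint8)
print(round(feature_volume_mm3(mask), 6))  # 1000 voxels * 1.551e-6 mm3 -> 0.001551
```

Real pipelines read the voxel spacing from the scan metadata, since it differs between devices and scan protocols.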
Affiliation(s)
- Gabriella Moraes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Dun Jack Fu
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Hagar Khalid
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Siegfried K Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Edward Korot
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Daniel Ferraz
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Department of Ophthalmology, Federal University São Paulo, São Paulo, Brazil
- Livia Faes
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Praveen J Patel
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Konstantinos Balaskas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Reena Chopra
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Google Health, London, United Kingdom
|
21
|
Tan B, Sim R, Chua J, Wong DWK, Yao X, Garhöfer G, Schmidl D, Werkmeister RM, Schmetterer L. Approaches to quantify optical coherence tomography angiography metrics. ANNALS OF TRANSLATIONAL MEDICINE 2020; 8:1205. [PMID: 33241054 PMCID: PMC7576021 DOI: 10.21037/atm-20-3246] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Accepted: 06/16/2020] [Indexed: 12/13/2022]
Abstract
Optical coherence tomography (OCT) has revolutionized the field of ophthalmology over the last three decades. As an extension of OCT, OCT angiography (OCTA) uses a fast OCT system to detect motion contrast in ocular tissue and provides a three-dimensional representation of the ocular vasculature in a non-invasive, dye-free manner. The first OCT machine equipped with OCTA function was approved by the U.S. Food and Drug Administration in 2016, and OCTA is now widely applied in clinics. To date, numerous methods have been developed to aid OCTA interpretation and quantification. In this review, we follow the workflow of OCTA-based interpretation: from the generation of OCTA images using signal decorrelation, which we divide into intensity-based, phase-based, and phasor-based methods; through methods for addressing the image artifacts commonly observed in clinical settings; to the algorithms for image enhancement, binarization, and OCTA metrics extraction. We believe a better grasp of these technical aspects of OCTA will enhance understanding of the technology and its potential application in disease diagnosis and management. Moreover, future studies may explore ocular OCTA as a window linking the ocular vasculature to the function of other organs such as the kidney and brain.
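Once an en-face OCTA image has been binarized, the most common extracted metric, vessel density, is simply the perfused-pixel fraction. A minimal sketch on a hypothetical binarized patch (names and data are illustrative; real pipelines binarize with thresholding methods such as Otsu's, which this skips):

```python
import numpy as np

def vessel_density(binary_angiogram: np.ndarray) -> float:
    """Fraction of en-face pixels classified as perfused vessel (value 1)
    after binarization of the OCTA image."""
    b = binary_angiogram.astype(bool)
    return float(b.sum() / b.size)

# Hypothetical 4x4 binarized en-face OCTA patch (1 = vessel, 0 = background).
patch = np.array([[1, 1, 0, 0],
                  [0, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]])
print(vessel_density(patch))  # 7 vessel pixels / 16 total -> 0.4375
```

Other OCTA metrics discussed in such reviews (e.g., vessel length density) additionally require skeletonizing the binary map before counting.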
Affiliation(s)
- Bingyao Tan
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Ralene Sim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon W. K. Wong
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Xinwen Yao
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Doreen Schmidl
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- René M. Werkmeister
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore
- Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
|
22
|
Diving Deep into Deep Learning: An Update on Artificial Intelligence in Retina. CURRENT OPHTHALMOLOGY REPORTS 2020; 8:121-128. [PMID: 33224635 DOI: 10.1007/s40135-020-00240-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Purpose of Review In the present article, we will provide an understanding and review of artificial intelligence in the subspecialty of retina and its potential applications within the specialty. Recent Findings Given the significant use of diagnostic imaging within retina, this subspecialty is a fitting area for the incorporation of artificial intelligence. Researchers have aimed at creating models to assist in the diagnosis and management of retinal disease as well as in the prediction of disease course and treatment response. Most of this work thus far has focused on diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity, although other retinal diseases have started to be explored as well. Summary Artificial intelligence is well-suited to transform the practice of ophthalmology. A basic understanding of the technology is important for its effective implementation and growth.
|
23
|
Yanagihara RT, Lee CS, Ting DSW, Lee AY. Methodological Challenges of Deep Learning in Optical Coherence Tomography for Retinal Diseases: A Review. Transl Vis Sci Technol 2020; 9:11. [PMID: 32704417 PMCID: PMC7347025 DOI: 10.1167/tvst.9.2.11] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI)-based automated classification and segmentation of optical coherence tomography (OCT) features have become increasingly popular. However, the 3-dimensional volumetric nature of OCT has made it challenging to develop an algorithm that generalizes across all patient populations and OCT devices. Several recent studies have reported high diagnostic performance for AI models; however, significant methodological challenges still stand in the way of applying these models in real-world clinical practice. The lack of large image datasets from multiple OCT devices, nonstandardized imaging and post-processing protocols between devices, limited graphics processing unit capabilities for exploiting 3-dimensional features, and inconsistency in reporting metrics are major hurdles to enabling AI for OCT analyses. We discuss these issues and present possible solutions.
Affiliation(s)
- Ryan T Yanagihara
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA
- Cecilia S Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Aaron Y Lee
- Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA
- eScience Institute, University of Washington, Seattle, WA, USA
|