51
Eleiwa T, Elsawy A, Özcan E, Abou Shousha M. Automated diagnosis and staging of Fuchs' endothelial cell corneal dystrophy using deep learning. Eye and Vision 2020;7:44. PMID: 32884962; PMCID: PMC7460770; DOI: 10.1186/s40662-020-00209-z
Abstract
Background To describe the diagnostic performance of a deep learning algorithm in discriminating early-stage Fuchs' endothelial corneal dystrophy (FECD) without clinically evident corneal edema from healthy and late-stage FECD eyes using high-definition optical coherence tomography (HD-OCT). Methods In this observational case-control study, 104 eyes (53 FECD eyes and 51 healthy controls) received HD-OCT imaging (Envisu R2210, Bioptigen, Buffalo Grove, IL, USA) using a 6 mm radial scan pattern centered on the corneal vertex. FECD was clinically categorized as early-stage (without corneal edema) or late-stage (with corneal edema). A total of 18,720 anterior segment optical coherence tomography (AS-OCT) images (9180 healthy; 5400 early-stage FECD; 4140 late-stage FECD) of 104 eyes (81 patients) were used to develop and validate a deep learning classification network to differentiate early-stage FECD eyes from healthy eyes and those with clinical edema. Using 5-fold cross-validation on a dataset of 11,340 OCT images (63 eyes), the network was trained with 80% of these images (3420 healthy; 3060 early-stage FECD; 2700 late-stage FECD) and tested with the remaining 20% (720 healthy; 720 early-stage FECD; 720 late-stage FECD). Thereafter, a final model was trained on the entire 11,340-image dataset and validated on the remaining 7380 images from unseen AS-OCT scans of 41 eyes (5040 healthy; 1620 early-stage FECD; 720 late-stage FECD). Learned features were visualized, and the area under the curve (AUC), specificity, and sensitivity of the prediction outputs for healthy, early-stage, and late-stage FECD were computed. Results The final model achieved an AUC of 0.997 ± 0.005 with 91% sensitivity and 97% specificity in detecting early-stage FECD; an AUC of 0.974 ± 0.005 with 92% specificity and up to 100% sensitivity in detecting late-stage FECD; and an AUC of 0.998 ± 0.001 with 98% specificity and 99% sensitivity in discriminating healthy corneas from all FECD.
Conclusion The deep learning algorithm is an accurate, autonomous diagnostic tool for FECD with very high sensitivity and specificity, and it can grade FECD severity with high accuracy.
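The sensitivity and specificity figures reported above reduce to simple confusion-matrix ratios. A minimal sketch with made-up per-eye labels (illustration only, not data from the study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Binary classification: 1 = FECD, 0 = healthy.

    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    """
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical predictions for six eyes, for illustration only.
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```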
Affiliation(s)
- Taher Eleiwa
- Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, Florida 33136, USA; Department of Ophthalmology, Faculty of Medicine, Benha University, Benha, Egypt
- Amr Elsawy
- Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, Florida 33136, USA; Electrical and Computer Engineering, University of Miami, Coral Gables, Florida, USA
- Eyüp Özcan
- Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, Florida 33136, USA; Net Eye Medical Center, Gaziantep, Turkey
- Mohamed Abou Shousha
- Bascom Palmer Eye Institute, Miller School of Medicine, University of Miami, Miami, Florida 33136, USA; Electrical and Computer Engineering, University of Miami, Coral Gables, Florida, USA; Biomedical Engineering, University of Miami, Coral Gables, Florida, USA
52
Butola A, Prasad DK, Ahmad A, Dubey V, Qaiser D, Srivastava A, Senthilkumaran P, Ahluwalia BS, Mehta DS. Deep learning architecture "LightOCT" for diagnostic decision support using optical coherence tomography images of biological samples. Biomedical Optics Express 2020;11:5017-5031. PMID: 33014597; PMCID: PMC7510870; DOI: 10.1364/boe.395487
Abstract
Optical coherence tomography (OCT) is being increasingly adopted as a label-free and non-invasive technique for biomedical applications such as cancer and ocular disease diagnosis. Diagnostic information for these tissues is manifest in textural and geometric features of the OCT images, which human experts interpret and use for triage. However, conventional diagnosis is slow and limited by a shortage of such expertise. Here, a custom deep learning architecture, LightOCT, is proposed for the classification of OCT images into diagnostically relevant classes. LightOCT is a convolutional neural network with only two convolutional layers and a fully connected layer, yet it provides excellent training and test results on diverse OCT image datasets. We show that LightOCT achieves 98.9% accuracy in classifying 44 normal and 44 malignant (invasive ductal carcinoma) breast tissue volumetric OCT images, and >96% accuracy in classifying public datasets of ocular OCT images as normal, age-related macular degeneration, or diabetic macular edema. Additionally, we show ∼96% test accuracy for classifying retinal images as choroidal neovascularization, diabetic macular edema, drusen, or normal on a large public dataset of more than 100,000 images. The performance of the architecture is compared with transfer-learning-based deep neural networks, showing that LightOCT can provide significant diagnostic support for a variety of OCT images with sufficient training and minimal hyper-parameter tuning. The trained LightOCT networks for the three classification problems will be released online to support transfer learning on other datasets.
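A two-convolution-plus-fully-connected design of the kind described above is small enough to sketch end to end. Below is a NumPy toy forward pass in the same spirit; the patch size, filter counts, and class count are illustrative assumptions, not the published LightOCT configuration:

```python
import numpy as np

def conv2d(x, w):
    """'Valid' cross-correlation. x: (C, H, W); w: (F, C, kh, kw) -> (F, H-kh+1, W-kw+1)."""
    C, H, W = x.shape
    F, _, kh, kw = w.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[f])
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((1, 12, 12))           # toy single-channel OCT patch
w1 = 0.1 * rng.standard_normal((4, 1, 3, 3))     # conv layer 1: 4 filters
w2 = 0.1 * rng.standard_normal((8, 4, 3, 3))     # conv layer 2: 8 filters
a1 = np.maximum(conv2d(img, w1), 0)              # ReLU activations, (4, 10, 10)
a2 = np.maximum(conv2d(a1, w2), 0)               # (8, 8, 8)
flat = a2.reshape(-1)                            # flatten for the dense layer
wfc = 0.1 * rng.standard_normal((3, flat.size))  # fully connected -> 3 classes
logits = wfc @ flat
probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # softmax over e.g. {normal, AMD, DME}
```

A real implementation would of course learn the weights by backpropagation; the point here is only how little machinery the architecture needs.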
Affiliation(s)
- Ankit Butola
- Bio-photonics Laboratory, Department of Physics, Indian Institute of Technology Delhi, Hauz-Khas, New Delhi 110016, India
- Dilip K. Prasad
- School of Computer Science & Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, Norway
- Vishesh Dubey
- Bio-photonics Laboratory, Department of Physics, Indian Institute of Technology Delhi, Hauz-Khas, New Delhi 110016, India
- Darakhshan Qaiser
- Department of Surgical Disciplines, All India Institute of Medical Science, Ansari Nagar, New Delhi 110029, India
- Anurag Srivastava
- Department of Surgical Disciplines, All India Institute of Medical Science, Ansari Nagar, New Delhi 110029, India
- Dalip Singh Mehta
- Bio-photonics Laboratory, Department of Physics, Indian Institute of Technology Delhi, Hauz-Khas, New Delhi 110016, India
53
Automatic Segmentation of Macular Edema in Retinal OCT Images Using Improved U-Net++. Applied Sciences-Basel 2020. DOI: 10.3390/app10165701
Abstract
The number and volume of retinal macular edemas are important indicators for screening and diagnosing retinopathy. Because existing methods segment diverse edemas in retinal optical coherence tomography (OCT) images poorly, this paper proposes a new method for automatic segmentation of macular edema regions in retinal OCT images using an improved U-Net++. The proposed method makes full use of the re-designed skip pathways and dense convolution blocks of U-Net++, reduces the semantic gap between the feature maps of the encoder and decoder sub-networks, and adds an improved ResNet as the backbone, which makes feature extraction in the edema regions more accurate and improves the segmentation. The method was trained and validated on the public Duke University dataset, and the experiments demonstrated that it not only improves the overall segmentation but also significantly improves segmentation precision for diverse edemas across multiple regions, while reducing the error in the counted number of edema regions.
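The re-designed skip pathways mentioned above compute each decoder node from all same-level predecessors plus an upsampled node from the level below. A toy NumPy sketch of that wiring, with a stand-in averaging block in place of real convolution blocks (an illustration of the indexing only, not the paper's network):

```python
import numpy as np

def pool(x):
    """2x2 mean-pooling 'encoder' step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling 'decoder' step."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def H(*feats):
    """Stand-in for a dense convolution block: average the concatenated inputs."""
    return np.mean(np.stack(feats), axis=0)

x = np.arange(16.0).reshape(4, 4)      # toy input "OCT image"
X = {(0, 0): x}
X[(1, 0)] = pool(X[(0, 0)])            # encoder backbone
X[(2, 0)] = pool(X[(1, 0)])
depth = 3
for j in range(1, depth):              # U-Net++ nested skip pathways:
    for i in range(depth - j):         # X[i,j] = H(X[i,0..j-1], up(X[i+1,j-1]))
        X[(i, j)] = H(*[X[(i, k)] for k in range(j)], up(X[(i + 1, j - 1)]))

seg = X[(0, depth - 1)]                # full-resolution output node
```

Every node at level i and column j sees all earlier same-level feature maps, which is what narrows the encoder/decoder semantic gap.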
54
Sorrentino FS, Jurman G, De Nadai K, Campa C, Furlanello C, Parmeggiani F. Application of Artificial Intelligence in Targeting Retinal Diseases. Current Drug Targets 2020;21:1208-1215. PMID: 32640954; DOI: 10.2174/1389450121666200708120646
Abstract
Retinal diseases affect an increasing number of patients worldwide because of the aging population. Demand for diagnostic imaging in ophthalmology is ramping up, while the number of specialists keeps shrinking. Cutting-edge technologies embedding artificial intelligence (AI) algorithms are thus advocated to help ophthalmologists perform their clinical tasks and to provide a source of novel biomarkers. In particular, optical coherence tomography (OCT) evaluation of the retina can be augmented by machine learning and deep learning algorithms to detect early, qualitatively localize, and quantitatively measure epi-, intra-, or subretinal abnormalities and pathological features of macular or neural diseases. In this paper, we discuss the use of AI to improve the efficacy and accuracy of retinal imaging in diseases increasingly treated by intravitreal vascular endothelial growth factor (VEGF) inhibitors (i.e., anti-VEGF drugs), including integration and interpretation features in the process. We review recent AI advances in diabetic retinopathy, age-related macular degeneration, and retinopathy of prematurity that envision a potentially key role for highly automated systems in screening, early diagnosis, grading, and individualized therapy. We discuss the benefits and critical aspects of automating the evaluation of disease activity, recurrences, the timing of retreatment, and therapeutically relevant novel targets in ophthalmology. The impact of large-scale deployment of AI to optimize clinical assistance and encourage tailored therapies for distinct patterns of retinal disease is also discussed.
Affiliation(s)
- Giuseppe Jurman
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
- Katia De Nadai
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
- Claudio Campa
- Department of Surgical Specialties, Sant'Anna Hospital, Azienda Ospedaliero Universitaria di Ferrara, Ferrara, Italy
- Cesare Furlanello
- Unit of Predictive Models for Biomedicine and Environment - MPBA, Fondazione Bruno Kessler, Trento, Italy
- Francesco Parmeggiani
- Department of Morphology, Surgery and Experimental Medicine, University of Ferrara, Ferrara, Italy
55
Ganjee R, Ebrahimi Moghaddam M, Nourinia R. An unsupervised hierarchical approach for automatic intra-retinal cyst segmentation in spectral-domain optical coherence tomography images. Medical Physics 2020;47:4872-4884. PMID: 32609378; DOI: 10.1002/mp.14361
Abstract
PURPOSE Intra-retinal cysts (IRCs) are a manifestation of macular disorders that occur when retinal blood vessel damage leads to fluid leakage into the macular area. These abnormalities are efficiently visualized with optical coherence tomography (OCT) imaging, and affected patients must be monitored regularly for the presence of and changes in IRC regions. Automatic segmentation of IRCs can therefore help in tracking disease progression. METHODS In this study, IRC segmentation is accomplished by building three different masks in three unsupervised segmentation levels of a hierarchical framework. In the first level, the ROI mask (R-mask) is built and the retina area is cropped based on it. In the second level, the prune mask (P-mask) is built, significantly narrowing the search space toward the target objects. Finally, in the third level, the cyst mask (C-mask) is extracted by applying a Markov random field (MRF) model that employs intensity and contextual information. RESULTS The proposed method was evaluated on three datasets: OPTIMA, UMN, and KERMANY. The experiments showed that the method is effective, with mean Dice coefficients of 0.74, 0.75, and 0.79 against the intersection of the ground truths on the OPTIMA, UMN, and KERMANY datasets, respectively. CONCLUSION The proposed method outperforms the state-of-the-art methods on the OPTIMA and UMN datasets while achieving results comparable to the most recently proposed method on the KERMANY dataset.
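The Dice coefficient used for evaluation above is straightforward to compute from binary masks. A minimal NumPy sketch with toy masks (not OPTIMA/UMN/KERMANY data):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity: 2|P ∩ G| / (|P| + |G|) for boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

score = dice([[1, 1], [0, 0]], [[1, 0], [0, 0]])  # overlap 1; mask sizes 2 and 1
```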
Affiliation(s)
- Razieh Ganjee
- The Faculty of Computer Science and Engineering, Shahid Beheshti University G.C, Tehran, Iran
- Ramin Nourinia
- Ophthalmic Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
56
Wang C, Gan M, Zhang M, Li D. Adversarial convolutional network for esophageal tissue segmentation on OCT images. Biomedical Optics Express 2020;11:3095-3110. PMID: 32637244; PMCID: PMC7316031; DOI: 10.1364/boe.394715
Abstract
Automatic segmentation is important for esophageal OCT image processing, as it provides tissue characteristics such as shape and thickness for disease diagnosis. Existing automatic segmentation methods based on deep convolutional networks may not generate accurate results because of limited training data and variable layer shapes. This study proposes a novel adversarial convolutional network (ACN) that segments esophageal OCT images with a convolutional network trained by adversarial learning. The framework comprises a generator and a discriminator, both with U-Net-like fully convolutional architectures. The discriminator is a hybrid network that judges whether the generated results are real while simultaneously performing pixel classification. Adversarial training makes the discriminator more powerful, and the adversarial loss encodes high-order relationships among pixels, eliminating the need for post-processing. Experiments on segmenting esophageal OCT images from guinea pigs confirmed that the ACN outperforms several deep learning frameworks in pixel classification accuracy and improves the segmentation result. A potential clinical application of the ACN for detecting eosinophilic esophagitis (EoE), an esophageal disease, is also presented.
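The objective behind such adversarial segmentation couples a per-pixel segmentation loss with a term that rewards fooling the discriminator. A hedged NumPy sketch of that loss composition (binary masks and a made-up weighting `lam`; an illustration of the general scheme, not the paper's exact formulation):

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy between predicted probabilities p and targets y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def generator_loss(pred_mask, gt_mask, d_score_on_pred, lam=0.1):
    """Segmentation loss plus a weighted adversarial term.

    d_score_on_pred is the discriminator's probability that the generated
    segmentation is a real (ground-truth) mask; the generator wants it -> 1.
    """
    seg = bce(np.asarray(pred_mask, float), np.asarray(gt_mask, float))
    adv = bce(np.array([d_score_on_pred]), np.array([1.0]))
    return seg + lam * adv

loss = generator_loss([0.9, 0.1], [1.0, 0.0], d_score_on_pred=0.5)
```

Because the adversarial term depends on the discriminator's view of the whole mask, it can penalize implausible global structure that a purely per-pixel loss would miss.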
Affiliation(s)
- Cong Wang
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- These authors contributed equally to this work and should be considered co-first authors
- Meng Gan
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- These authors contributed equally to this work and should be considered co-first authors
- Miao Zhang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Deyin Li
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
58
Mojahed D, Ha RS, Chang P, Gan Y, Yao X, Angelini B, Hibshoosh H, Taback B, Hendon CP. Fully Automated Postlumpectomy Breast Margin Assessment Utilizing Convolutional Neural Network Based Optical Coherence Tomography Image Classification Method. Academic Radiology 2020;27:e81-e86. PMID: 31324579; DOI: 10.1016/j.acra.2019.06.018
Abstract
BACKGROUND The purpose of this study was to develop a deep learning classification approach to distinguish cancerous from noncancerous regions within optical coherence tomography (OCT) images of breast tissue, for potential intraoperative use in margin assessment. METHODS A custom ultrahigh-resolution OCT (UHR-OCT) system with an axial resolution of 2.7 μm and a lateral resolution of 5.5 μm was used. The algorithm used an A-scan-based classification scheme, and the convolutional neural network (CNN) was implemented with an 11-layer architecture consisting of serial 3 × 3 convolution kernels. Four tissue types were classified: adipose, stroma, ductal carcinoma in situ, and invasive ductal carcinoma. RESULTS The binary classification of cancer versus noncancer with the proposed CNN achieved 94% accuracy, 96% sensitivity, and 92% specificity. The mean five-fold validation F1 score was highest for invasive ductal carcinoma (mean ± standard deviation, 0.89 ± 0.09) and adipose (0.79 ± 0.17), followed by stroma (0.74 ± 0.18) and ductal carcinoma in situ (0.65 ± 0.15). CONCLUSION It is feasible to use a CNN-based algorithm to accurately distinguish cancerous regions in OCT images. This fully automated method can overcome limitations of manual interpretation, including interobserver variability and speed, and may enable real-time intraoperative margin assessment.
59
Tong Y, Lu W, Yu Y, Shen Y. Application of machine learning in ophthalmic imaging modalities. Eye and Vision 2020;7:22. PMID: 32322599; PMCID: PMC7160952; DOI: 10.1186/s40662-020-00183-6
Abstract
In clinical ophthalmology, a variety of image-based diagnostic techniques have begun to offer unprecedented insights into eye diseases based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the human multilayered neuronal system, has shown astonishing success in visual and auditory recognition tasks, analyzing digital data in a comprehensive, rapid, and non-invasive manner. Bioinformatics has become a particular focus in medical imaging, driven by enhanced computing power and cloud storage, novel algorithms, and data generated in massive quantities. Machine learning (ML) is an important branch of AI. The potential of ML to automatically pinpoint, identify, and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnosis and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly its applications in ophthalmic imaging modalities.
Affiliation(s)
- Yan Tong
- Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Wei Lu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yue Yu
- Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yin Shen
- Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China; Medical Research Institute, Wuhan University, Wuhan, Hubei, China
60
de Moura J, Vidal PL, Novo J, Rouco J, Penedo MG, Ortega M. Intraretinal Fluid Pattern Characterization in Optical Coherence Tomography Images. Sensors 2020;20:2004. PMID: 32260062; PMCID: PMC7180444; DOI: 10.3390/s20072004
Abstract
Optical Coherence Tomography (OCT) has become a relevant imaging modality in ophthalmological clinical practice, as it offers a detailed representation of the eye fundus. It is currently one of the main means of identifying and characterizing intraretinal cystoid regions, a crucial task in the diagnosis of exudative macular disease and macular edema, which are among the main causes of blindness in developed countries. This work presents an exhaustive analysis of intensity- and texture-based descriptors for their identification and classification, using a complete set of 510 texture features, three state-of-the-art feature selection strategies, and seven representative classifier strategies. The methodology was validated and analyzed on an image dataset of 83 OCT scans, from which 1609 samples were extracted from both cystoid and non-cystoid regions. The tested configurations provided satisfactory results, reaching a mean cross-validation test accuracy of 92.69%. The most promising feature categories were the Gabor filters, the Histogram of Oriented Gradients (HOG), the Gray-Level Run-Length matrix (GLRL), and the Laws' texture filters (LAWS), which were consistently ranked in the top positions of the relevance rankings produced by all the feature selectors.
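Of the winning feature families above, Gabor filters are the easiest to sketch: a Gaussian envelope times an oriented sinusoid, whose responses summarize local texture. A minimal NumPy version; the kernel size, frequency, and σ are illustrative choices, not the study's configuration:

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.25, sigma=2.0):
    """Real (cosine-phase) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# One texture feature per orientation: absolute filter response on a patch.
rng = np.random.default_rng(0)
patch = rng.standard_normal((9, 9))                  # toy OCT patch
feats = [float(np.abs(np.sum(patch * gabor_kernel(theta=t))))
         for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

In a full pipeline the kernels would be convolved across the image at several frequencies, and statistics of the response maps would feed the feature selectors.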
Affiliation(s)
- Joaquim de Moura
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Plácido L. Vidal (corresponding author)
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Jorge Novo
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- José Rouco
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Manuel G. Penedo
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Marcos Ortega
- Centro de investigación CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
61
Wu C, Qiao Z, Zhang N, Li X, Fan J, Song H, Ai D, Yang J, Huang Y. Phase unwrapping based on a residual en-decoder network for phase images in Fourier domain Doppler optical coherence tomography. Biomedical Optics Express 2020;11:1760-1771. PMID: 32341846; PMCID: PMC7173896; DOI: 10.1364/boe.386101
Abstract
To solve the phase unwrapping problem for phase images in Fourier domain Doppler optical coherence tomography (DOCT), we propose a deep learning-based residual en-decoder network (REDN). In our approach, we reformulate recovery of the true phase as a semantic segmentation task: predicting the integer multiple of 2π to add at each pixel. The proposed REDN architecture provides recognition performance with pixel-level accuracy. Because noise- and wrapping-free phase images for training cannot be obtained from DOCT systems, we used simulated images synthesized with the background-noise characteristics of DOCT phase images. We evaluated the method on simulated images and on DOCT phase images of milk flowing in a plastic-tube phantom and of a mouse artery, and compared it with the recently proposed deep learning-based DeepLabV3+ and PhaseNet signal phase unwrapping methods and with the traditional modified networking programming (MNP) method. Both visual inspection and quantitative evaluation based on accuracy, specificity, sensitivity, root-mean-square error, total variation, and processing time demonstrate the robustness, effectiveness, and superiority of our method. The proposed REDN method will benefit accurate and fast diagnosis and evaluation based on DOCT phase images when the detected phase is wrapped, and will enrich the deep learning-based image processing platform for DOCT images.
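The reformulation described above treats unwrapping as predicting, per pixel, the integer k with φ_true = φ_wrapped + 2πk, so the k-map becomes a semantic segmentation target. A NumPy sketch of generating such labels from a synthetic ramp (illustrative only, not the REDN training pipeline):

```python
import numpy as np

def wrap(phi):
    """Wrap phase to [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

true_phase = np.linspace(0.0, 6.0 * np.pi, 50)   # smooth ramp crossing several 2*pi bounds
wrapped = wrap(true_phase)
# Integer class label per pixel: how many multiples of 2*pi were removed by wrapping.
k = np.round((true_phase - wrapped) / (2.0 * np.pi)).astype(int)
recovered = wrapped + 2.0 * np.pi * k            # exact reconstruction from (wrapped, k)
```

A network trained to predict `k` from `wrapped` (here classes 0..3) then performs unwrapping as ordinary per-pixel classification.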
Affiliation(s)
- Chuanchao Wu
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Zhengyu Qiao
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Nan Zhang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Xiaochen Li
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Yong Huang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
62
MDAN-UNet: Multi-Scale and Dual Attention Enhanced Nested U-Net Architecture for Segmentation of Optical Coherence Tomography Images. Algorithms 2020. DOI: 10.3390/a13030060
Abstract
Optical coherence tomography (OCT) is an optical high-resolution imaging technique for ophthalmic diagnosis. In this paper, we take advantage of multi-scale input, multi-scale side output, and a dual attention mechanism to present an enhanced nested U-Net architecture (MDAN-UNet), a powerful fully convolutional network for automatic end-to-end segmentation of OCT images. We evaluated two versions of MDAN-UNet (MDAN-UNet-16 and MDAN-UNet-32) on two publicly available benchmark datasets, the Duke Diabetic Macular Edema (DME) dataset and the RETOUCH dataset, in comparison with other state-of-the-art segmentation methods. Our experiments demonstrate that MDAN-UNet-32 achieved the best performance, followed by MDAN-UNet-16 with fewer parameters, for multi-layer segmentation and multi-fluid segmentation, respectively.
63
An Y, Meng H, Gao Y, Tong T, Zhang C, Wang K, Tian J. Application of machine learning method in optical molecular imaging: a review. Science China Information Sciences 2020;63:111101. DOI: 10.1007/s11432-019-2708-1
64
Beyond Performance Metrics: Automatic Deep Learning Retinal OCT Analysis Reproduces Clinical Trial Outcome. Ophthalmology 2019;127:793-801. PMID: 32019699; DOI: 10.1016/j.ophtha.2019.12.015
Abstract
PURPOSE To validate the efficacy of a fully automatic, deep learning-based segmentation algorithm beyond conventional performance metrics by measuring the primary outcome of a clinical trial for macular telangiectasia type 2 (MacTel2). DESIGN Evaluation of diagnostic test or technology. PARTICIPANTS A total of 92 eyes from 62 participants with MacTel2 from a phase 2 clinical trial (NCT01949324) randomized to 1 of 2 treatment groups. METHODS The ellipsoid zone (EZ) defect areas were measured on spectral domain OCT images of each eye at 2 time points (baseline and month 24) by a fully automatic, deep learning-based segmentation algorithm. The change in EZ defect area from baseline to month 24 was calculated and analyzed according to the clinical trial protocol. MAIN OUTCOME MEASURE Difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups. RESULTS The difference in the change in EZ defect area from baseline to month 24 between the 2 treatment groups measured by the fully automatic segmentation algorithm was 0.072±0.035 mm2 (P = 0.021). This was comparable to the outcome of the clinical trial using semiautomatic measurements by expert readers, 0.065±0.033 mm2 (P = 0.025). CONCLUSIONS The fully automatic segmentation algorithm was as accurate as semiautomatic expert segmentation in assessing EZ defect areas and reliably reproduced the statistically significant primary outcome measure of the clinical trial. Validating an automatic segmentation algorithm on the primary clinical trial end point in this way provides a robust gauge of its clinical applicability.
65
Wu M, Cai X, Chen Q, Ji Z, Niu S, Leng T, Rubin DL, Park H. Geographic atrophy segmentation in SD-OCT images using synthesized fundus autofluorescence imaging. Computer Methods and Programs in Biomedicine 2019; 182:105101. [PMID: 31600644 DOI: 10.1016/j.cmpb.2019.105101] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Received: 06/11/2019] [Revised: 09/04/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate assessment of geographic atrophy (GA) is critical for the diagnosis and therapy of non-exudative age-related macular degeneration (AMD). Herein, we propose a novel GA segmentation framework for spectral-domain optical coherence tomography (SD-OCT) images that employs synthesized fundus autofluorescence (FAF) images. METHODS An en-face OCT image is created via restricted sub-volume projection of three-dimensional OCT data. A GA region-aware conditional generative adversarial network is employed to generate a plausible FAF image from the en-face OCT image. The network balances consistency between the entire synthesized FAF image and the lesion. We use a fully convolutional deep network architecture to segment the GA region from the multimodal images, where features of the en-face OCT and synthesized FAF images are fused at the front end of the network. RESULTS Experimental results for 56 SD-OCT scans with GA indicate that our synthesis algorithm can generate high-quality synthesized FAF images and that the proposed segmentation network achieves a Dice similarity coefficient, an overlap ratio, and an absolute area difference of 87.2%, 77.9%, and 11.0%, respectively. CONCLUSION We report an automatic GA segmentation method utilizing synthesized FAF images. SIGNIFICANCE Our method is effective for multimodal segmentation of the GA region and can improve AMD treatment.
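The three evaluation metrics quoted in this abstract (Dice similarity coefficient, overlap ratio, and absolute area difference) can be sketched on binary masks as follows; this is an illustrative sketch of the standard definitions, not the authors' implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def overlap_ratio(a, b):
    """Overlap ratio (Jaccard index): intersection over union."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def abs_area_diff(a, b):
    """Absolute area difference relative to the reference mask b."""
    return abs(int(a.sum()) - int(b.sum())) / b.sum()
```

In practice these are computed per scan and then averaged over the test set, as in the 87.2%/77.9%/11.0% figures reported above.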
Affiliation(s)
- Menglin Wu, School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Xinxin Cai, School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Qiang Chen, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Zexuan Ji, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Sijie Niu, School of Information Science and Engineering, University of Jinan, Jinan, China
- Theodore Leng, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, CA, USA
- Daniel L Rubin, Department of Radiology and Medicine (Biomedical Informatics Research) and Ophthalmology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Hyunjin Park, School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, South Korea; Center for Neuroscience Imaging Research, Institute of Basic Science, Suwon, South Korea
66
Expert-level Automated Biomarker Identification in Optical Coherence Tomography Scans. Sci Rep 2019; 9:13605. [PMID: 31537854 PMCID: PMC6753124 DOI: 10.1038/s41598-019-49740-7] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Received: 02/18/2019] [Accepted: 08/29/2019] [Indexed: 12/20/2022]
Abstract
In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies in use today can visualize these markers, Optical Coherence Tomography (OCT) is often the tool of choice due to its ability to image retinal structures in three dimensions at micrometer resolution. With widespread use in clinical routine and the growing prevalence of chronic retinal conditions, however, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Automated analysis of scans using machine learning algorithms provides a cost-effective and reliable alternative to assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by biomarker distributions. These can then be used to classify scans based on their underlying pathology in a device-independent way.
67
Bogunovic H, Venhuizen F, Klimscha S, Apostolopoulos S, Bab-Hadiashar A, Bagci U, Beg MF, Bekalo L, Chen Q, Ciller C, Gopinath K, Gostar AK, Jeon K, Ji Z, Kang SH, Koozekanani DD, Lu D, Morley D, Parhi KK, Park HS, Rashno A, Sarunic M, Shaikh S, Sivaswamy J, Tennakoon R, Yadav S, De Zanet S, Waldstein SM, Gerendas BS, Klaver C, Sanchez CI, Schmidt-Erfurth U. RETOUCH: The Retinal OCT Fluid Detection and Segmentation Benchmark and Challenge. IEEE Transactions on Medical Imaging 2019; 38:1858-1874. [PMID: 30835214 DOI: 10.1109/tmi.2019.2901398] [Citation(s) in RCA: 75] [Impact Index Per Article: 15.0] [Indexed: 05/23/2023]
Abstract
Retinal swelling due to the accumulation of fluid is associated with the most vision-threatening retinal diseases. Optical coherence tomography (OCT) is the current standard of care for assessing the presence and quantity of retinal fluid and for image-guided treatment management. Deep learning methods have made their impact across medical imaging, and many retinal OCT analysis methods have been proposed. However, it is currently not clear how successful they are in interpreting retinal fluid on OCT, owing to the lack of standardized benchmarks. To address this, we organized the RETOUCH challenge in conjunction with MICCAI 2017, with eight teams participating. The challenge consisted of two tasks: fluid detection and fluid segmentation. For the first time, it featured all three retinal fluid types, with annotated images provided by two clinical centers and acquired with devices from the three most common OCT vendors, from patients with two different retinal diseases. The analysis revealed that in the detection task, the performance of automated fluid detection was within the inter-grader variability. In the segmentation task, however, fusing the automated methods produced segmentations superior to all individual methods, indicating the need for further improvements in segmentation performance.
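The fusion finding above (combined automated methods beating every individual method) can be sketched, for binary segmentation masks, as a simple pixelwise majority vote; the actual fusion rule used in the challenge analysis may differ:

```python
import numpy as np

def fuse_majority(masks):
    """Fuse binary segmentation masks from several methods by a
    strict pixelwise majority vote over the participating methods."""
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    votes = stack.sum(axis=0)          # how many methods marked each pixel
    return votes * 2 > len(masks)      # True where more than half agree
```

A pixel is kept only if more than half of the methods segmented it, which suppresses idiosyncratic errors of any single model.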
68
Narendra Rao TJ, Girish GN, Kothari AR, Rajan J. Deep Learning Based Sub-Retinal Fluid Segmentation in Central Serous Chorioretinopathy Optical Coherence Tomography Scans. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2019; 2019:978-981. [PMID: 31946057 DOI: 10.1109/embc.2019.8857105] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Indexed: 06/10/2023]
Abstract
The development of an automated sub-retinal fluid segmentation technique from optical coherence tomography (OCT) scans faces challenges such as noise and motion artifacts in OCT images, as well as variation in the size, shape, and location of fluid pockets within the retina. The ability of a fully convolutional neural network to automatically learn significant low-level features that differentiate subtle spatial variations makes it suitable for the retinal fluid segmentation task. Hence, a fully convolutional neural network is proposed in this work for the automatic segmentation of sub-retinal fluid in OCT scans of central serous chorioretinopathy (CSC) pathology. The proposed method was evaluated on a dataset of 15 OCT volumes, achieving an average Dice coefficient, precision, and recall of 0.91, 0.93, and 0.89, respectively, over the test set.
69
Guo Y, Hormel TT, Xiong H, Wang B, Camino A, Wang J, Huang D, Hwang TS, Jia Y. Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography. Biomedical Optics Express 2019; 10:3257-3268. [PMID: 31360599 PMCID: PMC6640834 DOI: 10.1364/boe.10.003257] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Received: 03/26/2019] [Revised: 05/24/2019] [Accepted: 05/28/2019] [Indexed: 05/06/2023]
Abstract
The capillary nonperfusion area (NPA) is a key quantifiable biomarker in the evaluation of diabetic retinopathy (DR) using optical coherence tomography angiography (OCTA). However, signal reduction artifacts caused by vitreous floaters, pupil vignetting, or defocus present significant obstacles to accurate quantification. We have developed a convolutional neural network, MEDnet-V2, to distinguish NPA from signal reduction artifacts in 6 × 6 mm² OCTA scans. The network achieves strong specificity and sensitivity for NPA detection across a wide range of DR severity and scan quality.
Affiliation(s)
- Yukun Guo, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA (these authors contributed equally)
- Tristan T. Hormel, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA (these authors contributed equally)
- Honglian Xiong, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA; School of Physics and Optoelectronic Engineering, Foshan University, Foshan, Guangdong 528000, China
- Bingjie Wang, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Acner Camino, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Jie Wang, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
- David Huang, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Thomas S. Hwang, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA
- Yali Jia, Casey Eye Institute, Oregon Health & Science University, Portland, OR 97239, USA; Department of Biomedical Engineering, Oregon Health & Science University, Portland, OR 97239, USA
70
Gao K, Niu S, Ji Z, Wu M, Chen Q, Xu R, Yuan S, Fan W, Chen Y, Dong J. Double-branched and area-constraint fully convolutional networks for automated serous retinal detachment segmentation in SD-OCT images. Computer Methods and Programs in Biomedicine 2019; 176:69-80. [PMID: 31200913 DOI: 10.1016/j.cmpb.2019.04.027] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Received: 12/19/2018] [Revised: 04/17/2019] [Accepted: 04/23/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Quantitative assessment of subretinal fluid in spectral domain optical coherence tomography (SD-OCT) images is crucial for the diagnosis of central serous chorioretinopathy. Traditional methods must segment the retinal layers before segmenting the subretinal fluid, and the layer segmentation strongly influences the fluid segmentation; we therefore aim to develop a deep learning model that segments subretinal fluid automatically, without layer segmentation. METHODS In this paper, we propose a novel image-to-image double-branched and area-constraint fully convolutional network (DA-FCN) for segmenting subretinal fluid in SD-OCT images. First, the dataset is extended by mirroring the images, which helps to overcome over-fitting during training. Then, double-branched structures are designed to learn shallow coarse and deep representations from the SD-OCT images. The DA-FCN model is trained directly on the images and the corresponding pixel-level ground truth. Finally, we introduce a novel supervision mechanism that joins the area loss L_A with the softmax loss L_S to learn more representative features. RESULTS A testing dataset with 52 SD-OCT volumes from 35 eyes of 35 patients is used to evaluate the proposed algorithm with cross-validation. For the three criteria of true positive volume fraction, Dice similarity coefficient, and positive predictive value, our method obtains (1) 94.3, 95.3, and 96.4 for dataset 1; (2) 97.3, 95.3, and 93.4 for dataset 2; (3) 93.0, 92.8, and 92.8 for dataset 3; and (4) 89.7, 90.1, and 92.6 for dataset 4. CONCLUSION In this work, we propose a novel fully convolutional network for the automatic segmentation of subretinal fluid. By constructing the double-branched structures and the area-constraint term, our method achieves higher segmentation accuracy without layer segmentation compared with other methods.
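The joint supervision described above (softmax loss L_S combined with an area loss L_A) might look like the following minimal sketch on per-pixel logits. The exact form of L_A in the paper is not reproduced here: this version penalizes the mismatch between predicted and true fluid area, and `lam` is a hypothetical weighting parameter:

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(logits, labels, lam=0.1):
    """Softmax cross-entropy L_S plus an area-constraint term L_A.
    logits: (N, 2) per-pixel class scores; labels: (N,) 0/1 ground truth.
    L_A here is |predicted fluid area - true fluid area| / N."""
    p = softmax(logits)
    n = len(labels)
    ls = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    la = abs(p[:, 1].sum() - labels.sum()) / n
    return ls + lam * la
```

The area term discourages the network from systematically over- or under-segmenting the fluid region even when per-pixel classification is confident.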
Affiliation(s)
- Kun Gao, Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Sijie Niu, Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Zexuan Ji, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Menglin Wu, School of Computer Science and Technology, Nanjing Tech University, Nanjing 210094, China
- Qiang Chen, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Rongbin Xu, Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Songtao Yuan, Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Wen Fan, Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210094, China
- Yuehui Chen, Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Jiwen Dong, Shandong Provincial Key Laboratory of Network based Intelligent Computing, School of Information Science and Engineering, University of Jinan, Jinan 250022, China
71
Li MX, Yu SQ, Zhang W, Zhou H, Xu X, Qian TW, Wan YJ. Segmentation of retinal fluid based on deep learning: application of three-dimensional fully convolutional neural networks in optical coherence tomography images. Int J Ophthalmol 2019; 12:1012-1020. [PMID: 31236362 PMCID: PMC6580226 DOI: 10.18240/ijo.2019.06.22] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 01/25/2019] [Accepted: 04/03/2019] [Indexed: 01/08/2023]
Abstract
AIM To explore a deep learning-based segmentation algorithm that enables accurate diagnosis and treatment of patients with retinal fluid. METHODS A two-dimensional (2D) fully convolutional network for retinal segmentation was employed. To address the category imbalance in retinal optical coherence tomography (OCT) images, the network parameters and loss function of the 2D fully convolutional network were modified. Because this network ignores the spatial correlations between corresponding positions in adjacent images, we proposed a three-dimensional (3D) fully convolutional network for segmentation of retinal OCT images. RESULTS The algorithm was evaluated by segmentation accuracy, Kappa coefficient, and F1 score. The 3D fully convolutional network proposed in this paper achieves an overall segmentation accuracy of 99.56%, a Kappa coefficient of 98.47%, and an F1 score for retinal fluid of 95.50%. CONCLUSION OCT image segmentation algorithms based on deep learning are primarily founded on the 2D convolutional network. The 3D network architecture proposed in this paper reduces the influence of category imbalance, realizes end-to-end segmentation of volume images, and achieves optimal segmentation results. The segmentation maps are practically the same as the manual annotations of doctors and can provide doctors with more accurate diagnostic data.
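The Kappa coefficient reported above measures agreement between predicted and ground-truth labels corrected for chance agreement; a minimal reference implementation (standard Cohen's kappa, not the authors' code):

```python
def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    pe = sum((y_true.count(k) / n) * (y_pred.count(k) / n) for k in labels)
    return (po - pe) / (1 - pe)
```

Kappa is preferred over raw accuracy under the category imbalance the abstract mentions, because a classifier that always predicts the majority class scores near zero kappa despite high accuracy.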
Affiliation(s)
- Meng-Xiao Li, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Su-Qin Yu, Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai 200080, China
- Wei Zhang, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Hao Zhou, Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai 200080, China
- Xun Xu, Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai 200080, China
- Tian-Wei Qian, Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai 200080, China
- Yong-Jing Wan, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
72
Keenan TD, Dharssi S, Peng Y, Chen Q, Agrón E, Wong WT, Lu Z, Chew EY. A Deep Learning Approach for Automated Detection of Geographic Atrophy from Color Fundus Photographs. Ophthalmology 2019; 126:1533-1540. [PMID: 31358385 DOI: 10.1016/j.ophtha.2019.06.005] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Received: 01/04/2019] [Revised: 06/01/2019] [Accepted: 06/05/2019] [Indexed: 01/22/2023]
Abstract
PURPOSE To assess the utility of deep learning in the detection of geographic atrophy (GA) from color fundus photographs and to explore its potential utility in detecting central GA (CGA). DESIGN A deep learning model was developed to detect the presence of GA in color fundus photographs, and 2 additional models were developed to detect CGA in different scenarios. PARTICIPANTS A total of 59 812 color fundus photographs from longitudinal follow-up of 4582 participants in the Age-Related Eye Disease Study (AREDS) dataset. Gold standard labels were from human expert reading center graders using a standardized protocol. METHODS A deep learning model was trained to use color fundus photographs to predict GA presence in a population of eyes ranging from no AMD to advanced AMD. A second model was trained to predict CGA presence in the same population. A third model was trained to predict CGA presence in the subset of eyes with GA. Five-fold cross-validation was used for training and testing. For comparison with human clinician performance, model performance was compared with that of 88 retinal specialists. MAIN OUTCOME MEASURES Area under the curve (AUC), accuracy, sensitivity, specificity, and precision. RESULTS The deep learning models (GA detection, CGA detection from all eyes, and centrality detection from GA eyes) had AUCs of 0.933-0.976, 0.939-0.976, and 0.827-0.888, respectively. The GA detection model had accuracy, sensitivity, specificity, and precision of 0.965 (95% confidence interval [CI], 0.959-0.971), 0.692 (0.560-0.825), 0.978 (0.970-0.985), and 0.584 (0.491-0.676), respectively, compared with 0.975 (0.971-0.980), 0.588 (0.468-0.707), 0.982 (0.978-0.985), and 0.368 (0.230-0.505) for the retinal specialists. The CGA detection model had values of 0.966 (0.957-0.975), 0.763 (0.641-0.885), 0.971 (0.960-0.982), and 0.394 (0.341-0.448). The centrality detection model had values of 0.762 (0.725-0.799), 0.782 (0.618-0.945), 0.729 (0.543-0.916), and 0.799 (0.710-0.888).
CONCLUSIONS A deep learning model demonstrated high accuracy for the automated detection of GA. The AUC was noninferior to that of human retinal specialists. Deep learning approaches may also be applied to the identification of CGA. The code and pretrained models are publicly available at https://github.com/ncbi-nlp/DeepSeeNet.
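The AUC figures quoted above can be computed directly as the Mann-Whitney probability that a positive example outranks a negative one; sensitivity and specificity follow from a score threshold. A minimal sketch of these standard definitions (not the study's evaluation code):

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score of a positive > score of a negative), ties count 0.5."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def sensitivity_specificity(scores_pos, scores_neg, thr):
    """Sensitivity and specificity at a given decision threshold."""
    sens = sum(s >= thr for s in scores_pos) / len(scores_pos)
    spec = sum(s < thr for s in scores_neg) / len(scores_neg)
    return sens, spec
```

The pairwise formulation is O(n·m) and fine for illustration; production code typically uses the rank-based equivalent.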
Affiliation(s)
- Tiarnan D Keenan, Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Shazia Dharssi, Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Yifan Peng, National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Qingyu Chen, National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Elvira Agrón, Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Wai T Wong, Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland; Unit on Microglia, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Zhiyong Lu, National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, Maryland
- Emily Y Chew, Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
73
Lu D, Heisler M, Lee S, Ding GW, Navajas E, Sarunic MV, Beg MF. Deep-learning based multiclass retinal fluid segmentation and detection in optical coherence tomography images using a fully convolutional neural network. Med Image Anal 2019; 54:100-110. [DOI: 10.1016/j.media.2019.02.011] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Received: 01/11/2018] [Revised: 02/15/2019] [Accepted: 02/15/2019] [Indexed: 11/28/2022]
74
Wang C, Gan M, Yang N, Yang T, Zhang M, Nao S, Zhu J, Ge H, Wang L. Fast esophageal layer segmentation in OCT images of guinea pigs based on sparse Bayesian classification and graph search. Biomedical Optics Express 2019; 10:978-994. [PMID: 30800527 PMCID: PMC6377884 DOI: 10.1364/boe.10.000978] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 11/30/2018] [Revised: 01/11/2019] [Accepted: 01/11/2019] [Indexed: 05/02/2023]
Abstract
Endoscopic optical coherence tomography (OCT) devices are capable of generating high-resolution images of esophageal structures at high speed. To make the acquired data easy to interpret and to reveal their clinical significance, an automatic segmentation algorithm is needed. This work proposes a fast algorithm combining sparse Bayesian learning and graph search (termed SBGS) to automatically identify six layer boundaries in esophageal OCT images. SBGS first extracts features, including multi-scale gradients, averages, and Gabor wavelet coefficients, to train a sparse Bayesian classifier, which is used to generate probability maps indicating boundary positions. Given these probability maps, a graph search method is employed to produce the final continuous, smooth boundaries. The segmentation performance of the proposed SBGS algorithm was verified on esophageal OCT images from healthy guinea pigs and eosinophilic esophagitis (EoE) models. Experiments confirmed that the SBGS method achieves robust esophageal segmentation for all tested cases. In addition, benefiting from its sparse model, SBGS is significantly more efficient than other widely used techniques.
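The graph-search step described above, which turns a per-pixel boundary probability map into one continuous boundary, can be sketched as a column-wise dynamic program that picks one row per column while limiting the vertical jump between adjacent columns. This is a simplified illustration of the general technique, not the SBGS implementation:

```python
import numpy as np

def trace_boundary(prob, max_step=1):
    """Pick one row per image column to maximize total boundary probability,
    allowing at most `max_step` rows of change between adjacent columns."""
    rows, cols = prob.shape
    score = prob[:, 0].copy()                 # best path score ending at each row
    back = np.zeros((rows, cols), dtype=int)  # backpointers for path recovery
    for c in range(1, cols):
        new = np.full(rows, -np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = np.argmax(score[lo:hi]) + lo
            new[r] = score[prev] + prob[r, c]
            back[r, c] = prev
        score = new
    # backtrack from the best final row
    r = int(np.argmax(score))
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r, c]
        path.append(r)
    return path[::-1]
```

The smoothness constraint (`max_step`) is what keeps the recovered boundary continuous even where the classifier's probability map is noisy.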
Affiliation(s)
- Cong Wang, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Meng Gan, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Na Yang, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Ting Yang, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Miao Zhang, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Sihan Nao, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Jing Zhu, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Hongyu Ge, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Lirong Wang, School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
75
dos Santos VA, Schmetterer L, Stegmann H, Pfister M, Messner A, Schmidinger G, Garhofer G, Werkmeister RM. CorneaNet: fast segmentation of cornea OCT scans of healthy and keratoconic eyes using deep learning. Biomedical Optics Express 2019; 10:622-641. [PMID: 30800504 PMCID: PMC6377876 DOI: 10.1364/boe.10.000622] [Citation(s) in RCA: 76] [Impact Index Per Article: 15.2] [Received: 10/12/2018] [Revised: 11/29/2018] [Accepted: 12/07/2018] [Indexed: 05/08/2023]
Abstract
Deep learning has dramatically improved object recognition, speech recognition, medical image analysis, and many other fields. Optical coherence tomography (OCT) has become a standard-of-care imaging modality in ophthalmology. We asked whether deep learning could be used to segment cornea OCT images. Using a custom-built ultrahigh-resolution OCT system, we scanned 72 healthy eyes and 70 keratoconic eyes. In total, 20,160 images were labeled and used for training in a supervised learning approach. A custom neural network architecture called CorneaNet was designed and trained. Our results show that CorneaNet is able to segment both healthy and keratoconus images with high accuracy (validation accuracy: 99.56%). Thickness maps of the three main corneal layers (epithelium, Bowman's layer, and stroma) were generated both in healthy subjects and in subjects suffering from keratoconus. CorneaNet is more than 50 times faster than our previous algorithm. Our results show that deep learning algorithms can be used for OCT image segmentation and could be applied in various clinical settings. In particular, CorneaNet could be used for early detection of keratoconus and, more generally, to study other diseases that alter corneal morphology.
Affiliation(s)
- Leopold Schmetterer, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Austria; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Hannes Stegmann, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Austria
- Martin Pfister, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Austria; Institute of Applied Physics, Vienna University of Technology, Austria
- Alina Messner, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria
- Gerald Schmidinger, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Gerhard Garhofer, Department of Clinical Pharmacology, Medical University of Vienna, Austria
- René M. Werkmeister, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Austria
76
Ting DSW, Pasquale LR, Peng L, Campbell JP, Lee AY, Raman R, Tan GSW, Schmetterer L, Keane PA, Wong TY. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol 2019; 103:167-175. [PMID: 30361278 PMCID: PMC6362807 DOI: 10.1136/bjophthalmol-2018-313173] [Citation(s) in RCA: 571] [Impact Index Per Article: 114.2] [Received: 09/04/2018] [Revised: 09/17/2018] [Accepted: 09/23/2018] [Indexed: 12/18/2022]
Abstract
Artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest in recent years. DL has been widely adopted in image recognition, speech recognition, and natural language processing, but is only beginning to have an impact on healthcare. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of diabetic retinopathy and retinopathy of prematurity, the glaucoma-like disc, macular oedema, and age-related macular degeneration. DL in ocular imaging may be used in conjunction with telemedicine as a possible solution for screening, diagnosing, and monitoring major eye diseases in patients in primary care and community settings. Nonetheless, there are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of algorithm results, medicolegal issues, and physician and patient acceptance of AI 'black-box' algorithms. DL could potentially revolutionise how ophthalmology is practised in the future. This review provides a summary of the state-of-the-art DL systems described for ophthalmic applications, potential challenges in clinical deployment, and the path forward.
Affiliation(s)
- Daniel Shu Wei Ting, Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Louis R Pasquale, Department of Ophthalmology, Mt Sinai Hospital, New York City, New York, USA
- Lily Peng, Google AI Healthcare, Mountain View, California, USA
- John Peter Campbell, Casey Eye Institute, Oregon Health and Science University, Portland, Oregon, USA
- Aaron Y Lee, Department of Ophthalmology, University of Washington, School of Medicine, Seattle, Washington, USA
- Rajiv Raman, Vitreo-retinal Department, Sankara Nethralaya, Chennai, Tamil Nadu, India
- Gavin Siew Wei Tan, Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Leopold Schmetterer, Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore, Singapore; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Pearse A Keane, Vitreo-retinal Service, Moorfields Eye Hospital, London, UK
- Tien Yin Wong, Singapore Eye Research Institute, Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
77
Chen Z, Peng P, Shen H, Wei H, Ouyang P, Duan X. Region-segmentation strategy for Bruch's membrane opening detection in spectral domain optical coherence tomography images. Biomedical Optics Express 2019; 10:526-538. [PMID: 30800497 PMCID: PMC6377878 DOI: 10.1364/boe.10.000526] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Received: 11/01/2018] [Revised: 12/12/2018] [Accepted: 12/17/2018] [Indexed: 06/09/2023]
Abstract
Bruch's membrane opening (BMO) is an important biomarker in the progression of glaucoma. Bruch's membrane opening minimum rim width (BMO-MRW), cup-to-disc ratio in spectral domain optical coherence tomography (SD-OCT) and lamina cribrosa depth based on BMO are important measurable parameters for glaucoma diagnosis. The accuracy of measuring these parameters is significantly affected by BMO detection. In this paper, we propose a method for accurately detecting BMO in SD-OCT volumes automatically, reducing the impact of border tissue and vessel shadows. The method includes three stages: a coarse detection stage composed of retinal pigment epithelium layer segmentation, optic disc segmentation, and multi-modal registration; a fixed detection stage based on the U-net, in which BMO detection is transformed into a region segmentation problem and an area bias component is proposed in the loss function; and a post-processing stage based on the consistency of results to remove outliers. Experimental results show that the proposed method outperforms previous methods and achieves a mean error of 42.38 μm.
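The abstract names an area-bias component in the loss but does not spell out its form. As a rough illustration of the idea (a segmentation loss augmented with a penalty on the mismatch between predicted and ground-truth region areas), a minimal NumPy sketch might look as follows; the function name, the relative-area form of the penalty, and the `weight` parameter are all assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def area_bias_loss(pred, target, weight=0.1):
    """Binary cross-entropy plus an area-bias term penalizing the mismatch
    between predicted and ground-truth region areas.

    `pred` holds per-pixel probabilities in (0, 1); `target` is a 0/1 mask.
    Illustrative sketch only; the paper's exact formulation may differ.
    """
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    # Standard per-pixel binary cross-entropy.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Area bias: relative difference between predicted and true region areas.
    area_term = abs(pred.sum() - target.sum()) / max(target.sum(), 1.0)
    return bce + weight * area_term
```

The area term discourages systematic over- or under-segmentation of the BMO region even when per-pixel errors are small.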
Affiliation(s)
- Zailiang Chen
  - School of Information Science and Engineering, Central South University, Changsha 410083, China
- Peng Peng
  - School of Information Science and Engineering, Central South University, Changsha 410083, China
- Hailan Shen
  - School of Information Science and Engineering, Central South University, Changsha 410083, China
- Hao Wei
  - School of Information Science and Engineering, Central South University, Changsha 410083, China
- Pingbo Ouyang
  - The Second Xiangya Hospital of Central South University, Changsha 410011, China
- Xuanchu Duan
  - Changsha Aier Eye Hospital, Changsha 410015, China
78
Abdolmanafi A, Duong L, Dahdah N, Adib IR, Cheriet F. Characterization of coronary artery pathological formations from OCT imaging using deep learning. Biomedical Optics Express 2018; 9:4936-4960. [PMID: 30319913 PMCID: PMC6179392 DOI: 10.1364/boe.9.004936] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Received: 07/24/2018] [Revised: 09/13/2018] [Accepted: 09/14/2018] [Indexed: 05/18/2023]
Abstract
Coronary artery disease is a leading health hazard, causing pathological formations in coronary artery tissue. In severe cases, these can lead to myocardial infarction and sudden death. Optical coherence tomography (OCT) is an interferometric imaging modality that has recently been used in cardiology to characterize coronary artery tissue, providing high resolution in the range of 10 to 20 µm. In this study, we investigate different deep learning models for robust tissue characterization, learning the various intracoronary pathological formations caused by Kawasaki disease (KD) from OCT imaging. The experiments are performed on 33 retrospective cases comprising pullbacks of intracoronary cross-sectional images obtained from different pediatric patients with KD. Our approach evaluates deep features computed from three different pre-trained convolutional networks; a majority voting approach is then applied to produce the final classification result. The results demonstrate high accuracy, sensitivity, and specificity for each tissue (up to 0.99 ± 0.01). Hence, deep learning models, and the majority voting method in particular, are robust for automatic interpretation of OCT images.
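The majority-voting step described above is straightforward to sketch: each of the three pre-trained networks casts a class vote per sample, and the most frequent label wins. The tie-breaking rule below (falling back to the first network's vote) is an illustrative assumption; the abstract does not state the paper's exact rule.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class labels for one sample by majority vote.

    `predictions` is a list of labels, one from each network (the paper
    uses three). Ties fall back to the first model's vote (assumption).
    """
    label, n = Counter(predictions).most_common(1)[0]
    # With three voters a strict majority needs at least two agreeing votes.
    if n > len(predictions) // 2:
        return label
    return predictions[0]  # tie-break: trust the first network

def ensemble_predict(per_model_labels):
    """Vote across a batch: per_model_labels[m][i] is model m's label for
    sample i; returns one fused label per sample."""
    return [majority_vote(list(votes)) for votes in zip(*per_model_labels)]
```

Fusing independent classifiers this way typically reduces variance relative to any single network, which matches the robustness the authors report.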
Affiliation(s)
- Atefeh Abdolmanafi
  - Dept. of Software and IT Engineering, École de technologie supérieure, Montréal, Canada
- Luc Duong
  - Dept. of Software and IT Engineering, École de technologie supérieure, Montréal, Canada
- Nagib Dahdah
  - Div. of Pediatric Cardiology and Research Center, Centre Hospitalier Universitaire Sainte-Justine, Montréal, Canada
- Farida Cheriet
  - Dept. of Computer Engineering, École Polytechnique de Montréal, Montréal, Canada
79
Vidal PL, de Moura J, Novo J, Penedo MG, Ortega M. Intraretinal fluid identification via enhanced maps using optical coherence tomography images. Biomedical Optics Express 2018; 9:4730-4754. [PMID: 30319899 PMCID: PMC6179401 DOI: 10.1364/boe.9.004730] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Received: 05/14/2018] [Revised: 07/16/2018] [Accepted: 08/12/2018] [Indexed: 05/28/2023]
Abstract
Age-related macular degeneration (AMD) and diabetic macular edema (DME) are among the main causes of blindness in developed countries. Both diseases present, as a common symptom, the appearance of cystoid fluid regions inside the retinal layers. Optical coherence tomography (OCT) is one of the main medical imaging techniques for the early diagnosis and monitoring of AMD and DME via intraretinal fluid detection and characterization. We present a novel methodology to identify these fluid accumulations by generating binary maps (offering a direct representation of these areas) and heat maps (encoding region confidence). To achieve this, a set of 312 intensity- and texture-based features was studied. The most relevant features were selected using the sequential forward selection (SFS) strategy and tested with three archetypal classifiers: LDC, SVM and the Parzen window. The most proficient classifier was then used to create the proposed maps. All of the tested classifiers returned satisfactory results, with the best achieving a mean test accuracy above 94% in all of the experiments. The suitability of the maps was evaluated in a screening context with three different datasets obtained with two different devices, testing the system's ability to work independently of the OCT device used. The map-creation experiments used 323 OCT images. Using only the binary maps, more than 91.33% of the images were correctly classified; with only the heat maps, the proposed methodology correctly separated 93.50% of the images.
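The sequential forward selection (SFS) strategy named above can be sketched generically: starting from an empty set, greedily add whichever feature most improves a scoring function, such as the cross-validated accuracy of a classifier trained on the candidate subset. The function names and the fixed stopping criterion `k` below are illustrative assumptions, not the paper's configuration.

```python
def sequential_forward_selection(features, evaluate, k):
    """Greedy SFS: repeatedly add the single feature that most improves
    `evaluate(subset)` (a score to maximize) until k features are chosen.

    `features` is a list of feature names or indices. Illustrative sketch;
    the paper pairs SFS with LDC, SVM and Parzen-window classifiers.
    """
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best_feat, best_score = None, float("-inf")
        for f in remaining:
            score = evaluate(selected + [f])
            if score > best_score:
                best_feat, best_score = f, score
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected
```

SFS is a wrapper method: it evaluates subsets through the downstream classifier rather than ranking features in isolation, which is why it can prune a 312-feature pool down to a compact discriminative subset.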
Affiliation(s)
- Plácido L. Vidal
  - Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
  - CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
- Joaquim de Moura
  - Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
  - CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
- Jorge Novo
  - Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
  - CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
- Manuel G. Penedo
  - Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
  - CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
- Marcos Ortega
  - Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
  - CITIC-Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
80
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018; 67:1-29. [PMID: 30076935 DOI: 10.1016/j.preteyeres.2018.07.004] [Citation(s) in RCA: 358] [Impact Index Per Article: 59.7] [Received: 05/30/2018] [Revised: 07/24/2018] [Accepted: 07/31/2018] [Indexed: 02/08/2023]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological data points can be analyzed rapidly and non-invasively in a comprehensive manner using artificial intelligence (AI). Methods based on machine learning (ML), and particularly deep learning (DL), are able to identify, localize and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks mimic the human brain's path to object recognition, learning pathological features from training sets (supervised ML) or even extrapolating from independently recognized patterns (unsupervised ML). AI-based methods of retinal analysis are diverse and differ widely in their applicability, interpretability and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening and diagnostic grading as well as guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects, and identification of relevant targets for novel therapeutic approaches. Predictive and prognostic capabilities further expand the potential benefit of AI in the retina, enabling personalized health care and large-scale management and empowering the ophthalmologist to provide high-quality diagnosis and therapy and to deal successfully with the complexity of 21st-century ophthalmology.
Affiliation(s)
- Ursula Schmidt-Erfurth
  - Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Amir Sadeghipour
  - Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Bianca S Gerendas
  - Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Sebastian M Waldstein
  - Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
- Hrvoje Bogunović
  - Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090, Vienna, Austria
81
Zhang G, Guan T, Shen Z, Wang X, Hu T, Wang D, He Y, Xie N. Fast phase retrieval in off-axis digital holographic microscopy through deep learning. Optics Express 2018; 26:19388-19405. [PMID: 30114112 DOI: 10.1364/oe.26.019388] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Received: 05/18/2018] [Accepted: 07/10/2018] [Indexed: 06/08/2023]
Abstract
Traditional digital holographic imaging algorithms need multiple iterations to obtain a focused reconstructed image, which is time-consuming. For phase retrieval, there is also the problem of phase compensation in addition to the focusing task. Here, a new method is proposed for fast digital focusing, in which a U-type convolutional neural network (U-net) is used to recover the original phase of microscopic samples. Generated datasets are used to simulate different degrees of defocus, verifying that the U-net can restore the original phase to a great extent while realizing phase compensation at the same time. We apply this method in the construction of a real-time off-axis digital holographic microscope and achieve substantial gains in imaging speed.
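For context, the conventional (non-learning) baseline that such a network accelerates is single-shot off-axis phase extraction by Fourier filtering: isolate the +1 diffraction order around the known carrier frequency, demodulate it to the center of the spectrum, and take the angle of the inverse transform. The sketch below assumes a square crop window of fixed half-width and a known carrier offset; both are illustrative choices, not details from the paper.

```python
import numpy as np

def extract_phase_offaxis(hologram, carrier):
    """Classic Fourier-filtering phase retrieval for an off-axis hologram.

    `carrier` is the (fy, fx) pixel offset of the +1 order in the shifted
    spectrum; the square window half-width is an illustrative assumption.
    Returns the wrapped phase in (-pi, pi].
    """
    F = np.fft.fftshift(np.fft.fft2(hologram))
    cy, cx = np.array(hologram.shape) // 2
    fy, fx = carrier
    w = min(hologram.shape) // 8          # half-width of the crop window
    sideband = np.zeros_like(F)
    # Copy the +1 order to the spectrum center (demodulation).
    sideband[cy - w:cy + w, cx - w:cx + w] = F[cy + fy - w:cy + fy + w,
                                               cx + fx - w:cx + fx + w]
    field = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.angle(field)
```

This recovers only the wrapped phase and still requires separate unwrapping and aberration compensation, which is the part the paper's U-net approach folds into a single fast inference step.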