1
Liu X, Li X, Zhang Y, Wang M, Yao J, Tang J. Boundary-Repairing Dual-Path Network for Retinal Layer Segmentation in OCT Image with Pigment Epithelial Detachment. Journal of Imaging Informatics in Medicine 2024. PMID: 38740662. DOI: 10.1007/s10278-024-01093-y.
Abstract
Automatic retinal layer segmentation in optical coherence tomography (OCT) images is crucial for the diagnosis of ocular diseases. Current automatic methods work well on normal OCT images. However, pigment epithelial detachment (PED) dramatically alters the retinal structure, causing blurred boundaries and partial disappearance of Bruch's membrane (BM), which poses challenges to segmentation. To tackle these problems, we propose a novel dual-path U-shaped network for simultaneous layer segmentation and boundary regression. The network first uses a feature interaction fusion (FIF) module to strengthen boundary shape constraints in the layer path. To address partial BM disappearance and boundary blurring, we propose a layer boundary repair (LBR) module, which uses a contrastive loss to enhance the confidence of blurred boundary regions and refines the layer-boundary segmentation through a re-prediction head. In addition, we introduce a novel bilateral threshold distance map (BTDM) for the boundary path, which emphasizes information within boundary regions. Combined with the updated probability map, the BTDM yields topology-guaranteed segmentation results through a topology correction (TC) module. We evaluated the proposed network on two severely deformed datasets (OCTA-500 and Aier-PED) and one slightly deformed dataset (DUKE). The proposed method achieves an average Dice score of 94.26% on the OCTA-500 dataset, 1.5% higher than BAU-Net and better than the other compared methods. On the DUKE and Aier-PED datasets, it achieved average Dice scores of 91.65% and 95.75%, respectively.
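The Dice scores reported above can be computed in a few lines; below is a minimal sketch of the Dice similarity coefficient on binary layer masks (the toy arrays are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping layer masks.
a = np.array([[0, 1, 1, 0],
              [0, 1, 1, 0]])
b = np.array([[0, 1, 0, 0],
              [0, 1, 1, 0]])
score = dice_score(a, b)  # 2*3 / (4+3) = 6/7
```

Per-layer Dice is typically averaged over layers and B-scans to obtain figures such as the 94.26% above.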
Affiliation(s)
- Xiaoming Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, China.
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, 430065, China.
- Xiao Li
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, 430065, China
- Ying Zhang
- Wuhan Aier Eye Hospital of Wuhan University, Wuhan, China
- Man Wang
- Wuhan Aier Eye Hospital of Wuhan University, Wuhan, China
- Junping Yao
- Department of Ophthalmology, Tianyou Hospital Affiliated to Wuhan University of Science and Technology, Wuhan, China
- Jinshan Tang
- Department of Health Administration and Policy, College of Health and Human Services, George Mason University, Fairfax, VA, 22030, USA
2
Shen Y, Li J, Zhu W, Yu K, Wang M, Peng Y, Zhou Y, Guan L, Chen X. Graph Attention U-Net for Retinal Layer Surface Detection and Choroid Neovascularization Segmentation in OCT Images. IEEE Transactions on Medical Imaging 2023;42:3140-3154. PMID: 37022267. DOI: 10.1109/tmi.2023.3240757.
Abstract
Choroidal neovascularization (CNV) is a typical symptom of age-related macular degeneration (AMD) and one of the leading causes of blindness. Accurate segmentation of CNV and detection of retinal layers are critical for eye disease diagnosis and monitoring. In this paper, we propose a novel graph attention U-Net (GA-UNet) for retinal layer surface detection and CNV segmentation in optical coherence tomography (OCT) images. Due to retinal layer deformation caused by CNV, it is challenging for existing models to segment CNV and detect retinal layer surfaces in the correct topological order. We propose two novel modules to address this challenge. The first is a graph attention encoder (GAE) in a U-Net model that automatically integrates topological and pathological knowledge of retinal layers into the U-Net structure to achieve effective feature embedding. The second is a graph decorrelation module (GDM) that takes the features reconstructed by the U-Net decoder as input, then decorrelates them and removes information unrelated to the retinal layers for improved surface detection. In addition, we propose a new loss function to maintain the correct topological order of retinal layers and the continuity of their boundaries. The proposed model learns graph attention maps automatically during training and performs retinal layer surface detection and CNV segmentation simultaneously with these attention maps during inference. We evaluated the proposed model on our private AMD dataset and another public dataset. Experimental results show that the proposed model outperformed the competing methods for retinal layer surface detection and CNV segmentation, setting a new state of the art on both datasets.
3
Xie H, Xu W, Wang YX, Wu X. Deep learning network with differentiable dynamic programming for retina OCT surface segmentation. Biomedical Optics Express 2023;14:3190-3202. PMID: 37497505. PMCID: PMC10368040. DOI: 10.1364/boe.492670.
Abstract
Multiple-surface segmentation in optical coherence tomography (OCT) images is a challenging problem, further complicated by the frequent presence of weak image boundaries. Recently, many deep learning-based methods have been developed for this task and yield remarkable performance. Unfortunately, due to the scarcity of training data in medical imaging, it is challenging for deep learning networks to learn the global structure of the target surfaces, including surface smoothness. To bridge this gap, this study proposes to seamlessly unify a U-Net for feature learning with a constrained differentiable dynamic programming module, achieving end-to-end learning for retinal OCT surface segmentation that explicitly enforces surface smoothness. The approach effectively utilizes feedback from the downstream model-optimization module to guide feature learning, yielding better enforcement of the global structure of the target surfaces. Experiments on the Duke AMD (age-related macular degeneration) and JHU MS (multiple sclerosis) OCT datasets for retinal layer segmentation demonstrated that the proposed method achieves subvoxel accuracy on both datasets, with mean absolute surface distance (MASD) errors of 1.88 ± 1.96 µm and 2.75 ± 0.94 µm, respectively, over all segmented surfaces.
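The MASD metric quoted above averages, per A-scan, the absolute difference between the predicted and reference surface row coordinates, then converts to microns. A minimal sketch (the `um_per_pixel` axial scale here is a placeholder, not a value from the paper):

```python
import numpy as np

def masd(pred_surf, gt_surf, um_per_pixel=1.0):
    """Mean absolute surface distance between two surfaces, each given
    as one row (depth) coordinate per image column (A-scan)."""
    pred_surf = np.asarray(pred_surf, dtype=float)
    gt_surf = np.asarray(gt_surf, dtype=float)
    return np.mean(np.abs(pred_surf - gt_surf)) * um_per_pixel

# Toy surfaces over four A-scans, in pixels.
pred = np.array([10.0, 11.0, 12.0, 13.0])
gt = np.array([10.0, 12.0, 12.0, 12.0])
err = masd(pred, gt, um_per_pixel=1.0)  # mean(|0, 1, 0, 1|) = 0.5
```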
Affiliation(s)
- Hui Xie
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Weiyu Xu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital University of Medical Science, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Xiaodong Wu
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
4
Xu C, Chen Z, Zhang X, Peng Y, Tan Z, Fan Y, Liao X, Chen H, Shen J, Chen X. Accurate C/D ratio estimation with elliptical fitting for OCT image based on joint segmentation and detection network. Comput Biol Med 2023;160:106903. PMID: 37146494. DOI: 10.1016/j.compbiomed.2023.106903.
Abstract
Proper estimation of the cup-to-disc ratio (C/D ratio) plays a significant role in ophthalmic examinations, and it is urgent to improve the efficiency of automatic C/D ratio measurement. Therefore, we propose a new method for measuring the C/D ratio in OCT images of normal subjects. First, an end-to-end deep convolutional network is used to segment and detect the inner limiting membrane (ILM) and the two Bruch's membrane opening (BMO) terminations. Then, we introduce an ellipse fitting technique to post-process the edge of the optic disc. Finally, the proposed method is evaluated on 41 normal subjects using the optic-disc-area scanning mode of three machines: BV1000, Topcon 3D OCT-1, and Nidek ARK-1. In addition, pairwise correlation analyses are carried out to compare the C/D ratio measurement of BV1000 with existing commercial OCT machines as well as other state-of-the-art methods. The correlation coefficient between the C/D ratio calculated by BV1000 and that obtained from manual annotation is 0.84, indicating a strong correlation with the ophthalmologists' manual annotations. Moreover, in a comparison between BV1000, Topcon, and Nidek in practical screening of normal subjects, the proportion of C/D ratios less than 0.6 calculated by BV1000 is 96.34%, the closest to clinical statistics among the three OCT machines. These experimental results show that the proposed method performs well in cup and disc detection and C/D ratio measurement, and that, compared with existing commercial OCT equipment, its C/D ratio measurements are relatively close to clinical reality, suggesting clinical application value.
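The paper fits an ellipse to the disc edge; as a simpler stand-in, the sketch below uses a linear least-squares (Kasa) circle fit to boundary points and takes the C/D ratio as the ratio of fitted diameters. This illustrates the post-processing idea only, not the authors' implementation:

```python
import numpy as np

def fit_circle(pts):
    """Least-squares (Kasa) circle fit: returns center (cx, cy) and radius.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

def cd_ratio(cup_pts, disc_pts):
    """Cup-to-disc ratio as the ratio of fitted boundary diameters."""
    _, r_cup = fit_circle(cup_pts)
    _, r_disc = fit_circle(disc_pts)
    return r_cup / r_disc

# Synthetic boundary points on circles of radius 2 (cup) and 5 (disc).
t = np.linspace(0.0, 2.0 * np.pi, 50)
cup = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
disc = np.column_stack([5 * np.cos(t), 5 * np.sin(t)])
ratio = cd_ratio(cup, disc)  # 2 / 5 = 0.4
```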
Affiliation(s)
- Chenan Xu
- State Key Laboratory of Radiation Medicine and Protection, Collaborative Innovation Center of Radiological Medicine of Jiangsu Higher Education Institutions, and School for Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Suzhou, 215006, China
- Zhongyue Chen
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Xiao Zhang
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Yuanyuan Peng
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Zhiwei Tan
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Yu Fan
- Bigvision Medical Technology Co., Ltd., Suzhou, Jiangsu Province, 215006, China
- Xulong Liao
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong Province, 515041, China
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou, Guangdong Province, 515041, China
- Jiayan Shen
- School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China
- Xinjian Chen
- State Key Laboratory of Radiation Medicine and Protection, Collaborative Innovation Center of Radiological Medicine of Jiangsu Higher Education Institutions, and School for Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Suzhou, 215006, China; School of Electronics and Information Engineering and Medical Image Processing, Analysis and Visualization Lab, Soochow University, Suzhou, Jiangsu Province, 215006, China.
5
Yang J, Tao Y, Xu Q, Zhang Y, Ma X, Yuan S, Chen Q. Self-Supervised Sequence Recovery for Semi-Supervised Retinal Layer Segmentation. IEEE J Biomed Health Inform 2022;26:3872-3883. PMID: 35412994. DOI: 10.1109/jbhi.2022.3166778.
Abstract
Automated layer segmentation plays an important role in retinal disease diagnosis from optical coherence tomography (OCT) images. However, severe retinal diseases degrade the performance of automated layer segmentation approaches. In this paper, we present a robust semi-supervised retinal layer segmentation network to mitigate model failures on abnormal retinas: we obtain lesion features from labeled images with a disease-balanced distribution and utilize unlabeled images to supplement layer structure information. Specifically, cross-consistency training is applied over the predictions of different decoders, and we enforce consistency between the decoder predictions to improve the encoder's representations. We then propose a self-supervised sequence prediction branch, designed to predict the position of each jigsaw-puzzle patch and thereby gain a sensory perception of the retinal layer structure. For this task, a layer spatial pyramid pooling (LSPP) module is designed to extract multi-scale layer spatial features. Furthermore, we use optical coherence tomography angiography (OCTA) to supplement information damaged by disease. The experimental results validate that our method achieves more robust results than current supervised segmentation methods, while advanced segmentation performance is obtained compared with state-of-the-art semi-supervised segmentation methods.
6
Wang M, Zhu W, Shi F, Su J, Chen H, Yu K, Zhou Y, Peng Y, Chen Z, Chen X. MsTGANet: Automatic Drusen Segmentation From Retinal OCT Images. IEEE Transactions on Medical Imaging 2022;41:394-406. PMID: 34520349. DOI: 10.1109/tmi.2021.3112716.
Abstract
Drusen are considered a landmark for the diagnosis of AMD and an important risk factor for its development. Accurate segmentation of drusen in retinal OCT images is therefore crucial for early diagnosis of AMD. However, drusen segmentation in retinal OCT images remains very challenging due to large variations in drusen size and shape, blurred boundaries, and speckle noise interference. Moreover, the lack of OCT datasets with pixel-level annotation is also a vital factor hindering the improvement of drusen segmentation accuracy. To solve these problems, a novel multi-scale transformer global attention network (MsTGANet) is proposed for drusen segmentation in retinal OCT images. In MsTGANet, which is based on a U-shaped architecture, a novel multi-scale transformer non-local (MsTNL) module is designed and inserted at the top of the encoder path, aiming to capture multi-scale non-local features with long-range dependencies from different layers of the encoder. Meanwhile, a novel multi-semantic global channel and spatial joint attention module (MsGCS) between the encoder and decoder is proposed to guide the model to fuse different semantic features, thereby improving its ability to learn multi-semantic global contextual information. Furthermore, to alleviate the shortage of labeled data, we propose a semi-supervised version of MsTGANet (Semi-MsTGANet) based on a pseudo-labeled data augmentation strategy, which can leverage a large amount of unlabeled data to further improve segmentation performance. Finally, comprehensive experiments are conducted to evaluate the proposed MsTGANet and Semi-MsTGANet. The experimental results show that our proposed methods achieve better segmentation accuracy than other state-of-the-art CNN-based methods.
7
Liu J, Yan S, Lu N, Yang D, Lv H, Wang S, Zhu X, Zhao Y, Wang Y, Ma Z, Yu Y. Automated retinal boundary segmentation of optical coherence tomography images using an improved Canny operator. Sci Rep 2022;12:1412. PMID: 35082355. PMCID: PMC8791938. DOI: 10.1038/s41598-022-05550-y.
Abstract
Retinal segmentation is a prerequisite for quantifying retinal structural features and diagnosing related ophthalmic diseases. The Canny operator is widely regarded as the best boundary detection operator available and is often used to obtain the initial retinal boundary in segmentation. However, the traditional Canny operator is susceptible to vascular shadows, vitreous artifacts, and noise interference, causing serious false or missed detections. This paper proposes an improved Canny operator for automatic segmentation of retinal boundaries. The improved algorithm addresses these problems by adding a multi-point boundary search step to the original method and adjusting the convolution kernel. The algorithm was used to segment retinal images of healthy subjects and age-related macular degeneration (AMD) patients; eleven retinal boundaries were identified and compared with manual segmentation by ophthalmologists. The average difference between the automatic and manual methods is 2–6 microns (1–2 pixels) for healthy subjects and 3–10 microns (1–3 pixels) for AMD patients. A qualitative assessment was also used to verify the accuracy and stability of the algorithm: the percentage of "perfect segmentation" and "good segmentation" is 98% in healthy subjects and 94% in AMD patients. The algorithm can be used alone or in combination with other methods as an initial boundary detector. It is easy to understand and improve, and may become a useful tool for analyzing and diagnosing eye diseases.
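The gradient step that the Canny operator builds on can be sketched in a few lines; below is a plain Sobel gradient magnitude applied to a synthetic step edge (a toy stand-in for a retinal boundary, not the paper's improved operator):

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels -- the first stage of a
    Canny-style boundary detector (before non-maximum suppression)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# A horizontal step edge: strong response along the boundary row,
# zero response in the flat regions.
img = np.vstack([np.zeros((3, 5)), np.ones((3, 5))])
mag = sobel_gradient_magnitude(img)
```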
8
Xie H, Pan Z, Zhou L, Zaman FA, Chen DZ, Jonas JB, Xu W, Wang YX, Wu X. Globally optimal OCT surface segmentation using a constrained IPM optimization. Optics Express 2022;30:2453-2471. PMID: 35209385. DOI: 10.1364/oe.444369.
Abstract
Segmentation of multiple surfaces in optical coherence tomography (OCT) images is a challenging problem, further complicated by the frequent presence of weak boundaries, varying layer thicknesses, and mutual influence between adjacent surfaces. The traditional graph-based optimal surface segmentation method has proven effective through its ability to capture various surface priors in a uniform graph model. However, its efficacy heavily relies on handcrafted features used to define the surface cost measuring the "goodness" of a surface. Recently, deep learning (DL) has emerged as a powerful tool for medical image segmentation thanks to its superior feature learning capability. Unfortunately, due to the scarcity of training data in medical imaging, it is nontrivial for DL networks to implicitly learn the global structure of the target surfaces, including surface interactions. This study proposes to parameterize the surface cost functions in the graph model and leverage DL to learn those parameters. The multiple optimal surfaces are then detected simultaneously by minimizing the total surface cost while explicitly enforcing the mutual surface interaction constraints. The optimization problem is solved by the primal-dual interior-point method (IPM), which can be implemented as a layer of neural networks, enabling efficient end-to-end training of the whole network. Experiments on spectral-domain optical coherence tomography (SD-OCT) retinal layer segmentation demonstrated promising segmentation results with sub-pixel accuracy.
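The surface interaction constraints enforced here are, at their simplest, ordering constraints: each surface must lie at least some margin below the one above it. The greedy clamp below illustrates only the feasible set, not the paper's primal-dual IPM solver:

```python
import numpy as np

def enforce_ordering(surfaces, min_sep=1.0):
    """Clamp stacked surface estimates (n_surfaces x width, row
    coordinates increasing with depth) so that each surface lies at
    least `min_sep` pixels below the one above it."""
    out = np.array(surfaces, dtype=float)
    for s in range(1, out.shape[0]):
        out[s] = np.maximum(out[s], out[s - 1] + min_sep)
    return out

# Toy example: the second surface crosses above the first in column 0.
surfaces = np.array([[5.0, 5.0, 5.0],
                     [4.0, 6.0, 5.5]])
fixed = enforce_ordering(surfaces, min_sep=1.0)
```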
9
Automatic Segmentation of the Retinal Nerve Fiber Layer by Means of Mathematical Morphology and Deformable Models in 2D Optical Coherence Tomography Imaging. Sensors 2021;21. PMID: 34884031. PMCID: PMC8659929. DOI: 10.3390/s21238027.
Abstract
Glaucoma is a neurodegenerative disease process that progressively damages the optic nerve, producing visual impairment and blindness. Spectral-domain OCT technology enables peripapillary circular scans of the retina and measurement of the thickness of the retinal nerve fiber layer (RNFL) for assessing disease status or progression in glaucoma patients. This paper describes a new approach to segment and measure the RNFL in peripapillary OCT images. The proposed method consists of two stages. In the first, morphological operators robustly detect the coarse location of the layer boundaries, despite the speckle noise and diverse artifacts in the OCT image. In the second, deformable models are initialized with the results of the previous stage to perform a fine segmentation of the boundaries, providing an accurate measurement of the entire RNFL. The RNFL segmentation results were qualitatively assessed by ophthalmologists, and the RNFL thickness measurements were quantitatively compared with those provided by the OCT device's built-in software as well as state-of-the-art methods.
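The morphological stage can be illustrated with a grey-level closing, which fills small dark speckle gaps in an otherwise bright layer; this uses SciPy's `ndimage.grey_closing` on a toy image, not the paper's actual pipeline:

```python
import numpy as np
from scipy import ndimage

# A bright "layer" with a single dark speckle hole.
img = np.ones((7, 7))
img[3, 3] = 0.0

# Closing = dilation followed by erosion: removes dark spots smaller
# than the 3x3 structuring element without shifting layer boundaries.
closed = ndimage.grey_closing(img, size=(3, 3))
```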
10
CDC-Net: Cascaded decoupled convolutional network for lesion-assisted detection and grading of retinopathy using optical coherence tomography (OCT) scans. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.103030.
11
Wang B, Wei W, Qiu S, Wang S, Li D, He H. Boundary Aware U-Net for Retinal Layers Segmentation in Optical Coherence Tomography Images. IEEE J Biomed Health Inform 2021;25:3029-3040. PMID: 33729959. DOI: 10.1109/jbhi.2021.3066208.
Abstract
Retinal layer segmentation in optical coherence tomography (OCT) images is a critical step in the diagnosis of numerous ocular diseases. Automatic layer segmentation requires separating each individual layer instance with accurate boundary detection, but remains challenging owing to speckle noise, intensity inhomogeneity, and low contrast around boundaries. In this work, we propose a boundary aware U-Net (BAU-Net) for retinal layer segmentation via accurate boundary detection. Based on an encoder-decoder architecture, we design a dual-task framework with low-level outputs for boundary detection and high-level outputs for layer segmentation. Specifically, we first use a multi-scale input strategy to enrich the spatial information in the deep features of the encoder. For low-level encoder features, we design an edge aware (EA) module in the skip connection to extract pure edge features. Then, a U-structure feature enhanced (UFE) module is designed in all skip connections to enlarge the receptive fields of the encoder features. In addition, a Canny edge fusion (CEF) module is introduced into this architecture to fuse prior edge information from the segmentation task into the boundary detection branch for better prediction. Furthermore, we model each boundary as a distribution of vertical coordinates. Based on this distribution, a topology guarantee loss combining an A-scan regression loss and a structure loss is proposed to produce an accurate and topologically guaranteed boundary set. The method is evaluated on two public datasets, and the results demonstrate that BAU-Net achieves more promising performance than other state-of-the-art methods.
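Modeling each boundary as a per-column distribution of vertical coordinates, as described above, lets the boundary position be read out as an expectation over rows (a soft argmax). A minimal sketch, not BAU-Net's exact regression head:

```python
import numpy as np

def soft_argmax_columns(logits):
    """Expected row coordinate per image column, given per-column
    logits over vertical positions -- one way to regress a boundary
    from an A-scan-wise distribution."""
    logits = np.asarray(logits, dtype=float)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable softmax
    p = e / e.sum(axis=0, keepdims=True)
    rows = np.arange(logits.shape[0])[:, None]
    return (p * rows).sum(axis=0)

# Two columns whose distributions peak sharply at rows 3 and 1.
logits = np.zeros((5, 2))
logits[3, 0] = 50.0
logits[1, 1] = 50.0
coords = soft_argmax_columns(logits)
```

Because the expectation is differentiable, an A-scan regression loss such as `|coords - gt|` can be backpropagated through it.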
12
Hassan B, Qin S, Ahmed R, Hassan T, Taguri AH, Hashmi S, Werghi N. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy. Comput Biol Med 2021;136:104727. PMID: 34385089. DOI: 10.1016/j.compbiomed.2021.104727.
Abstract
BACKGROUND: In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for the activity prescription and intravitreal dose. This study proposes an end-to-end deep learning-based retinal fluid segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability.
METHOD: The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation. The remaining two datasets, containing a total of 1572 OCT B-scans from 1255 subjects, are used only for evaluation, to check the trained RFS-Net's generalizability to unseen OCT scans. The performance of the proposed model is assessed through various evaluation metrics.
RESULTS: The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with automated segmentation of the three retinal manifestations, RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF.
CONCLUSIONS: The proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement, and standardization of dosimetry is envisaged as a result.
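The mean F1 scores above are per-lesion-class F1 on the label maps (equivalent to Dice for each class); a minimal sketch on a toy label map, not the paper's evaluation code:

```python
import numpy as np

def f1_per_class(pred, target, n_classes):
    """Per-class F1 between integer label maps; class 0 is background."""
    scores = []
    for c in range(1, n_classes):
        tp = np.sum((pred == c) & (target == c))
        fp = np.sum((pred == c) & (target != c))
        fn = np.sum((pred != c) & (target == c))
        scores.append(2 * tp / max(2 * tp + fp + fn, 1))
    return scores

# Toy 1-D label maps; say 1 = IRF and 2 = SRF.
pred = np.array([0, 1, 1, 2, 2, 0])
target = np.array([0, 1, 2, 2, 2, 0])
scores = f1_per_class(pred, target, n_classes=3)
```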
Affiliation(s)
- Bilal Hassan
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China.
- Shiyin Qin
- School of Automation Science and Electrical Engineering, Beihang University (BUAA), Beijing, 100191, China; School of Electrical Engineering and Intelligentization, Dongguan University of Technology, Dongguan, 523808, China
- Ramsha Ahmed
- School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing, 100083, China
- Taimur Hassan
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
- Abdel Hakeem Taguri
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Shahrukh Hashmi
- Abu Dhabi Healthcare Company (SEHA), Abu Dhabi, 127788, United Arab Emirates
- Naoufel Werghi
- Center for Cyber-Physical Systems, Khalifa University of Science and Technology, Abu Dhabi, 127788, United Arab Emirates
13
Wang M, Zhu W, Yu K, Chen Z, Shi F, Zhou Y, Ma Y, Peng Y, Bao D, Feng S, Ye L, Xiang D, Chen X. Semi-Supervised Capsule cGAN for Speckle Noise Reduction in Retinal OCT Images. IEEE Transactions on Medical Imaging 2021;40:1168-1183. PMID: 33395391. DOI: 10.1109/tmi.2020.3048975.
Abstract
Speckle noise is the main cause of poor optical coherence tomography (OCT) image quality. Convolutional neural networks (CNNs) have shown remarkable performance for speckle noise reduction. However, speckle denoising still faces great challenges because deep learning-based methods need a large amount of labeled data, whose acquisition is time-consuming and expensive. Besides, many CNN-based methods design networks with complex structures and large numbers of parameters to improve denoising performance, which consumes substantial hardware resources and is prone to overfitting. To solve these problems, we propose a novel semi-supervised learning method for speckle noise denoising in retinal OCT images. First, to improve the model's ability to capture complex and sparse features in OCT images while avoiding a great increase in parameters, a novel capsule conditional generative adversarial network (Caps-cGAN) with a small number of parameters is proposed to construct the semi-supervised learning system. Then, to tackle the loss of retinal structure information in OCT images caused by the lack of detailed guidance during unsupervised learning, a novel joint semi-supervised loss function composed of an unsupervised loss and a supervised loss is proposed to train the model. Compared with other state-of-the-art methods, the proposed semi-supervised method is suitable for retinal OCT images collected from different OCT devices and can achieve better performance even when using only half of the training data.
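Speckle is commonly modeled as multiplicative noise; below is a minimal simulation sketch using gamma-distributed unit-mean noise (a standard textbook speckle model, not anything specific to this paper):

```python
import numpy as np

def add_speckle(img, looks=4, seed=0):
    """Multiplicative speckle: pixel-wise gamma noise with unit mean.
    Larger `looks` means weaker speckle, as if averaging more looks."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return img * noise

clean = np.ones((200, 200))
noisy = add_speckle(clean, looks=4)
# Mean brightness is preserved while per-pixel variance is added.
```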
14
Hassan T, Akram MU, Werghi N, Nazir MN. RAG-FW: A Hybrid Convolutional Framework for the Automated Extraction of Retinal Lesions and Lesion-Influenced Grading of Human Retinal Pathology. IEEE J Biomed Health Inform 2021;25:108-120. PMID: 32224467. DOI: 10.1109/jbhi.2020.2982914.
Abstract
The identification of retinal lesions plays a vital role in accurately classifying and grading retinopathy. Many researchers have presented studies on optical coherence tomography (OCT) based retinal image analysis in the past. However, to the best of our knowledge, no framework is yet available that can extract retinal lesions from multi-vendor OCT scans and utilize them for intuitive severity grading of the human retina. To address this gap, we propose a deep retinal analysis and grading framework (RAG-FW). RAG-FW is a hybrid convolutional framework that extracts multiple retinal lesions from OCT scans and utilizes them for lesion-influenced grading of retinopathy as per clinical standards. RAG-FW has been rigorously tested on 43,613 scans from five highly complex, publicly available datasets containing multi-vendor scans, where it achieved a mean intersection-over-union score of 0.8055 for extracting retinal lesions and an accuracy of 98.70% for the correct severity grading of retinopathy.
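The mean intersection-over-union figure above averages per-class IoU over the label maps; a minimal sketch on toy labels (not RAG-FW's evaluation code):

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean IoU over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1])
target = np.array([0, 1, 1, 1])
score = mean_iou(pred, target, n_classes=2)  # (1/2 + 2/3) / 2 = 7/12
```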
|
15
|
Optical coherence tomography-based deep-learning model for detecting central serous chorioretinopathy. Sci Rep 2020; 10:18852. [PMID: 33139813 PMCID: PMC7608618 DOI: 10.1038/s41598-020-75816-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 10/07/2020] [Indexed: 01/13/2023] Open
Abstract
Central serous chorioretinopathy (CSC) is a common condition characterized by serous detachment of the neurosensory retina at the posterior pole. We built a deep learning model to diagnose CSC and to distinguish chronic from acute CSC using spectral domain optical coherence tomography (SD-OCT) images. SD-OCT images of patients with CSC and of a control group were analyzed with a convolutional neural network. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUROC) were used to evaluate the model. For CSC diagnosis, our model showed an accuracy, sensitivity, and specificity of 93.8%, 90.0%, and 99.1%, respectively; the AUROC was 98.9% (95% CI, 0.983–0.995); and its diagnostic performance was comparable with VGG-16, ResNet-50, and the diagnoses of five ophthalmologists. For distinguishing chronic from acute cases, the accuracy, sensitivity, and specificity were 97.6%, 100.0%, and 92.6%, respectively; the AUROC was 99.4% (95% CI, 0.985–1.000); performance was better than VGG-16 and ResNet-50, and as good as the ophthalmologists'. Our model performed well when diagnosing CSC and yielded highly accurate results when distinguishing between acute and chronic cases. Thus, automated deep learning algorithms could play a role, independent of human experts, in the diagnosis of CSC.
|
16
|
Xu J, Yang W, Wan C, Shen J. Weakly supervised detection of central serous chorioretinopathy based on local binary patterns and discrete wavelet transform. Comput Biol Med 2020; 127:104056. [PMID: 33096297 DOI: 10.1016/j.compbiomed.2020.104056] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 10/10/2020] [Accepted: 10/10/2020] [Indexed: 10/23/2022]
Abstract
Central serous chorioretinopathy (CSCR) is a common fundus disease, and its early detection is of great importance to prevent visual loss. Therefore, a novel automatic detection method is presented in this paper that integrates discrete wavelet transform (DWT) image decomposition, local binary pattern (LBP) based texture feature extraction, and multi-instance learning (MIL). LBP is selected for its robustness to low-contrast and low-quality images, which reduces the interference of image quality with the detection method. DWT decomposition provides high-frequency components with rich detail for extracting LBP texture features, removing redundant information in the raw image that is unnecessary for CSCR diagnosis. The tedious task of accurately locating and segmenting CSCR lesions is avoided by using MIL. Experiments on 358 optical coherence tomography (OCT) B-scan images demonstrate the effectiveness of the method. Even with a single threshold, an accuracy of 99.58% is obtained at K = 35 using only a high-frequency feature-fusion scheme, which is competitive with existing methods. Additionally, with further refinements such as multi-threshold optimization (MTO) and integrated decision-making (IDM), performance improves further and the detection accuracy reaches 100% at K = 40.
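The LBP descriptor at the core of this pipeline thresholds each pixel's eight neighbors against the center and packs the comparison bits into a byte code. A minimal sketch follows; it is the plain 8-neighbor LBP on a raw image, not the paper's variant applied to DWT high-frequency sub-bands, and the bit ordering is an arbitrary choice.

```python
import numpy as np

def lbp_8(img):
    """Plain 8-neighbor local binary pattern. For each interior pixel,
    each neighbor >= center contributes one bit; the 8 bits form a code
    in [0, 255]. Returns an (h-2, w-2) array of codes."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for k, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << k
    return out
```

A histogram of these codes over an image (or a wavelet sub-band) then serves as the texture feature vector fed to the classifier.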
Affiliation(s)
- Jianguo Xu, College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics & Astronautics, 210016, Nanjing, PR China.
- Weihua Yang, The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China.
- Cheng Wan, College of Electronic and Information Engineering, Nanjing University of Aeronautics & Astronautics, 211106, Nanjing, PR China.
- Jianxin Shen, College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics & Astronautics, 210016, Nanjing, PR China.
|
17
|
Anoop B, Pavan R, Girish G, Kothari AR, Rajan J. Stack generalized deep ensemble learning for retinal layer segmentation in Optical Coherence Tomography images. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.07.010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
18
|
Assessment of Central Serous Chorioretinopathy Depicted on Color Fundus Photographs Using Deep Learning. Retina 2020; 40:1558-1564. [DOI: 10.1097/iae.0000000000002621] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
19
|
Sun Y, Niu S, Gao X, Su J, Dong J, Chen Y, Wang L. Adaptive-Guided-Coupling-Probability Level Set for Retinal Layer Segmentation. IEEE J Biomed Health Inform 2020; 24:3236-3247. [PMID: 32191901 DOI: 10.1109/jbhi.2020.2981562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Quantitative assessment of retinal layer thickness in spectral domain optical coherence tomography (SD-OCT) images is vital for clinicians to determine the degree of ophthalmic lesions. However, due to complex retinal tissues, high levels of speckle noise, and low intensity contrast, accurately recognizing the retinal layer structure remains a challenge. To overcome this problem, this paper proposes an adaptive-guided-coupling-probability level set method for retinal layer segmentation in SD-OCT images. Specifically, based on Bayes's theorem, each voxel's probability representation is composed of two probability terms. The first term is constructed as a neighborhood Gaussian fitting distribution to characterize the intensity information of each intra-retinal layer. The second is a boundary probability map generated by combining anatomical priors with adaptive thickness information, ensuring that surfaces evolve within a proper range. The voxel probability representation is then introduced into the proposed coupling-probability level set segmentation framework to detect layer boundaries. A total of 1792 retinal B-scan images from 4 SD-OCT cubes of healthy eyes, 5 cubes of eyes with central serous chorioretinopathy, and 5 cubes of eyes with age-related macular degeneration are used to evaluate the proposed method. The experiments demonstrate that the segmentation results are highly consistent with the ground truth and that the proposed method outperforms six competing methods in layer segmentation of uneven retinal SD-OCT images.
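The Bayesian combination described here, an intensity likelihood multiplied by a boundary/position prior and normalized over candidate layers, can be sketched for a single voxel. The Gaussian likelihood form matches the abstract's description, but the per-layer means, standard deviations, and priors below are illustrative placeholders, not values from the paper.

```python
import math

def voxel_layer_posterior(intensity, layer_means, layer_stds, boundary_priors):
    """Posterior over candidate layers for one voxel (Bayes's theorem):
    Gaussian intensity likelihood per layer, weighted by a boundary prior,
    normalized so the posteriors sum to 1."""
    likes = [math.exp(-(intensity - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
             for m, s in zip(layer_means, layer_stds)]
    post = [lk * p for lk, p in zip(likes, boundary_priors)]
    z = sum(post)
    return [p / z for p in post]
```

The boundary prior plays the role of the paper's second term: even when two layers have similar intensities, the prior suppresses assignments outside the anatomically plausible range.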
|
20
|
Xiang D, Tian H, Yang X, Shi F, Zhu W, Chen H, Chen X. Automatic Segmentation of Retinal Layer in OCT Images With Choroidal Neovascularization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:5880-5891. [PMID: 30059302 DOI: 10.1109/tip.2018.2860255] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Age-related macular degeneration is one of the main causes of blindness. However, the internal structures of the retina are complex and difficult to recognize when neovascularization occurs, and traditional surface detection methods may fail in layer segmentation. In this paper, a supervised method is reported for simultaneously segmenting layers and neovascularization. Three spatial features, seven gray-level-based features, and 14 layer-like features are extracted for a neural network classifier, from which coarse surfaces can be found in different optical coherence tomography (OCT) images. To describe and enhance retinal layers with different thicknesses and abnormalities, multi-scale bright and dark layer detection filters are introduced. A constrained graph search algorithm is also proposed to accurately detect retinal surfaces, with the node weights in the graph computed from these layer-like responses. The proposed method was evaluated on 42 spectral-domain OCT images with age-related macular degeneration. The experimental results show that the proposed method outperforms state-of-the-art methods.
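The constrained graph search can be illustrated with a simpler dynamic-programming analogue: pick one surface row per image column so that the summed node cost is minimal while adjacent columns differ by at most a fixed shift. The hard `max_shift` smoothness bound stands in for the paper's graph construction and learned node weights, which are not reproduced here.

```python
import numpy as np

def detect_surface(cost, max_shift=1):
    """Minimal-cost column-to-column path through a (rows x cols) cost map,
    with |row change| <= max_shift between neighboring columns.
    Returns one row index per column."""
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated cost table
    back = np.zeros((h, w), dtype=int)       # backpointers for path recovery
    for x in range(1, w):
        for y in range(h):
            lo, hi = max(0, y - max_shift), min(h, y + max_shift + 1)
            j = lo + int(np.argmin(acc[lo:hi, x - 1]))
            acc[y, x] = cost[y, x] + acc[j, x - 1]
            back[y, x] = j
    path = [int(np.argmin(acc[:, -1]))]
    for x in range(w - 1, 0, -1):
        path.append(int(back[path[-1], x]))
    return path[::-1]
```

In a full pipeline, the cost map would come from the (inverted) layer-like filter responses, so the cheapest path follows the boundary the filters highlight.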
|