101
Fukutsu K, Saito M, Noda K, Murata M, Kase S, Shiba R, Isogai N, Asano Y, Hanawa N, Dohke M, Kase M, Ishida S. A Deep Learning Architecture for Vascular Area Measurement in Fundus Images. Ophthalmol Sci 2021; 1:100004. [PMID: 36246007] [PMCID: PMC9560649] [DOI: 10.1016/j.xops.2021.100004]
Abstract
Purpose To develop a novel evaluation system for retinal vessel alterations caused by hypertension using a deep learning algorithm. Design Retrospective study. Participants Fundus photographs (n = 10 571) of health-check participants (n = 5598). Methods The participants were analyzed using a fully automatic architecture assisted by a deep learning system, and the total area of retinal arterioles and venules was assessed separately. The retinal vessels were extracted automatically from each photograph and categorized as arterioles or venules. Subsequently, the total arteriolar area (AA) and total venular area (VA) were measured. The correlations among AA, VA, age, systolic blood pressure (SBP), and diastolic blood pressure were analyzed. Six ophthalmologists manually evaluated the arteriovenous ratio (AVR) in fundus images (n = 102), and the correlation between the SBP and AVR was evaluated manually. Main Outcome Measures Total arteriolar area and VA. Results The deep learning algorithm demonstrated favorable properties of vessel segmentation and arteriovenous classification, comparable with pre-existing techniques. Using the algorithm, a significant positive correlation was found between AA and VA. Both AA and VA demonstrated negative correlations with age and blood pressure. Furthermore, the SBP showed a higher negative correlation with AA measured by the algorithm than with AVR. Conclusions The current data demonstrated that the retinal vascular area measured with the deep learning system could be a novel index of hypertension-related vascular changes.
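For readers who want to prototype this kind of measurement, the sketch below computes total arteriolar and venular areas from binary masks and correlates them with blood pressure. Everything in it is a placeholder assumption (random masks, a 0.01 mm pixel size, synthetic pressures); it is not the authors' deep learning architecture.

```python
import numpy as np
from scipy.stats import pearsonr

def vessel_area_mm2(mask: np.ndarray, pixel_size_mm: float) -> float:
    """Total vessel area = number of foreground pixels x area of one pixel."""
    return float(mask.sum()) * pixel_size_mm ** 2

# Hypothetical per-image arteriole/venule masks and systolic blood pressure values.
rng = np.random.default_rng(0)
arteriole_masks = [rng.integers(0, 2, (512, 512), dtype=np.uint8) for _ in range(20)]
venule_masks = [rng.integers(0, 2, (512, 512), dtype=np.uint8) for _ in range(20)]
sbp = rng.normal(130, 15, 20)  # systolic blood pressure (mmHg), synthetic

aa = np.array([vessel_area_mm2(m, 0.01) for m in arteriole_masks])  # total arteriolar area (AA)
va = np.array([vessel_area_mm2(m, 0.01) for m in venule_masks])     # total venular area (VA)

r_aa_sbp, p_aa = pearsonr(aa, sbp)  # correlation of arteriolar area with SBP
r_aa_va, p_av = pearsonr(aa, va)    # correlation between AA and VA
print(f"AA-SBP: r={r_aa_sbp:.3f} (p={p_aa:.3g}), AA-VA: r={r_aa_va:.3f}")
```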
Affiliation(s)
- Kanae Fukutsu
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Michiyuki Saito
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Kousuke Noda
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Correspondence: Kousuke Noda, MD, PhD, Department of Ophthalmology, Hokkaido University Graduate School of Medicine, N-15, W-7, Kita-ku, Sapporo 060-8638, Japan.
- Miyuki Murata
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
- Satoru Kase
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Manabu Kase
- Department of Ophthalmology, Teine Keijinkai Hospital, Sapporo, Japan
- Susumu Ishida
- Department of Ophthalmology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo, Japan
- Department of Ocular Circulation and Metabolism, Hokkaido University, Sapporo, Japan
102
Multi-Scale and Multi-Branch Convolutional Neural Network for Retinal Image Segmentation. Symmetry (Basel) 2021. [DOI: 10.3390/sym13030365]
Abstract
The accurate segmentation of retinal images is a basic step in screening for retinopathy and glaucoma. Most existing retinal image segmentation methods extract insufficient feature information; they are susceptible to the impact of lesion areas and poor image quality, resulting in poor recovery of contextual information, which in turn makes the segmentation results noisy and low in accuracy. Therefore, this paper proposes a multi-scale and multi-branch convolutional neural network (MSMB-Net) for retinal image segmentation. The model uses atrous convolutions with different dilation rates and skip connections to reduce the loss of feature information, and receptive fields of different sizes capture global context information. The model fully integrates shallow and deep semantic information and retains rich spatial information, and it embeds an improved attention mechanism to obtain more detailed information, which improves segmentation accuracy. The method was validated on the fundus vessel datasets DRIVE, STARE and CHASE, with accuracy/F1 of 0.9708/0.8320, 0.9753/0.8469 and 0.9767/0.8190, respectively, and was further validated on the optic disc/optic cup dataset DRISHTI-GS1 with an accuracy/F1 of 0.9985/0.9770. Experimental results show that, compared with existing retinal image segmentation methods, the proposed method performs well in all four benchmark tests.
103
Efficient BFCN for Automatic Retinal Vessel Segmentation. J Ophthalmol 2021; 2020:6439407. [PMID: 33489334] [PMCID: PMC7803293] [DOI: 10.1155/2020/6439407]
Abstract
Retinal vessel segmentation is highly valuable for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNNs) lack large receptive fields and rich spatial information and cannot capture the global context of larger areas, so it is difficult to identify lesion areas and segmentation efficiency is poor. This paper presents a butterfly fully convolutional neural network (BFCN). First, in view of the low contrast between blood vessels and the background in retinal images, automatic color enhancement (ACE) is used to increase this contrast. Second, a multiscale information extraction (MSIE) module in the backbone network captures global contextual information over a larger area to reduce the loss of feature information. At the same time, a transfer layer (T_Layer) not only alleviates the vanishing-gradient problem and repairs information lost during downsampling, but also provides rich spatial information. Finally, for the first time, the segmentation output is postprocessed: Laplacian sharpening is used to improve the accuracy of vessel segmentation. The method has been verified on the DRIVE, STARE, and CHASE datasets, with accuracies of 0.9627, 0.9735, and 0.9688, respectively.
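As a rough, hedged illustration of the postprocessing step mentioned above (Laplacian sharpening of a vessel probability map), the following scipy sketch may help; it does not reproduce ACE preprocessing or the BFCN itself, and the sharpening weight is an arbitrary assumption.

```python
import numpy as np
from scipy import ndimage

def laplacian_sharpen(prob_map: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Sharpen a vessel probability map by subtracting a weighted Laplacian.

    prob_map: 2D float array in [0, 1]; weight: sharpening strength (assumed value).
    """
    lap = ndimage.laplace(prob_map.astype(np.float64))
    sharpened = prob_map - weight * lap  # classic unsharp-style Laplacian sharpening
    return np.clip(sharpened, 0.0, 1.0)

# Example: sharpen a synthetic probability map, then binarize at 0.5.
prob = ndimage.gaussian_filter(np.random.rand(256, 256), sigma=2)
vessel_mask = laplacian_sharpen(prob) > 0.5
```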
104
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824] [DOI: 10.1016/j.media.2021.101971]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Due to its powerful performance, deep learning is becoming more and more popular in related applications, such as lesion segmentation, biomarkers segmentation, disease diagnosis and image synthesis. Therefore, it is very necessary to summarize the recent developments in deep learning for fundus images with a review paper. In this review, we introduce 143 application papers with a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly-released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Affiliation(s)
- Tao Li
- College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo
- College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu
- Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin 300350, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
105
Samuel PM, Veeramalai T. VSSC Net: Vessel Specific Skip chain Convolutional Network for blood vessel segmentation. Comput Methods Programs Biomed 2021; 198:105769. [PMID: 33039919] [DOI: 10.1016/j.cmpb.2020.105769]
Abstract
BACKGROUND AND OBJECTIVE Deep learning techniques are instrumental in developing network models that aid in the early diagnosis of life-threatening diseases. To screen for and diagnose retinal fundus and coronary blood vessel disorders, the most important step is proper segmentation of the blood vessels. METHODS This paper aims to segment the blood vessels in both coronary angiograms and retinal fundus images using a single VSSC Net after image-specific preprocessing. The VSSC Net uses two vessel extraction layers with added supervision on top of a base VGG-16 network. The vessel extraction layers comprise vessel-specific convolutional blocks to localize the blood vessels, skip chain convolutional layers to enable rich feature propagation, and a unique feature map summation. Supervision is attached to the two vessel extraction layers using separate loss/sigmoid functions. Finally, the weighted fusion of the individual loss/sigmoid functions produces the desired blood vessel probability map, which is then binarized and validated for performance. RESULTS The VSSC Net shows improved accuracy values on the standard retinal and coronary angiogram datasets. The computational time required to segment the blood vessels is 0.2 seconds on a GPU. Moreover, the vessel extraction layers use a modest parameter count of 0.4 million parameters to accurately segment the blood vessels. CONCLUSION The proposed VSSC Net, which segments blood vessels from both retinal fundus images and coronary angiograms, can be used for the early diagnosis of vessel disorders. Moreover, it could aid physicians in analyzing the blood vessel structure in images obtained from multiple imaging sources.
Affiliation(s)
- Pearl Mary Samuel
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India.
106
A Hybrid Unsupervised Approach for Retinal Vessel Segmentation. Biomed Res Int 2020; 2020:8365783. [PMID: 33381585] [PMCID: PMC7749777] [DOI: 10.1155/2020/8365783]
Abstract
Retinal vessel segmentation (RVS) is a significant source of useful information for the monitoring, identification, initial medication, and surgical management of ophthalmic disorders. Common disorders such as stroke, diabetic retinopathy (DR), and cardiac diseases often change the normal structure of the retinal vascular network. A great deal of research has been devoted to building an automatic RVS system, but it remains an open issue. In this article, a framework is proposed for RVS with fast execution and competitive results. An initial binary image is obtained by applying MISODATA to the preprocessed image. For vessel structure enhancement, B-COSFIRE filters are utilized along with thresholding to obtain another binary image. These two binary images are combined by a logical AND-type operation. The result is then fused with the B-COSFIRE-enhanced image, followed by thresholding, to obtain the vessel location map (VLM). The methodology is verified on four different datasets, DRIVE, STARE, HRF, and CHASE_DB1, which are publicly accessible for benchmarking and validation, and the obtained results are compared with existing competing methods.
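A loose sketch of the fusion idea described in this abstract, with scikit-image's ISODATA threshold standing in for MISODATA and a Frangi vesselness filter standing in for B-COSFIRE (both substitutions are assumptions, not the authors' implementation):

```python
import numpy as np
from skimage import data, filters

# Grayscale placeholder input (a real pipeline would load a preprocessed fundus image here).
image = data.camera().astype(np.float64) / 255.0

# Binary image 1: global ISODATA threshold (stand-in for MISODATA).
t = filters.threshold_isodata(image)
binary1 = image < t  # vessels are darker than the background in fundus images

# Binary image 2: vesselness enhancement (Frangi as a stand-in for B-COSFIRE), thresholded.
vesselness = filters.frangi(image)
binary2 = vesselness > filters.threshold_otsu(vesselness)

# Combine the two binary maps with a logical AND, as in the described pipeline.
vessel_location_map = np.logical_and(binary1, binary2)
```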
107
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
108
Retinal Vessel Segmentation by Deep Residual Learning with Wide Activation. Comput Intell Neurosci 2020; 2020:8822407. [PMID: 33101403] [PMCID: PMC7569427] [DOI: 10.1155/2020/8822407]
Abstract
Purpose Retinal blood vessel image segmentation is an important step in ophthalmological analysis. However, it is difficult to segment small vessels accurately because of low contrast and complex feature information of blood vessels. The objective of this study is to develop an improved retinal blood vessel segmentation structure (WA-Net) to overcome these challenges. Methods This paper mainly focuses on the width of deep learning. The channels of the ResNet block were broadened to propagate more low-level features, and the identity mapping pathway was slimmed to maintain parameter complexity. A residual atrous spatial pyramid module was used to capture the retinal vessels at various scales. We applied weight normalization to eliminate the impacts of the mini-batch and improve segmentation accuracy. The experiments were performed on the DRIVE and STARE datasets. To show the generalizability of WA-Net, we performed cross-training between datasets. Results The global accuracy and specificity within datasets were 95.66% and 96.45% and 98.13% and 98.71%, respectively. The accuracy and area under the curve of the interdataset diverged only by 1%∼2% compared with the performance of the corresponding intradataset. Conclusion All the results show that WA-Net extracts more detailed blood vessels and shows superior performance on retinal blood vessel segmentation tasks.
109
Escorcia-Gutierrez J, Torrents-Barrena J, Gamarra M, Romero-Aroca P, Valls A, Puig D. Convexity shape constraints for retinal blood vessel segmentation and foveal avascular zone detection. Comput Biol Med 2020; 127:104049. [PMID: 33099218] [DOI: 10.1016/j.compbiomed.2020.104049]
Abstract
Diabetic retinopathy (DR) has become a major worldwide health problem due to the increase in blindness among diabetics at early ages. The detection of DR pathologies such as microaneurysms, hemorrhages and exudates through advanced computational techniques is of utmost importance in patient health care. New computer vision techniques are needed to improve upon traditional screening of color fundus images. The segmentation of the entire anatomical structure of the retina is a crucial phase in detecting these pathologies. This work proposes a novel framework for fast and fully automatic blood vessel segmentation and fovea detection. The preprocessing method involved both contrast limited adaptive histogram equalization and the brightness preserving dynamic fuzzy histogram equalization algorithms to enhance image contrast and eliminate noise artifacts. Afterwards, the color spaces and their intrinsic components were examined to identify the most suitable color model to reveal the foreground pixels against the entire background. Several samples were then collected and used by the renowned convexity shape prior segmentation algorithm. The proposed methodology achieved an average vasculature segmentation accuracy exceeding 96%, 95%, 98% and 94% for the DRIVE, STARE, HRF and Messidor publicly available datasets, respectively. An additional validation step reached an average accuracy of 94.30% using an in-house dataset provided by the Hospital Sant Joan of Reus (Spain). Moreover, an outstanding detection accuracy of over 98% was achieved for the foveal avascular zone. An extensive state-of-the-art comparison was also conducted. The proposed approach can thus be integrated into daily clinical practice to assist medical experts in the diagnosis of DR.
Affiliation(s)
- José Escorcia-Gutierrez
- Electronic and Telecommunications Program, Universidad Autónoma Del Caribe, Barranquilla, Colombia; Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
- Jordina Torrents-Barrena
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
- Margarita Gamarra
- Departament of Computational Science and Electronic, Universidad de La Costa, CUC, Barranquilla, Colombia
- Pedro Romero-Aroca
- Ophthalmology Service, Universitari Hospital Sant Joan, Institut de Investigacio Sanitaria Pere Virgili [IISPV], Reus, Spain
- Aida Valls
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
- Domenec Puig
- Departament D'Enginyeria Informàtica I Matemàtiques, Escola Técnica Superior D'Enginyeria, Universitat Rovira I Virgili, Tarragona, Spain.
110
A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat Biomed Eng 2020; 5:498-508. [PMID: 33046867] [DOI: 10.1038/s41551-020-00626-4]
Abstract
Retinal blood vessels provide information on the risk of cardiovascular disease (CVD). Here, we report the development and validation of deep-learning models for the automated measurement of retinal-vessel calibre in retinal photographs, using diverse multiethnic multicountry datasets that comprise more than 70,000 images. Retinal-vessel calibre measured by the models and by expert human graders showed high agreement, with overall intraclass correlation coefficients of between 0.82 and 0.95. The models performed comparably to or better than expert graders in associations between measurements of retinal-vessel calibre and CVD risk factors, including blood pressure, body-mass index, total cholesterol and glycated-haemoglobin levels. In retrospectively measured prospective datasets from a population-based study, baseline measurements performed by the deep-learning system were associated with incident CVD. Our findings motivate the development of clinically applicable explainable end-to-end deep-learning systems for the prediction of CVD on the basis of the features of retinal vessels in retinal photographs.
111
Saroj SK, Kumar R, Singh NP. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. Comput Methods Programs Biomed 2020; 194:105490. [PMID: 32504830] [DOI: 10.1016/j.cmpb.2020.105490]
Abstract
BACKGROUND AND OBJECTIVE Retinal pathologies associated with conditions such as glaucoma, obesity, diabetes, and hypertension have a severe impact on human life today. Retinal blood vessels carry significant information that is helpful in the detection and treatment of these diseases, so it is essential to segment them. Various matched filter approaches for the segmentation of retinal blood vessels are reported in the literature, but their kernel templates do not fit the vessel profile well, resulting in poor performance. To overcome this, a novel matched filter approach based on the Fréchet probability density function is proposed. METHODS The image processing operations used in the proposed approach are divided into three major stages: preprocessing, the Fréchet matched filter, and postprocessing. In preprocessing, principal component analysis (PCA) is used to convert the color image into a grayscale image, and contrast limited adaptive histogram equalization (CLAHE) is then applied to obtain an enhanced grayscale image. For the Fréchet matched filter, exhaustive experimental tests are conducted to choose optimal values for both the Fréchet function parameters and the matched filter parameters. In postprocessing, an entropy-based optimal thresholding technique is applied to the matched filter response (MFR) image to obtain a binary image, followed by length filtering and masking to generate a clear and complete vascular tree. RESULTS For evaluation, quantitative performance metrics such as average specificity, average sensitivity, average accuracy, and root mean square deviation (RMSD) are computed. We found an average specificity of 97.24%, average sensitivity of 72.78%, and average accuracy of 95.09% for the STARE dataset, and an average specificity of 97.61%, average sensitivity of 73.07%, and average accuracy of 95.44% for the DRIVE dataset. Average RMSD values are 0.07 and 0.04 for the STARE and DRIVE databases, respectively. CONCLUSIONS Experimental results show that the proposed approach outperforms recent prominent works reported in the literature; the improvement is due to better matching between the vessel profile and the Fréchet template.
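To make the matched-filter idea concrete, the numpy/scipy sketch below builds a zero-mean kernel from a Fréchet density profile, applies it at several orientations, and keeps the maximum response. The distribution parameters, kernel geometry and final threshold are illustrative assumptions, not the tuned values reported by the authors.

```python
import numpy as np
from scipy import ndimage

def frechet_pdf(x, alpha=2.0, s=1.5, m=0.0):
    """Fréchet probability density, defined for x > m."""
    z = (x - m) / s
    out = np.zeros_like(x, dtype=float)
    pos = z > 0
    out[pos] = (alpha / s) * z[pos] ** (-1.0 - alpha) * np.exp(-z[pos] ** (-alpha))
    return out

def frechet_matched_filter(image, n_angles=12, half_width=7, length=15):
    """Maximum response over rotated copies of a Fréchet-profile kernel (illustrative)."""
    offsets = np.abs(np.arange(-half_width, half_width + 1, dtype=float)) + 0.5
    profile = frechet_pdf(offsets)            # cross-section profile built from the Fréchet density
    kernel = np.tile(profile, (length, 1))    # extend the profile along the vessel direction
    kernel -= kernel.mean()                   # zero-mean, as in classic matched filters
    responses = [
        ndimage.convolve(image, ndimage.rotate(kernel, ang, reshape=True, order=1))
        for ang in np.arange(0, 180, 180 / n_angles)
    ]
    return np.max(responses, axis=0)

response = frechet_matched_filter(np.random.rand(128, 128))
binary = response > np.percentile(response, 90)  # simple threshold instead of entropy-based thresholding
```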
Affiliation(s)
- Sushil Kumar Saroj
- Department of Computer Science and Engineering, MMM University of Technology, Gorakhpur, India.
- Rakesh Kumar
- Department of Computer Science and Engineering, MMM University of Technology, Gorakhpur, India.
- Nagendra Pratap Singh
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India.
112
Rapid vessel segmentation and reconstruction of head and neck angiograms using 3D convolutional neural network. Nat Commun 2020; 11:4829. [PMID: 32973154] [PMCID: PMC7518426] [DOI: 10.1038/s41467-020-18606-2]
Abstract
The computed tomography angiography (CTA) postprocessing manually recognized by technologists is extremely labor intensive and error prone. We propose an artificial intelligence reconstruction system supported by an optimized physiological anatomical-based 3D convolutional neural network that can automatically achieve CTA reconstruction in healthcare services. This system is trained and tested with 18,766 head and neck CTA scans from 5 tertiary hospitals in China collected between June 2017 and November 2018. The overall reconstruction accuracy of the independent testing dataset is 0.931. It is clinically applicable due to its consistency with manually processed images, which achieves a qualification rate of 92.1%. This system reduces the time consumed from 14.22 ± 3.64 min to 4.94 ± 0.36 min, the number of clicks from 115.87 ± 25.9 to 4 and the labor force from 3 to 1 technologist after five months application. Thus, the system facilitates clinical workflows and provides an opportunity for clinical technologists to improve humanistic patient care. Manual postprocessing of computed tomography angiography (CTA) images is extremely labor intensive and error prone. Here, the authors propose an artificial intelligence reconstruction system that can automatically achieve CTA reconstruction in healthcare services.
113
Ji L, Jiang X, Gao Y, Fang Z, Cai Q, Wei Z. ADR-Net: Context extraction network based on M-Net for medical image segmentation. Med Phys 2020; 47:4254-4264. [PMID: 32602963] [DOI: 10.1002/mp.14364]
Abstract
PURPOSE Medical image segmentation is an essential component of medical image analysis. Accurate segmentation can assist doctors in diagnosis and relieve their fatigue. Although several image segmentation methods based on U-Net have been proposed, their performances have been observed to be suboptimal in the case of small-sized objects. To address this shortcoming, a novel network architecture is proposed in this study to enhance segmentation performance on small medical targets. METHODS In this paper, we propose a joint multi-scale context attention network architecture to simultaneously capture higher level semantic information and spatial information. In order to obtain a greater number of feature maps during decoding, the network concatenates the images of side inputs by down-sampling during the encoding phase. In the bottleneck layer of the network, dense atrous convolution (DAC) and multi-scale residual pyramid pooling (RMP) modules are exploited to better capture high-level semantic information and spatial information. To improve the segmentation performance on small targets, the attention gate (AG) block is used to effectively suppress feature activation in uncorrelated regions and highlight the target area. RESULTS The proposed model is first evaluated on the public dataset DRIVE, on which it performs significantly better than the basic framework in terms of sensitivity (SE), intersection-over-union (IOU), and area under the receiver operating characteristic curve (AUC). In particular, the SE and IOU are observed to increase by 7.46% and 5.97%, respectively. Further, the evaluation indices exhibit improvements compared to those of state-of-the-art methods as well, with SE and IOU increasing by 3.58% and 3.26%, respectively. Additionally, in order to demonstrate the generalizability of the proposed architecture, we evaluate our model on three other challenging datasets. The respective performances are observed to be better than those of state-of-the-art network architectures on the same datasets. Moreover, we use lung segmentation as a comparative experiment to demonstrate the transferability of the advantageous properties of the proposed approach in the context of small target segmentation to the segmentation of large targets. Finally, an ablation study is conducted to investigate the individual contributions of the AG block, the DAC block, and the RMP block to the performance of the network. CONCLUSIONS The proposed method is evaluated on various datasets. Experimental results demonstrate that the proposed model performs better than state-of-the-art methods in medical image segmentation of small targets.
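For readers unfamiliar with the attention gate (AG) block mentioned above, here is a generic additive attention gate sketched in PyTorch; the channel sizes are arbitrary, and the exact AG design used in ADR-Net may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    """Generic additive attention gate: gates skip-connection features with a coarser signal."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # project skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # collapse to one attention map

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip features (B, skip_ch, H, W); g: coarser gating features (B, gate_ch, h, w).
        g_up = F.interpolate(self.phi(g), size=x.shape[2:], mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(x) + g_up)))  # (B, 1, H, W) in [0, 1]
        return x * attn  # suppress activations in regions uncorrelated with the target

# Example: gate 64-channel skip features with a 128-channel decoder signal.
ag = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
out = ag(torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32))
```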
Affiliation(s)
- Lingyu Ji
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Xiaoyan Jiang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Yongbin Gao
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Zhijun Fang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Ziran Wei
- Changzheng Hospital, Shanghai, 200003, China
114
Tian Y, Lan L, Guo H. A review on the wavelet methods for sonar image segmentation. Int J Adv Robot Syst 2020. [DOI: 10.1177/1729881420936091]
Abstract
Sonar image segmentation is needed in applications such as underwater object orientation and recognition, collision prevention and navigation of underwater robots, underwater investigation and rescue, seafloor object seeking, seafloor salvage, and marine military affairs such as torpedo detection. Wavelet-based methods offer multiscale and multiresolution analysis and are well suited to edge detection and feature extraction in images, and their application to sonar image segmentation is steadily increasing. This article classifies wavelet-based sonar image segmentation methods and describes the main ideas, advantages, disadvantages, and conditions of use of each method. In methods for sonar image region (or texture) segmentation, the multiscale (or multiresolution) analysis of the wavelet transform is usually combined with other theories or methods such as clustering algorithms, Markov random fields, co-occurrence matrices, Bayesian theory, and support vector machines. In methods for sonar image edge detection, the local space-frequency characteristics of the wavelet transform are usually exploited. Wavelet packet-based and beyond-wavelet-based methods can usually reach more precise segmentation. The article also gives 12 directions (or predicted development trends) for wavelet-based sonar image segmentation methods that should be researched in depth in the future. The aim of this review is to allow researchers engaged in sonar image segmentation to learn about the research in this field quickly; to date, no similar review in this field has been found.
Affiliation(s)
- Yuanyuan Tian
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Luyu Lan
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Haitao Guo
- College of Marine Information Engineering, Hainan Tropical Ocean University, Sanya, China
- College of Electronic Information Engineering, Inner Mongolia University, Hohhot, China
115
Satpute N, Gómez-Luna J, Olivares J. Accelerating Chan-Vese model with cross-modality guided contrast enhancement for liver segmentation. Comput Biol Med 2020; 124:103930. [PMID: 32745773] [DOI: 10.1016/j.compbiomed.2020.103930]
Abstract
Accurate and fast liver segmentation remains a challenging and important task for clinicians. Segmentation algorithms are slow and inaccurate due to noise and low image quality in computed tomography (CT) abdominal scans. Chan-Vese is a powerful and flexible active-contour method for image segmentation owing to its superior noise robustness. However, it is quite slow because of its time-consuming partial differential equations, especially for large medical datasets; this is a problem for real-time liver segmentation, so an efficient parallel implementation is highly desirable. Another important aspect is the contrast of CT liver images: liver slices are sometimes very low in contrast, which reduces the overall quality of liver segmentation. Hence, we implement cross-modality guided liver contrast enhancement as a preprocessing step to liver segmentation. The GPU implementation of Chan-Vese achieves average speedups of 99.811 (± 7.65) times and 14.647 (± 1.155) times with and without enhancement, respectively, in comparison with the CPU. The average Dice, sensitivity and accuracy of liver segmentation are 0.656, 0.816 and 0.822, respectively, on the original liver images, and 0.877, 0.964 and 0.956, respectively, on the enhanced liver images, improving the overall quality of liver segmentation.
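A CPU-only, single-image sketch of the overall recipe (contrast enhancement followed by Chan-Vese) using scikit-image is given below; CLAHE stands in for the paper's cross-modality guided enhancement, the input is a placeholder image, and no GPU acceleration is shown.

```python
import numpy as np
from skimage import data, exposure, segmentation

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

# Placeholder grayscale slice (a real pipeline would load a CT liver slice here).
ct_slice = data.camera()[::2, ::2].astype(np.float64) / 255.0

enhanced = exposure.equalize_adapthist(ct_slice)     # CLAHE as a simple enhancement stand-in
seg_raw = segmentation.chan_vese(ct_slice, mu=0.25)  # active-contour segmentation, original image
seg_enh = segmentation.chan_vese(enhanced, mu=0.25)  # segmentation after enhancement

# With ground truth available, each result would instead be compared against the reference mask.
print("Dice between the two segmentations:", dice(seg_raw, seg_enh))
```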
Affiliation(s)
- Nitin Satpute
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain.
- Joaquín Olivares
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain
116
Zhou Y, Yen GG, Yi Z. Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation. IEEE Trans Neural Netw Learn Syst 2020; 31:2916-2929. [PMID: 31536016] [DOI: 10.1109/tnnls.2019.2933879]
Abstract
Biomedical image segmentation is lately dominated by deep neural networks (DNNs) due to their surpassing expert-level performance. However, the existing DNN models for biomedical image segmentation are generally highly parameterized, which severely impede their deployment on real-time platforms and portable devices. To tackle this difficulty, we propose an evolutionary compression method (ECDNN) to automatically discover efficient DNN architectures for biomedical image segmentation. Different from the existing studies, ECDNN can optimize network loss and number of parameters simultaneously during the evolution, and search for a set of Pareto-optimal solutions in a single run, which is useful for quantifying the tradeoff in satisfying different objectives, and flexible for compressing DNN when preference information is uncertain. In particular, a set of novel genetic operators is proposed for automatically identifying less important filters over the whole network. Moreover, a pruning operator is designed for eliminating convolutional filters from layers involved in feature map concatenation, which is commonly adopted in DNN architectures for capturing multi-level features from biomedical images. Experiments carried out on compressing DNN for retinal vessel and neuronal membrane segmentation tasks show that ECDNN can not only improve the performance without any retraining but also discover efficient network architectures that well maintain the performance. The superiority of the proposed method is further validated by comparison with the state-of-the-art methods.
117
Hybrid deep learning convolutional neural networks and optimal nonlinear support vector machine to detect presence of hemorrhage in retina. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101978]
118
Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry (Basel) 2020. [DOI: 10.3390/sym12071067]
Abstract
Blood vessel segmentation methods based on deep neural networks have achieved satisfactory results. However, these methods are usually supervised learning methods, which require large numbers of retinal images with high quality pixel-level ground-truth labels. In practice, the task of labeling these retinal images is very costly, financially and in human effort. To deal with these problems, we propose a semi-supervised learning method which can be used in blood vessel segmentation with limited labeled data. In this method, we use the improved U-Net deep learning network to segment the blood vessel tree. On this basis, we implement the U-Net network-based training dataset updating strategy. A large number of experiments are presented to analyze the segmentation performance of the proposed semi-supervised learning method. The experiment results demonstrate that the proposed methodology is able to avoid the problems of insufficient hand-labels, and achieve satisfactory performance.
119
Abstract
Diabetes can induce diseases including diabetic retinopathy, cataracts, glaucoma, etc. The blindness caused by these diseases is irreversible. Early analysis of retinal fundus images, including optic disc and optic cup detection and retinal blood vessel segmentation, can effectively identify these diseases. The existing methods lack sufficient discrimination power for the fundus image and are easily affected by pathological regions. This paper proposes a novel multi-path recurrent U-Net architecture to achieve the segmentation of retinal fundus images. The effectiveness of the proposed network structure was proved by two segmentation tasks: optic disc and optic cup segmentation and retinal vessel segmentation. Our method achieved state-of-the-art results in the segmentation of the Drishti-GS1 dataset. Regarding optic disc segmentation, the accuracy and Dice values reached 0.9967 and 0.9817, respectively; as regards optic cup segmentation, the accuracy and Dice values reached 0.9950 and 0.8921, respectively. Our proposed method was also verified on the retinal blood vessel segmentation dataset DRIVE and achieved a good accuracy rate.
120
Ding L, Bawany MH, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. A Novel Deep Learning Pipeline for Retinal Vessel Detection in Fluorescein Angiography. IEEE Trans Image Process 2020; 29. [PMID: 32396087] [PMCID: PMC7648732] [DOI: 10.1109/tip.2020.2991530]
Abstract
While recent advances in deep learning have significantly advanced the state of the art for vessel detection in color fundus (CF) images, success in detecting vessels in fluorescein angiography (FA) has been stymied by the lack of labeled ground truth datasets. We propose a novel pipeline to detect retinal vessels in FA images using deep neural networks (DNNs) that reduces the effort required for generating labeled ground truth data by combining two key components: cross-modality transfer and human-in-the-loop learning. The cross-modality transfer exploits concurrently captured CF and fundus FA images. Binary vessel maps are first detected from CF images with a pre-trained neural network and then are geometrically registered with and transferred to FA images via robust parametric chamfer alignment to a preliminary FA vessel detection obtained with an unsupervised technique. Using the transferred vessels as initial ground truth labels for deep learning, the human-in-the-loop approach progressively improves the quality of the ground truth labeling by iterating between deep learning and labeling. The approach significantly reduces manual labeling effort while increasing engagement. We highlight several important considerations for the proposed methodology and validate the performance on three datasets. Experimental results demonstrate that the proposed pipeline significantly reduces the annotation effort and the resulting deep learning methods outperform existing FA vessel detection methods by a significant margin. A new public dataset, RECOVERY-FA19, is introduced that includes high-resolution ultra-widefield images and accurately labeled ground truth binary vessel maps.
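To illustrate the chamfer-alignment step at a toy level, the sketch below scores candidate translations of one binary vessel map against the distance transform of another. The real pipeline performs robust parametric chamfer alignment over a richer transform family, so treat this purely as a conceptual sketch with synthetic inputs.

```python
import numpy as np
from scipy import ndimage

def chamfer_score(moving: np.ndarray, dist_to_ref: np.ndarray, shift) -> float:
    """Mean distance from shifted 'moving' vessel pixels to the reference vessels."""
    shifted = np.roll(moving, shift, axis=(0, 1))
    pts = shifted > 0
    return float(dist_to_ref[pts].mean()) if pts.any() else np.inf

# Hypothetical binary vessel maps (reference from CF, moving from a preliminary FA detection).
rng = np.random.default_rng(1)
reference = rng.random((128, 128)) > 0.95
moving = np.roll(reference, (3, -2), axis=(0, 1))  # the same map, offset by a known shift

# Distance from every pixel to the nearest reference vessel pixel.
dist_to_ref = ndimage.distance_transform_edt(~reference)

# Exhaustive search over small integer translations (a stand-in for parametric optimization).
shifts = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = min(shifts, key=lambda s: chamfer_score(moving, dist_to_ref, s))
print("Recovered shift:", best)  # expected to be close to (-3, 2)
```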
121
Islam MM, Poly TN, Walther BA, Yang HC, Li YC (J). Artificial Intelligence in Ophthalmology: A Meta-Analysis of Deep Learning Models for Retinal Vessels Segmentation. J Clin Med 2020; 9:E1018. [PMID: 32260311] [PMCID: PMC7231106] [DOI: 10.3390/jcm9041018]
Abstract
BACKGROUND AND OBJECTIVE Accurate retinal vessel segmentation is often considered a reliable biomarker for the diagnosis and screening of various diseases, including cardiovascular, diabetic, and ophthalmologic diseases. Recently, deep learning (DL) algorithms have demonstrated high performance in segmenting retinal images, which may enable fast and lifesaving diagnoses. To our knowledge, there is no systematic review of the current work in this research area. Therefore, we performed a systematic review with a meta-analysis of relevant studies to quantify the performance of DL algorithms in retinal vessel segmentation. METHODS A systematic search of EMBASE, PubMed, Google Scholar, Scopus, and Web of Science was conducted for studies published between 1 January 2000 and 15 January 2020. We followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) procedure. A DL-based study design was mandatory for inclusion. Two authors independently screened all titles and abstracts against predefined inclusion and exclusion criteria. We used the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool to assess the risk of bias and applicability. RESULTS Thirty-one studies were included in the systematic review; however, only 23 studies met the inclusion criteria for the meta-analysis. DL showed high performance on four publicly available databases, achieving an average area under the ROC curve of 0.96, 0.97, 0.96, and 0.94 on the DRIVE, STARE, CHASE_DB1, and HRF databases, respectively. The pooled sensitivity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.77, 0.79, 0.78, and 0.81, respectively. Moreover, the pooled specificity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.97, 0.97, 0.97, and 0.92, respectively. CONCLUSION The findings of our study showed that DL algorithms had high sensitivity and specificity for segmenting retinal vessels from digital fundus images. The future role of DL algorithms in retinal vessel segmentation is promising, especially for countries with limited access to healthcare. More comprehensive studies and global efforts are needed to evaluate the cost-effectiveness of DL-based tools for retinal disease screening worldwide.
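As a back-of-the-envelope illustration of how per-study sensitivities can be pooled, here is a simple inverse-variance pooling on the logit scale with made-up study counts; the meta-analysis above almost certainly used a more complete (e.g., random-effects or bivariate) model.

```python
import numpy as np

def pooled_proportion(events: np.ndarray, totals: np.ndarray) -> float:
    """Fixed-effect inverse-variance pooling of proportions on the logit scale."""
    events = events.astype(float)
    p = (events + 0.5) / (totals + 1.0)  # continuity correction
    logit = np.log(p / (1 - p))
    var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
    w = 1.0 / var
    pooled_logit = np.sum(w * logit) / np.sum(w)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Hypothetical per-study counts of correctly detected vessel pixels (TP) out of all vessel pixels.
tp = np.array([770, 820, 790, 810])
vessel_pixels = np.array([1000, 1000, 1000, 1000])
print(f"Pooled sensitivity: {pooled_proportion(tp, vessel_pixels):.3f}")
```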
Affiliation(s)
- Md. Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Bruno Andreas Walther
- Department of Biological Sciences, National Sun Yat-Sen University, Gushan District, Kaohsiung City 804, Taiwan
- Hsuan Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Yu-Chuan (Jack) Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; (M.M.I.); (T.N.P.); (H.C.Y.)
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei 110, Taiwan
- TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei 110, Taiwan
122
Satpute N, Naseem R, Pelanis E, Gómez-Luna J, Cheikh FA, Elle OJ, Olivares J. GPU acceleration of liver enhancement for tumor segmentation. Comput Methods Programs Biomed 2020; 184:105285. [PMID: 31896055] [DOI: 10.1016/j.cmpb.2019.105285]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation plays a vital role in medical image analysis. Many algorithms developed for medical image segmentation are based on edge or region characteristics, and these depend on image quality. The contrast of a CT or MRI image plays an important role in identifying the region of interest, i.e. the lesion(s). To enhance image contrast, clinicians generally use a manual histogram adjustment technique based on 1D histogram specification, which is time consuming and results in a poor distribution of pixels over the image. Cross-modality-based contrast enhancement is a 2D histogram specification technique; it is robust and provides a more uniform distribution of pixels over the CT image by exploiting inner structure information from the MRI image. This helps to increase the sensitivity and accuracy of lesion segmentation from the enhanced CT image. The sequential implementation of cross-modality-based contrast enhancement is slow, hence we propose GPU acceleration of cross-modality-based contrast enhancement for tumor segmentation. METHODS The aim of this study is fast parallel cross-modality-based contrast enhancement for CT liver images, including pairwise 2D histogram computation, histogram equalization and histogram matching. The sequential implementation of cross-modality-based contrast enhancement is computationally expensive and hence time consuming, so we propose persistence- and grid-stride-loop-based fast parallel contrast enhancement for CT liver images. We use the enhanced CT liver image for lesion or tumor segmentation and implement fast parallel gradient-based dynamic seeded region growing for lesion segmentation. RESULTS The proposed parallel approach is 104.416 (± 5.166) times faster than the sequential implementation and increases the sensitivity and specificity of tumor segmentation. CONCLUSION The cross-modality approach is inspired by 2D histogram specification, which incorporates the spatial information existing in both guidance and input images to remap the input image intensity values. The cross-modality-based liver contrast enhancement improves the quality of tumor segmentation.
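The core remapping idea (respecify the CT intensity histogram using a guidance image) can be approximated with scikit-image's one-dimensional histogram matching, as in the hedged sketch below; the paper's method is a pairwise 2D cross-modality histogram specification with GPU acceleration, which this sketch does not reproduce.

```python
import numpy as np
from skimage import data, exposure

# Placeholder images: 'ct_slice' is the low-contrast input, 'mri_slice' is the guidance image.
ct_slice = data.camera()
mri_slice = data.moon()

# 1D histogram matching: remap CT intensities so their histogram resembles the guidance image.
enhanced_ct = exposure.match_histograms(ct_slice, mri_slice)

print(enhanced_ct.dtype, enhanced_ct.min(), enhanced_ct.max())
```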
Affiliation(s)
- Nitin Satpute
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain.
- Rabia Naseem
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Norway
- Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital, Oslo, Norway; The Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Faouzi Alaya Cheikh
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Norway
- Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway; The Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
- Joaquín Olivares
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain
123
124
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE Trans Image Process 2019; 29:2552-2567. [PMID: 31613766] [DOI: 10.1109/tip.2019.2946078]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. This problem faces several challenges including low contrast, variable vessel size and thickness, and presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches addressing this problem employed hand-crafted filters to capture vessel structures, accompanied by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom designed computationally efficient residual task network that utilizes the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints that are inspired by expected prior structure on these filters: 1) orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior guided deep network outperforms state of the art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
125
Arsalan M, Owais M, Mahmood T, Cho SW, Park KR. Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation. J Clin Med 2019; 8:E1446. [PMID: 31514466] [PMCID: PMC6780110] [DOI: 10.3390/jcm8091446]
Abstract
Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these diseases, diabetic retinopathy, a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence and deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increase network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart Health Study in England (CHASE-DB1), and structured analysis of retina (STARE). Experimental results show that Vess-Net achieved superior performance for all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for STARE, respectively.
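For reference, the reported sensitivity, specificity and accuracy are plain confusion-matrix quantities over vessel and background pixels; a small sketch with made-up masks follows (AUC would additionally require the continuous probability map and is omitted here).

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise sensitivity, specificity and accuracy for binary vessel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "sensitivity": tp / (tp + fn + 1e-8),  # Se: how much of the vessel tree is found
        "specificity": tn / (tn + fp + 1e-8),  # Sp: how much background is kept clean
        "accuracy": (tp + tn) / pred.size,     # Acc: overall pixel agreement
    }

# Hypothetical prediction and ground truth masks.
rng = np.random.default_rng(2)
gt = rng.random((256, 256)) > 0.9
pred = np.logical_or(gt, rng.random((256, 256)) > 0.99)  # ground truth plus a few false positives
print(segmentation_metrics(pred, gt))
```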
Affiliation(s)
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
- Se Woon Cho
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea.
126
Automatic Retinal Blood Vessel Segmentation Based on Fully Convolutional Neural Networks. Symmetry (Basel) 2019. [DOI: 10.3390/sym11091112]
Abstract
Automated retinal vessel segmentation technology has become an important tool for disease screening and diagnosis in clinical medicine. However, most of the available methods of retinal vessel segmentation still have problems such as poor accuracy and low generalization ability. This is because the symmetrical and asymmetrical patterns between blood vessels are complicated, and the contrast between the vessel and the background is relatively low due to illumination and pathology. Robust vessel segmentation of the retinal image is essential for improving the diagnosis of diseases such as vein occlusions and diabetic retinopathy. Automated retinal vein segmentation remains a challenging task. In this paper, we proposed an automatic retinal vessel segmentation framework using deep fully convolutional neural networks (FCN), which integrate novel methods of data preprocessing, data augmentation, and full convolutional neural networks. It is an end-to-end framework that automatically and efficiently performs retinal vessel segmentation. The framework was evaluated on three publicly available standard datasets, achieving F1 score of 0.8321, 0.8531, and 0.8243, an average accuracy of 0.9706, 0.9777, and 0.9773, and average area under the Receiver Operating Characteristic (ROC) curve of 0.9880, 0.9923 and 0.9917 on the DRIVE, STARE, and CHASE_DB1 datasets, respectively. The experimental results show that our proposed framework achieves state-of-the-art vessel segmentation performance in all three benchmark tests.
127
Multilevel and Multiscale Deep Neural Network for Retinal Blood Vessel Segmentation. Symmetry (Basel) 2019. [DOI: 10.3390/sym11070946]
Abstract
Retinal blood vessel segmentation plays an important role in the assessment of many blood vessel-related disorders, such as diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular disorders. Vessel segmentation using a convolutional neural network (CNN) has shown increased accuracy in feature extraction and vessel segmentation compared with classical segmentation algorithms, and a CNN does not need any handcrafted features to train the network. In the proposed deep neural network (DNN), an improved preprocessing technique and multilevel/multiscale deep supervision (DS) layers are incorporated for proper segmentation of retinal blood vessels. From the first four layers of the VGG-16 model, multilevel/multiscale deep supervision layers are formed by applying vessel-specific Gaussian convolutions with two different scale initializations. These layers output activation maps that are able to learn vessel-specific features at multiple scales, levels, and depths. Furthermore, the receptive field of these maps is increased to obtain symmetric feature maps that provide a refined blood vessel probability map, which is free from the optic disc, image boundaries, and non-vessel background. The segmented results are tested on the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), High-Resolution Fundus (HRF), and real-world retinal datasets to evaluate performance. The proposed model achieves better sensitivity values of 0.8282, 0.8979 and 0.8655 on the DRIVE, STARE and HRF datasets, with acceptable specificity and accuracy.
128
Tmenova O, Martin R, Duong L. CycleGAN for style transfer in X-ray angiography. Int J Comput Assist Radiol Surg 2019; 14:1785-1794. [PMID: 31286396] [DOI: 10.1007/s11548-019-02022-z]
Abstract
PURPOSE We aim to generate angiograms of various vascular structures as a means of data augmentation in learning tasks. The task is to enhance the realism of vessel images generated from an anatomically realistic cardiorespiratory simulator so that they look like real angiograms. METHODS The enhancement is performed by applying the CycleGAN deep network to transfer the style of real angiograms acquired during percutaneous interventions onto a dataset composed of realistically simulated arteries. RESULTS Cycle consistency was evaluated by comparing an input simulated image with the one obtained after two cycles of image translation. An average structural similarity (SSIM) of 0.948 was obtained on our datasets. Vessel preservation was measured by comparing segmentations of an input image and its corresponding enhanced image using the Dice coefficient. CONCLUSIONS We proposed an application of the CycleGAN deep network for enhancing artificial data as an alternative to classical data augmentation techniques for medical applications, particularly focused on angiogram generation. We discussed success and failure cases, explaining the conditions for realistic data augmentation that respects both the complex physiology of arteries and the various patterns and textures generated by X-ray angiography.
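The two evaluation measures mentioned here (SSIM for cycle consistency, the Dice coefficient for vessel preservation) can be computed with scikit-image and numpy as in the sketch below; the images are synthetic placeholders rather than simulated or real angiograms.

```python
import numpy as np
from skimage.metrics import structural_similarity

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary vessel segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-8)

rng = np.random.default_rng(3)
simulated = rng.random((256, 256))
reconstructed = np.clip(simulated + rng.normal(0, 0.02, simulated.shape), 0, 1)  # after two translations

# Cycle consistency: how close the twice-translated image is to the original input.
ssim = structural_similarity(simulated, reconstructed, data_range=1.0)

# Vessel preservation: overlap between segmentations of the input and enhanced images (toy threshold).
dice = dice_coefficient(simulated > 0.8, reconstructed > 0.8)
print(f"SSIM: {ssim:.3f}, Dice: {dice:.3f}")
```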
Affiliation(s)
- Oleksandra Tmenova
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
- Taras Shevchenko National University of Kyiv, Volodymyrska St, 60, Kyiv, Ukraine
- Rémi Martin
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
- Luc Duong
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
129
Guo S, Wang K, Kang H, Zhang Y, Gao Y, Li T. BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation. Int J Med Inform 2019; 126:105-113. [PMID: 31029251] [DOI: 10.1016/j.ijmedinf.2019.03.015]
Abstract
BACKGROUND AND OBJECTIVE The condition of the vessels of the human eye is an important factor in the diagnosis of ophthalmological diseases. Vessel segmentation in fundus images is a challenging task due to the complex vessel structure, the presence of similar structures such as microaneurysms and hemorrhages, micro-vessels only one to several pixels wide, and the requirement for fine results. METHODS In this paper, we present a multi-scale deeply supervised network with short connections (BTS-DSN) for vessel segmentation. We use short connections to transfer semantic information between side-output layers: bottom-top short connections pass low-level semantic information upward to refine the results of high-level side-outputs, and top-bottom short connections pass structural information downward to reduce noise in low-level side-outputs. In addition, we employ cross-training to show that our model is suitable for real-world fundus images. RESULTS The proposed BTS-DSN has been verified on the DRIVE, STARE and CHASE_DB1 datasets and showed competitive performance against other state-of-the-art methods. Specifically, with patch-level input, the network achieved 0.7891/0.8212 sensitivity, 0.9804/0.9843 specificity, 0.9806/0.9859 AUC, and 0.8249/0.8421 F1-score on DRIVE and STARE, respectively. Moreover, our model behaves better than other methods in cross-training experiments. CONCLUSIONS BTS-DSN achieves competitive performance in the vessel segmentation task on three public datasets and is suitable for vessel segmentation. The source code of our method is available at: https://github.com/guomugong/BTS-DSN.
Affiliation(s)
- Song Guo
- Nankai University, Tianjin, China
- Kai Wang
- Nankai University, Tianjin, China; KLMDASR, Tianjin, China
- Hong Kang
- Nankai University, Tianjin, China; Beijing Shanggong Medical Technology Co. Ltd, China
- Yujun Zhang
- Institute of Computing Technology, Chinese Academy, China
- Tao Li
- Nankai University, Tianjin, China.