1. Developing a Novel Methodology by Integrating Deep Learning and HMM for Segmentation of Retinal Blood Vessels in Fundus Images. Interdiscip Sci 2023;15:273-292. [PMID: 36611082] [DOI: 10.1007/s12539-022-00545-9]
Abstract
Accurate segmentation of the retinal blood vessel network plays a crucial role in clinical assessment, treatment, and rehabilitation. Owing to acquisition and instrumentation anomalies, precise tracking of the vessel network is challenging. To address this, a new fundus image segmentation framework is proposed that combines deep neural networks and a hidden Markov model. It has three main modules: an atrous spatial pyramid pooling (ASPP)-based encoder, a decoder, and a hidden Markov model vessel tracker. The encoder uses a modified ResNet18 network to extract low- and high-level features. In module II, the decoder concatenates these features and performs convolution operations to obtain an initial segmentation. The first two modules detect the main vessel structure but overlook some small capillaries, so the hidden Markov model vessel tracker is integrated with modules I and II to recover them. In the last module, the final segmentation is obtained by combining multi-oriented sub-images using a logical OR operation. The framework is validated experimentally on the standard DRIVE and STARE datasets. The developed model achieves high average accuracy, area under the curve, and sensitivity of 99.8%, 99.0%, and 98.2%, respectively. Analysis of the results shows that the approach improves on state-of-the-art methods by 18% in sensitivity, 3% in accuracy, and 1% in specificity. Owing to better learning and generalization capability, the developed approach tracks the blood vessel network efficiently and automatically compared to other approaches, and can be helpful for eye assessment, disease diagnosis, and rehabilitation.
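The final fusion step the abstract describes — combining multi-oriented sub-images with a logical OR — can be sketched as follows. This is a minimal illustration of the OR-fusion idea only; the function name and array layout are assumptions, not taken from the paper:

```python
import numpy as np

def fuse_orientations(sub_masks):
    """Fuse binary vessel masks obtained from multiple orientations.

    A pixel is marked as vessel if ANY orientation detects it,
    i.e. a pixel-wise logical OR over all sub-images.
    """
    fused = np.zeros_like(sub_masks[0], dtype=bool)
    for mask in sub_masks:
        fused |= mask.astype(bool)
    return fused.astype(np.uint8)
```

OR-fusion favors sensitivity: a capillary missed in one orientation survives if any other orientation recovers it, which matches the paper's emphasis on small-vessel detection.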
2. Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022;12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. As all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel in the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
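The channel separation the survey compares is straightforward to reproduce; a minimal sketch (the function name is illustrative, not from the paper) of splitting a fundus photograph before feeding a single channel to a network such as U-Net:

```python
import numpy as np

def split_rgb(image):
    """Split an H x W x 3 RGB fundus image into its three channels.

    The green channel is the one most often used by non-neural
    approaches, as it typically shows the highest vessel contrast.
    """
    red, green, blue = image[..., 0], image[..., 1], image[..., 2]
    return red, green, blue
```

Each returned channel is a 2-D array that can be used directly as a single-channel network input or stacked back into a 3-channel tensor.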
Affiliation(s)
- Sangeeta Biswas (corresponding author)
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
3. Huang J, Lin Z, Chen Y, Zhang X, Zhao W, Zhang J, Li Y, He X, Zhan M, Lu L, Jiang X, Peng Y. DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel. PeerJ Comput Sci 2022;8:e871. [PMID: 35494791] [PMCID: PMC9044242] [DOI: 10.7717/peerj-cs.871]
Abstract
BACKGROUND Many fundus imaging modalities measure ocular changes. Automatic retinal vessel segmentation (RVS) is an important fundus image-based method for diagnosing ophthalmologic diseases. However, precise vessel segmentation is challenging when detecting micro-changes in fundus images, e.g., tiny vessels, vessel edges, vessel lesions and optic disc edges. METHODS This paper introduces a novel double branch fusion U-Net model in which one branch is trained with a weighting scheme that emphasizes harder examples, improving overall segmentation performance. This weighting strategy requires a new mask, called a hard example mask, which distinguishes the method from others. The hard example mask is extracted by morphology, so no rough pre-segmentation model is needed. To alleviate overfitting, a random channel attention (RCA) mechanism is proposed that outperforms drop-out and L2 regularization in RVS. RESULTS The proposed approach is verified on the DRIVE, STARE and CHASE datasets. Compared to other existing approaches on these datasets, it achieves competitive performance (DRIVE: F1-score = 0.8289, G-mean = 0.8995, AUC = 0.9811; STARE: F1-score = 0.8501, G-mean = 0.9198, AUC = 0.9892; CHASE: F1-score = 0.8375, G-mean = 0.9138, AUC = 0.9879). DISCUSSION The segmentation results show that DBFU-Net with RCA achieves competitive performance on three RVS datasets. Additionally, the proposed morphology-based extraction method for hard examples reduces computational cost. Finally, the random channel attention mechanism proves more effective than other regularization methods in the RVS task.
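The abstract does not specify which morphological operations extract the hard example mask. One plausible, purely illustrative construction — not the authors' exact method — is the morphological gradient of the ground-truth vessel mask: the band of pixels around vessel boundaries, which covers thin vessels and vessel edges without any rough pre-segmentation model:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def hard_example_mask(vessel_gt, width=1):
    """Boundary-band ('morphological gradient') proxy for hard examples.

    Pixels inside the dilated mask but outside the eroded mask lie
    near vessel boundaries -- exactly where edges and tiny vessels
    make segmentation hardest.
    """
    v = vessel_gt.astype(bool)
    dilated = binary_dilation(v, iterations=width)
    eroded = binary_erosion(v, iterations=width)
    return (dilated & ~eroded).astype(np.uint8)
```

Such a mask would then drive a per-pixel loss weight in one training branch, up-weighting boundary pixels relative to easy interior and background pixels.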
Affiliation(s)
- Jianping Huang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Zefang Lin
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Yingyin Chen
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Xiao Zhang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Wei Zhao
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Jie Zhang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China
- Yong Li
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Xu He
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Meixiao Zhan
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Ligong Lu
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China
- Xiaofei Jiang
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Cardiology, Zhuhai, China
- Yongjun Peng
- Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China
4. Gegundez-Arias ME, Marin-Santos D, Perez-Borrero I, Vasallo-Vazquez MJ. A new deep learning method for blood vessel segmentation in retinal images based on convolutional kernels and modified U-Net model. Comput Methods Programs Biomed 2021;205:106081. [PMID: 33882418] [DOI: 10.1016/j.cmpb.2021.106081]
Abstract
BACKGROUND AND OBJECTIVE Automatic monitoring of retinal blood vessels is very useful for the clinical assessment of ocular vascular anomalies and retinopathies. This paper presents an efficient and accurate deep learning-based method for vessel segmentation in eye fundus images. METHODS The approach is a convolutional neural network based on a simplified version of the U-Net architecture that combines residual blocks and batch normalization in the up- and downscaling phases. The network receives patches extracted from the original image as input and is trained with a novel loss function that considers the distance of each pixel to the vascular tree. Its output is the probability of each pixel of the input patch belonging to the vascular structure. Applying the network to all patches of a retinal image yields a pixel-wise probability map of the complete image, which is then binarized with a threshold to produce the final blood vessel segmentation. RESULTS The method was developed and evaluated on the DRIVE, STARE and CHASE_DB1 databases, each of which provides a manual segmentation of the vascular tree for its images. Using these as ground truth, segmentation accuracy was quantified at a proposed operating point (a single threshold value per database), and overall performance was measured by the area under the receiver operating characteristic curve. The method proved robust to the variability of fundus images of diverse origin and operated at the highest level of accuracy across the entire range of possible operating points, compared with the most accurate methods in the literature.
CONCLUSIONS The analysis concludes that the proposed method outperforms the other state-of-the-art methods and is the most promising candidate for integration into a real tool for vascular structure segmentation.
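The abstract says only that the loss weights each pixel by its distance to the vascular tree; one illustrative form — an assumption, not the authors' exact loss — is a binary cross-entropy whose per-pixel weight decays with the Euclidean distance to the nearest vessel pixel:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_weighted_bce(pred, target, eps=1e-7):
    """Pixel-wise BCE weighted by proximity to the vessel tree.

    Pixels near the vasculature get weight ~1; pixels far from it are
    down-weighted, concentrating the loss on vessels and their
    immediate surroundings.
    """
    # Distance of every pixel to the nearest vessel (target == 1) pixel.
    d = distance_transform_edt(target == 0)
    w = 1.0 / (1.0 + d)
    p = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(p) + (1.0 - target) * np.log(1.0 - p))
    return float((w * bce).mean())
```

The specific decay 1/(1+d) is a placeholder; any monotone decreasing weighting would express the same idea of prioritizing pixels close to the vascular tree.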
Affiliation(s)
- Manuel E Gegundez-Arias
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Diego Marin-Santos
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Isaac Perez-Borrero
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain
- Manuel J Vasallo-Vazquez
- Vision, Prediction, Optimisation and Control Systems Department, Science and Technology Research Centre, University of Huelva, Avenida de las Fuerzas Armadas s/n, 21007, Huelva, Spain