1. Ghislain F, Beaudelaire ST, Daniel T. An improved semi-supervised segmentation of the retinal vasculature using curvelet-based contrast adjustment and generalized linear model. Heliyon 2024; 10:e38027. PMID: 39347436; PMCID: PMC11437861; DOI: 10.1016/j.heliyon.2024.e38027.
Abstract
Diagnosis of most ophthalmic conditions, such as diabetic retinopathy, generally relies on an effective analysis of retinal blood vessels. Techniques that depend solely on the visual observation of clinicians can be tedious and prone to error. In this article, we propose a semi-supervised automated approach for segmenting blood vessels in retinal color images. Our method combines classical filters with a Generalized Linear Model (GLM). We first apply the Curvelet Transform along with Contrast-Limited Adaptive Histogram Equalization (CLAHE) to significantly enhance vessel contrast during the preprocessing phase. We then use the Gabor transform to extract features from the enhanced image. For retinal vasculature identification, we use a GLM with a simple identity link function. Binarization is then performed using an automatic optimal threshold based on the maximum Youden index. A morphological cleaning operation removes isolated or unwanted segments from the final segmented image. The proposed model is evaluated using statistical parameters on images from three publicly available databases. We achieve average accuracies of 0.9593, 0.9553 and 0.9643, with Receiver Operating Characteristic (ROC) analysis yielding Area Under the Curve (AUC) values of 0.9722, 0.9682 and 0.9767 for the CHASE_DB1, STARE and DRIVE databases, respectively. These results exceed several of the best recently published results for similar approaches.
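The binarization step described above sweeps thresholds over the GLM's vessel-probability map and keeps the one that maximizes the Youden index (J = sensitivity + specificity - 1). A minimal NumPy sketch, assuming a reference vessel mask is available for threshold selection; the function name and the dense linear sweep are illustrative, not the authors' code:

```python
import numpy as np

def youden_optimal_threshold(prob_map, ground_truth, n_thresholds=256):
    """Pick the binarization threshold that maximizes the Youden index
    J = sensitivity + specificity - 1 over a dense sweep."""
    gt = ground_truth.astype(bool).ravel()
    p = prob_map.ravel()
    best_t, best_j = 0.0, -1.0
    for t in np.linspace(p.min(), p.max(), n_thresholds):
        pred = p >= t
        tp = np.count_nonzero(pred & gt)
        tn = np.count_nonzero(~pred & ~gt)
        fp = np.count_nonzero(pred & ~gt)
        fn = np.count_nonzero(~pred & gt)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

On DRIVE-style data the sweep would typically be restricted to pixels inside the field-of-view mask.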
Affiliation(s)
- Feudjio Ghislain
- Research Unit of Condensed Matter, Electronics and Signal Processing (UR-MACETS), Department of Physics, Faculty of Sciences, University of Dschang, P.O. Box 67, Dschang, Cameroon
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box 134, Bandjoun, Cameroon
- Saha Tchinda Beaudelaire
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box 134, Bandjoun, Cameroon
- Tchiotsop Daniel
- Research Unit of Automation and Applied Computer (UR-AIA), Electrical Engineering Department of IUT-FV, University of Dschang, P.O. Box 134, Bandjoun, Cameroon
2. Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND: Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged in fundoscopy to accomplish core tasks, including segmentation, classification, and prediction. MAIN BODY: In this article we review AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION: As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
3. Prethija G, Katiravan J. EAMR-Net: A multiscale effective spatial and cross-channel attention network for retinal vessel segmentation. Math Biosci Eng 2024; 21:4742-4761. PMID: 38549347; DOI: 10.3934/mbe.2024208.
Abstract
Delineation of retinal vessels in fundus images is essential for detecting a range of eye disorders. An automated technique for vessel segmentation can assist clinicians and enhance the efficiency of the diagnostic process. Traditional methods often fail to extract multiscale information, to discard irrelevant information, and to delineate thin vessels. In this paper, a novel residual U-Net architecture that incorporates multi-scale feature learning and effective attention is proposed to delineate the retinal vessels precisely. Since DropBlock regularization prevents overfitting better than dropout, DropBlock was used in this study. A multi-scale feature learning module was added in place of a skip connection to learn multi-scale features. A novel effective attention block was proposed and integrated with the decoder block to obtain precise spatial and channel information. Experimental findings indicated that the proposed model exhibited outstanding performance in retinal vessel delineation. The sensitivities achieved for the DRIVE, STARE, and CHASE_DB1 datasets were 0.8293, 0.8151 and 0.8084, respectively.
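DropBlock, which the abstract prefers over dropout, removes contiguous square regions of a feature map rather than independent activations, so spatially correlated neighbors cannot trivially compensate for a dropped unit. A minimal single-channel NumPy sketch following the published DropBlock formulation; the seeding and rescaling details here are illustrative assumptions, not this paper's implementation:

```python
import numpy as np

def drop_block(feature_map, block_size=3, drop_prob=0.1, rng=None):
    """Minimal 2-D DropBlock: zero out square regions of a feature map.
    gamma is chosen so that roughly drop_prob of activations are dropped."""
    rng = np.random.default_rng(rng)
    h, w = feature_map.shape
    # Seed probability per valid block centre (Ghiasi et al. formulation).
    gamma = (drop_prob / block_size**2) * (h * w) / (
        (h - block_size + 1) * (w - block_size + 1))
    mask = np.ones((h, w))
    # Seed centres only where a full block fits inside the map.
    seeds = rng.random((h - block_size + 1, w - block_size + 1)) < gamma
    for i, j in zip(*np.nonzero(seeds)):
        mask[i:i + block_size, j:j + block_size] = 0.0
    kept = mask.mean()
    # Rescale so the expected activation magnitude is preserved.
    return feature_map * mask / max(kept, 1e-8)
```

At inference time the layer would simply be the identity, as with dropout.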
Affiliation(s)
- G Prethija
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Jeevaa Katiravan
- Department of Information Technology, Velammal Engineering College, Chennai 600066, India
4. Sun K, Chen Y, Dong F, Wu Q, Geng J, Chen Y. Retinal vessel segmentation method based on RSP-SA Unet network. Med Biol Eng Comput 2024; 62:605-620. PMID: 37964177; DOI: 10.1007/s11517-023-02960-6.
Abstract
Segmenting retinal vessels plays a significant role in the diagnosis of fundus disorders. However, retinal vessel segmentation methods face two problems: first, fine-grained features of thin blood vessels are difficult to extract; second, details of vessel edges are easily lost. To solve these problems, the Residual SimAM Pyramid-Spatial Attention Unet (RSP-SA Unet) is proposed, in which the encoding, decoding, and upsampling layers of the Unet are mainly improved. Firstly, the proposed RSP structure approximates a residual structure combined with SimAM and Pyramid Segmentation Attention (PSA); it is applied to the encoding and decoding parts to extract multi-scale spatial information and important cross-dimension features at a finer level. Secondly, spatial attention (SA) is used in the upsampling layer to perform multi-attention mapping on the input feature map, which enhances the segmentation of small blood vessels with low contrast. Finally, the RSP-SA Unet is verified on the CHASE_DB1, DRIVE, and STARE datasets, reaching segmentation accuracies (ACC) of 0.9763, 0.9704, and 0.9724 and areas under the ROC curve (AUC) of 0.9896, 0.9858, and 0.9906, respectively. The overall performance of the RSP-SA Unet is better than that of the comparison methods.
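SimAM, one of the attention mechanisms combined in the RSP structure, is parameter-free: it derives a per-activation weight from a closed-form energy function and gates the input through a sigmoid. A single-channel NumPy sketch of that closed form, following the published SimAM energy; this is illustrative, not the authors' code:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention for one channel (H, W):
    activations far from the channel mean get higher inverse energy
    and are therefore gated less strongly toward zero."""
    n = x.size - 1
    mu = x.mean()
    var = ((x - mu) ** 2).sum() / n
    e_inv = (x - mu) ** 2 / (4.0 * (var + lam)) + 0.5
    return x * (1.0 / (1.0 + np.exp(-e_inv)))  # sigmoid gate
```

In a network this would be applied per channel of a (C, H, W) feature tensor.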
Affiliation(s)
- Kun Sun
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
- Yang Chen
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
- Fuxuan Dong
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
- Qing Wu
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
- Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin, China
- Jiameng Geng
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
- Yinsheng Chen
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- National Experimental Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, Harbin University of Science and Technology, Harbin, China
5. Alsayat A, Elmezain M, Alanazi S, Alruily M, Mostafa AM, Said W. Multi-Layer Preprocessing and U-Net with Residual Attention Block for Retinal Blood Vessel Segmentation. Diagnostics (Basel) 2023; 13:3364. PMID: 37958260; PMCID: PMC10648654; DOI: 10.3390/diagnostics13213364.
Abstract
Retinal blood vessel segmentation is a valuable tool for clinicians to diagnose conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The preprocessing stage has three steps. The first is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on PSNR and SSIM values. The second is dynamic data imputation, utilizing multiple models to fill in missing data. The third is data augmentation with a latent diffusion model (LDM) to expand the training dataset. In the second stage, the U-Net with a multi-residual attention block segments the preprocessed, denoised retinal images. The experiments show that the framework segments retinal blood vessels effectively, achieving a Dice score of 95.32, an accuracy of 93.56, a precision of 95.68, and a recall of 95.45. Noise removal with the CNN with MF and D_U-Net was also efficient according to PSNR and SSIM values at noise levels of 0.1, 0.25, 0.5, and 0.75, and the LDM achieved an inception score of 13.6 and an FID of 46.2 in the augmentation step.
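The noise-reduction step above selects, between the CNN-with-MF and D_U-Net outputs, the candidate with the better PSNR/SSIM. A minimal NumPy sketch of PSNR-based selection; the function names are illustrative, and the paper's companion SSIM criterion is omitted here for brevity:

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def pick_best_denoised(reference, candidates, data_range=255.0):
    """Return the candidate image with the highest PSNR w.r.t. reference."""
    return max(candidates, key=lambda img: psnr(reference, img, data_range))
```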
Affiliation(s)
- Ahmed Alsayat
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Mahmoud Elmezain
- Computer Science Division, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
- Saad Alanazi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Meshrif Alruily
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Ayman Mohamed Mostafa
- Information Systems Department, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Wael Said
- Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
6. Ong J, Waisberg E, Masalkhi M, Kamran SA, Lowry K, Sarker P, Zaman N, Paladugu P, Tavakkoli A, Lee AG. Artificial Intelligence Frameworks to Detect and Investigate the Pathophysiology of Spaceflight Associated Neuro-Ocular Syndrome (SANS). Brain Sci 2023; 13:1148. PMID: 37626504; PMCID: PMC10452366; DOI: 10.3390/brainsci13081148.
Abstract
Spaceflight associated neuro-ocular syndrome (SANS) is a unique phenomenon observed in astronauts who have undergone long-duration spaceflight (LDSF). The syndrome is characterized by distinct imaging and clinical findings including optic disc edema, hyperopic refractive shift, posterior globe flattening, and choroidal folds. SANS poses a major barrier to planetary spaceflight, such as a mission to Mars, and has been designated by the National Aeronautics and Space Administration (NASA) as a high risk based on its likelihood of occurring and its severity for human health and mission performance. Despite this, the underlying etiology of SANS is not well understood. Current ophthalmic imaging onboard the International Space Station (ISS) has provided further insights into SANS, but the spaceflight environment presents unique challenges and limitations to further understanding this microgravity-induced phenomenon. The advent of artificial intelligence (AI) has revolutionized imaging in ophthalmology, particularly in detection and monitoring. In this manuscript, we describe the current hypothesized pathophysiology of SANS and the medical diagnostic limitations during spaceflight. We then introduce various AI frameworks that can be applied to ophthalmic imaging onboard the ISS to further understand SANS, including supervised/unsupervised learning, generative adversarial networks, and transfer learning. We conclude by describing current research in this area, with the goal of enabling deeper insights into SANS and safer spaceflight for future missions.
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI 48105, USA
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Sharif Amit Kamran
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Phani Paladugu
- Brigham and Women’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, NV 89512, USA
- Andrew G. Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX 77030, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX 77030, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY 10065, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX 77555, USA
- University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Texas A&M College of Medicine, Bryan, TX 77030, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA 50010, USA
7. Retinal image blood vessel classification using hybrid deep learning in cataract diseased fundus images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104776.
8. Sindhusaranya B, Geetha MR. Retinal blood vessel segmentation using root Guided decision tree assisted enhanced Fuzzy C-mean clustering for disease identification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104525.
9. Li M, Ling R, Yu L, Yang W, Chen Z, Wu D, Zhang J. Deep Learning Segmentation and Reconstruction for CT of Chronic Total Coronary Occlusion. Radiology 2023; 306:e221393. PMID: 36283114; DOI: 10.1148/radiol.221393.
Abstract
Background: CT imaging of chronic total occlusion (CTO) is useful in guiding revascularization, but manual reconstruction and quantification are time consuming. Purpose: To develop and validate a deep learning (DL) model for automated CTO reconstruction. Materials and Methods: In this retrospective study, a DL model for automated CTO segmentation and reconstruction was developed using coronary CT angiography images from a training set of 6066 patients (582 with CTO, 5484 without CTO) and a validation set of 1962 patients (208 with CTO, 1754 without CTO). The algorithm was validated using an external test set of 211 patients with CTO. The consistency and measurement agreement of CTO quantification were compared between the DL model and the conventional manual protocol using the intraclass correlation coefficient, Cohen κ coefficient, and Bland-Altman plot. The predictive value of the CT-derived Multicenter CTO Registry of Japan (J-CTO) score for revascularization success was evaluated. Results: In the external test set, 211 patients (mean age, 66 years ± 11 [SD]; 164 men) with 240 CTO lesions were evaluated. Automated segmentation and reconstruction of CTOs by DL was successful in 95% of lesions (228 of 240) without manual editing, versus 48% of lesions (116 of 240) with the conventional manual protocol (P < .001). The total postprocessing and measurement time was shorter for DL than for manual reconstruction (mean, 121 seconds ± 20 vs 456 seconds ± 68; P < .001). The quantitative and qualitative CTO parameters evaluated with the two methods showed excellent correlation (all correlation coefficients > 0.85, all P < .001) and minimal measurement difference. The predictive values of the J-CTO score derived from DL and conventional manual quantification for procedure success showed no difference (area under the receiver operating characteristic curve, 0.76 [95% CI: 0.69, 0.82] for both; P = .55). Conclusion: Compared with manual reconstruction, the deep learning model considerably reduced postprocessing time for chronic total occlusion quantification and showed excellent correlation and agreement in the anatomic assessment of occlusion features. © RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Loewe in this issue.
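Measurement agreement between the DL and manual protocols is assessed with, among other statistics, a Bland-Altman analysis, which reports the mean difference (bias) between paired measurements and the 95% limits of agreement. A minimal NumPy sketch of that computation; an illustrative helper, not the study's code:

```python
import numpy as np

def bland_altman_limits(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two sets of
    paired measurements (e.g. DL-derived vs manual CTO lengths)."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

The usual Bland-Altman plot then shows `diff` against the pairwise means, with horizontal lines at the bias and the two limits.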
Affiliation(s)
- Meiling Li, Runjianya Ling, Lihua Yu, Wenyi Yang, Zirong Chen, Dijia Wu, Jiayin Zhang
- From the Departments of Radiology (M.L., L.Y., J.Z.) and Cardiology (W.Y.), Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, 85 Wujin Rd, Shanghai 200080, China; Institute of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, China (R.L.); and Shanghai United Imaging Intelligence, Shanghai, China (Z.C., D.W.)
10. Retinal blood vessel segmentation by using the MS-LSDNet network and geometric skeleton reconnection method. Comput Biol Med 2023; 153:106416. PMID: 36586230; DOI: 10.1016/j.compbiomed.2022.106416.
Abstract
Automatic retinal blood vessel segmentation is a key link in the diagnosis of ophthalmic diseases. Recent deep learning methods have achieved high accuracy in vessel segmentation but still face challenges in maintaining vascular structural connectivity. This paper therefore proposes a novel retinal blood vessel segmentation strategy with three stages: vessel structure detection, vessel branch extraction, and broken vessel segment reconnection. First, we propose a multiscale linear structure detection network (MS-LSDNet), which improves the detection of fine blood vessels by learning rich hierarchical features. Second, to maintain the connectivity of the vascular structure during binarization of the vascular probability map, an adaptive hysteresis threshold method for vessel extraction is proposed. Finally, we propose a vascular tree structure reconstruction algorithm based on a geometric skeleton to reconnect broken vessel segments. Experimental results on three publicly available datasets show that, compared with current state-of-the-art algorithms, our strategy effectively maintains the connectivity of the retinal vascular tree structure.
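Hysteresis thresholding, used here to binarize the vessel probability map, keeps weak responses only when they connect to strong ones, which is what preserves connectivity. A simple fixed-threshold NumPy stand-in for the paper's adaptive variant (the adaptive selection of the two thresholds is not reproduced here):

```python
import numpy as np

def hysteresis_threshold(prob_map, low, high):
    """Keep pixels above `high`, plus pixels above `low` that are
    8-connected to a kept pixel, by iterative one-pixel dilation."""
    strong = prob_map >= high
    weak = prob_map >= low
    keep = strong.copy()
    while True:
        grown = keep.copy()
        padded = np.pad(keep, 1)
        # Union of the 8 one-pixel shifts of `keep` (plus itself).
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                grown |= padded[1 + di:1 + di + keep.shape[0],
                                1 + dj:1 + dj + keep.shape[1]]
        grown &= weak  # only weak pixels may join the vessel mask
        if np.array_equal(grown, keep):
            return grown
        keep = grown
```

`scipy.ndimage` or `skimage.filters.apply_hysteresis_threshold` would do the same propagation more efficiently via connected-component labeling.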
11. Luo X, Wang W, Xu Y, Lai Z, Jin X, Zhang B, Zhang D. A deep convolutional neural network for diabetic retinopathy detection via mining local and long-range dependence. CAAI Trans Intell Technol 2023. DOI: 10.1049/cit2.12155.
Affiliation(s)
- Xiaoling Luo
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Wei Wang
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Yong Xu
- Shenzhen Key Laboratory of Visual Object Detection and Recognition, Harbin Institute of Technology, Shenzhen, China
- Peng Cheng Laboratory, Shenzhen, China
- Zhihui Lai
- Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
- Xiaopeng Jin
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Bob Zhang
- Department of Computer and Information Science, University of Macau, Macau
- David Zhang
- The Chinese University of Hong Kong (Shenzhen), Shenzhen, China
12. RADCU-Net: residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation. Int J Mach Learn Cybern 2022. DOI: 10.1007/s13042-022-01715-3.
13. Retinal vessel segmentation based on self-distillation and implicit neural representation. Appl Intell 2022. DOI: 10.1007/s10489-022-04252-2.
14. Parra-Mora E, da Silva Cruz LA. LOCTseg: A lightweight fully convolutional network for end-to-end optical coherence tomography segmentation. Comput Biol Med 2022; 150:106174. PMID: 36252364; DOI: 10.1016/j.compbiomed.2022.106174.
Abstract
This article presents a novel end-to-end automatic solution for semantic segmentation of optical coherence tomography (OCT) images. OCT is a non-invasive imaging technology widely used in clinical practice due to its ability to acquire high-resolution cross-sectional images of the ocular fundus. Because of the large variability of the retinal structures, OCT segmentation is usually carried out manually and requires expert knowledge. This study introduces a novel fully convolutional network (FCN) architecture, designated LOCTSeg, for end-to-end automatic segmentation of diagnostic markers in OCT b-scans. LOCTSeg is a lightweight deep FCN optimized to balance performance and efficiency. Unlike state-of-the-art FCNs used in image segmentation, LOCTSeg achieves competitive inference speed without sacrificing segmentation accuracy. The proposed LOCTSeg is evaluated on two publicly available benchmarking datasets: (1) the annotated retinal OCT image database (AROI), comprising 1136 images, and (2) healthy controls and multiple sclerosis lesions (HCMS), consisting of 1715 images. Moreover, we evaluated LOCTSeg on a private dataset of 250 OCT b-scans acquired from epiretinal membrane (ERM) and healthy patients. The results demonstrate the effectiveness of the proposed algorithm, which improves the state-of-the-art Dice score from 69% to 73% on AROI and from 91% to 92% on HCMS. Furthermore, LOCTSeg outperforms comparable lightweight FCNs' Dice scores by margins of 4% to 15% on ERM segmentation.
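The Dice score used throughout this comparison is the overlap measure 2|A∩B| / (|A| + |B|) between the predicted and reference masks. A minimal NumPy helper; illustrative, not the authors' evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    1.0 for perfect overlap, 0.0 for disjoint masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.count_nonzero(pred & target)
    return 2.0 * inter / (np.count_nonzero(pred) + np.count_nonzero(target) + eps)
```

For multi-class OCT segmentation the score would be computed per class and then averaged.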
Affiliation(s)
- Esther Parra-Mora
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra 3030-290, Portugal; Instituto de Telecomunicações, Coimbra 3030-290, Portugal
- Luís A da Silva Cruz
- Department of Electrical and Computer Engineering, University of Coimbra, Coimbra 3030-290, Portugal; Instituto de Telecomunicações, Coimbra 3030-290, Portugal
15. Panda NR, Sahoo AK. A Detailed Systematic Review on Retinal Image Segmentation Methods. J Digit Imaging 2022; 35:1250-1270. PMID: 35508746; PMCID: PMC9582172; DOI: 10.1007/s10278-022-00640-9.
Abstract
The separation of blood vessels in the retina is a major step in detecting disease and is carried out by segregating the retinal blood vessels from fundus images. It also helps to provide earlier therapy for deadly diseases and to prevent further impacts of diabetes and hypertension. Many reviews already exist for this problem, but they have presented the analysis of a single framework. Hence, this review covers distinct methodologies with diverse frameworks that are utilized for blood vessel separation. The novelty of this review lies in finding the best neural network model by comparing efficiency: machine learning (ML) and deep learning (DL) approaches were compared, and the best model is reported. Moreover, different datasets were used to segment the retinal blood vessels. The execution of each approach is compared based on performance metrics such as sensitivity, specificity, and accuracy using publicly accessible datasets such as STARE, DRIVE, ROSE, REFUGE, and CHASE. This article discloses the implementation capacity of the distinct techniques applied to each segmentation method. Finally, the best accuracy of 98% and sensitivity of 96% were achieved by the technique of Convolutional Neural Network with Ranking Support Vector Machine (CNN-rSVM), which was verified on public datasets. Overall, this review points toward earlier diagnosis of diseases to deliver earlier therapy.
Affiliation(s)
- Nihar Ranjan Panda
- Department of Electronics and Communication Engineering, Silicon Institute of Technology, Bhubaneswar, Orissa, 751024, India
- Ajit Kumar Sahoo
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India

16
Khandouzi A, Ariafar A, Mashayekhpour Z, Pazira M, Baleghi Y. Retinal Vessel Segmentation, a Review of Classic and Deep Methods. Ann Biomed Eng 2022; 50:1292-1314. [PMID: 36008569 DOI: 10.1007/s10439-022-03058-0]
Abstract
Retinal illnesses such as diabetic retinopathy (DR) are the main causes of vision loss. The segmentation of blood vessels in retina images plays an important role in the early recognition of eye diseases, since different symptoms of ocular disease can be identified from the geometric features of the ocular vessels. However, the complex structure of the blood vessels and their varying thicknesses make segmenting the retina image a challenging task. A number of algorithms have helped the detection of retinal diseases. This paper presents an overview of papers from 2016 to 2022 that discuss machine learning and deep learning methods for automatic vessel segmentation. The methods are divided into two groups, deep learning-based and classic, and the algorithms, classifiers, pre-processing steps, and specific techniques of each group are described comprehensively. The performances of recent works are compared in inclusive tables based on the accuracy achieved on different datasets. A survey of the most popular datasets, such as DRIVE, STARE, HRF and CHASE_DB1, is also given, and a list of findings from this review is presented in the conclusion section.
Affiliation(s)
- Ali Khandouzi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Ariafar
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Zahra Mashayekhpour
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Milad Pazira
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Yasser Baleghi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran

17
Assessing vascular complexity of PAOD patients by deep learning-based segmentation and fractal dimension. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07642-2]
Abstract
The assessment of vascular complexity in the lower limbs provides relevant information about peripheral artery occlusive disease (PAOD), thus fostering improvements both in therapeutic decisions and prognostic estimation. Current clinical practice consists of visually inspecting and evaluating cine-angiograms of the region of interest, which is largely operator-dependent. We present here an automatic method for segmenting the vessel tree and computing a quantitative measure of vascular complexity in terms of fractal dimension (FD). The proposed workflow consists of three main steps: (i) conversion of the cine-angiographies to single static images with a broader field of view, (ii) automatic segmentation of the vascular trees, and (iii) calculation and assessment of FD as a complexity index. In particular, this work defines (1) a method to reduce the inter-observer variability in judging vascular complexity in cine-angiography images from patients affected by PAOD, and (2) the use of fractal dimension as a metric of the shape complexity of the vascular tree. The inter-class correlation coefficient (ICC) is computed as an inter-observer agreement metric and to account for possible systematic error that depends on the experience of the raters. The automatic segmentation of the vascular tree achieved a mean Area Under the Curve of 0.77 ± 0.07, with a min-max range of 0.57-0.87. Absolute operator agreement was higher over the segmented image (ICC = 0.96) than over the video (ICC = 0.76) or the broader field-of-view image (ICC = 0.92). Fractal dimension computed on both manually segmented images (ground truths) and automatic segmentations showed a good correlation with the clinical score (0.85 and 0.75, respectively). Experimental analyses suggest that extracting the vascular tree from cine-angiography can substantially improve the reliability of visual assessment of vascular complexity in PAOD. The results also reveal the effectiveness of FD in evaluating complex vascular tree structures.
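The fractal-dimension step described above is typically implemented by box counting on the binary vessel mask: the slope of log N(s) versus log(1/s), where N(s) is the number of s × s boxes touched by the vessel tree. A minimal numpy sketch of this idea (the helper name and box sizes are illustrative, not taken from the paper):

```python
import numpy as np

def box_counting_fd(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting:
    fit a line to log(N(s)) vs log(1/s) and return its slope."""
    mask = np.asarray(mask).astype(bool)
    counts = []
    for s in sizes:
        # crop to a multiple of the box size, then tile into s x s blocks
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        # count boxes that contain at least one foreground pixel
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

As a sanity check, a filled region yields a slope near 2 and a one-pixel line a slope near 1; branching vascular trees fall between the two.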
18
Sindhusaranya B, Geetha M, Rajesh T, Kavitha M. Hybrid algorithm for retinal blood vessel segmentation using different pattern recognition techniques. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-221137]
Abstract
Blood vessel segmentation of the retina has become a necessary step in automatic disease identification and treatment planning in the field of ophthalmology. To identify disease properly, both thick and thin blood vessels must be distinguished clearly; diagnosis is simple and easy only when the blood vessels are segmented accurately. Existing blood vessel segmentation methods do not cope well with the poor accuracy and low generalization caused by the complex blood vessel structure of the retina. In this study, a hybrid algorithm using binarization is proposed exclusively for segmenting the vessels from a retina image, to enhance the accuracy and specificity of the segmentation. The proposed algorithm combines the advantages of pattern recognition techniques such as the Matched Filter (MF), the Matched Filter with First-order Derivative of Gaussian (MF-FDOG), and the Multi-Scale Line Detector (MSLD). The algorithm is validated on the openly accessible DRIVE dataset. Implemented in Python with OpenCV, it attained an accuracy of 0.9602, a sensitivity of 0.6246, and a specificity of 0.9815 on this dataset. The simulation outcomes show that the proposed hybrid algorithm segments the retinal blood vessels accurately compared to existing methodologies.
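The classic matched filter named here models the vessel cross-section as an inverted Gaussian and takes the maximum response over a bank of rotated kernels. A minimal numpy sketch under that assumption (kernel size, sigma, and angle count are illustrative; MF-FDOG and MSLD add a derivative-of-Gaussian threshold map and multi-scale line responses on top of this):

```python
import numpy as np

def matched_filter_kernel(sigma=1.5, length=9, angle_deg=0.0):
    """Inverted-Gaussian cross-profile aligned with the vessel direction
    (the classic matched-filter shape); zero-mean so flat regions give ~0."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    t = np.deg2rad(angle_deg)
    u = xs * np.cos(t) + ys * np.sin(t)    # along-vessel coordinate
    v = -xs * np.sin(t) + ys * np.cos(t)   # across-vessel coordinate
    k = -np.exp(-(v ** 2) / (2.0 * sigma ** 2))
    k[np.abs(u) > half] = 0.0              # truncate the along-vessel extent
    return k - k.mean()

def conv2_same(img, kern):
    """'Same'-size 2-D correlation with zero padding (numpy only)."""
    kh, kw = kern.shape
    pad = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    win = np.lib.stride_tricks.sliding_window_view(pad, (kh, kw))
    return np.einsum('ijkl,kl->ij', win, kern)

def matched_filter_response(img, sigma=1.5, n_angles=12):
    """Maximum response over the bank of rotated matched-filter kernels;
    dark vessels on a bright background produce strong positive responses."""
    img = np.asarray(img, dtype=float)
    resp = [conv2_same(img, matched_filter_kernel(sigma, angle_deg=180.0 * i / n_angles))
            for i in range(n_angles)]
    return np.max(resp, axis=0)
```

Thresholding the response map (fixed, Otsu, or the FDOG-guided threshold of MF-FDOG) then yields the binary vessel mask.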
Affiliation(s)
- B. Sindhusaranya
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
- M.R. Geetha
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India
- T. Rajesh
- Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, Tamil Nadu, India
- M.R. Kavitha
- Department of Electronics and Communication Engineering, Ponjesly College of Engineering, Nagercoil, Tamil Nadu, India

19
An Automated Image Segmentation and Useful Feature Extraction Algorithm for Retinal Blood Vessels in Fundus Images. Electronics 2022. [DOI: 10.3390/electronics11091295]
Abstract
Manual segmentation of the blood vessels in retinal images has numerous limitations: it is very time consuming and prone to human error, particularly given the highly tortuous structure of the blood vessels and the vast number of retinal images that need to be analysed. An automatic algorithm for segmenting the retinal blood vessels and extracting useful clinical features is therefore critical to help ophthalmologists and eye specialists diagnose different retinal diseases and assess early treatment. An accurate, rapid, and fully automatic blood vessel segmentation and clinical feature measurement algorithm for retinal fundus images is proposed to improve diagnostic precision and decrease the workload of ophthalmologists. The pipeline of the proposed algorithm is composed of two essential stages: image segmentation and clinical feature extraction. Several comprehensive experiments were carried out to assess the performance of the fully automated segmentation algorithm in detecting the retinal blood vessels, using two extremely challenging fundus image datasets, DRIVE and HRF. Five quantitative performance measures, Acc., Sen., Spe., PPV, and NPV, were calculated to validate the efficiency of the proposed algorithm against current state-of-the-art vessel segmentation approaches on the DRIVE dataset. The results showed a significant improvement, achieving Acc., Sen., Spe., PPV, and NPV of 99.55%, 99.93%, 99.09%, 93.45%, and 98.89%, respectively.
20
Deng X, Ye J. A retinal blood vessel segmentation based on improved D-MNet and pulse-coupled neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103467]
21
Li J, Dou Q, Yang H, Liu J, Fu L, Zhang Y, Zheng L, Zhang D. Cervical cell multi-classification algorithm using global context information and attention mechanism. Tissue Cell 2021; 74:101677. [PMID: 34814053 DOI: 10.1016/j.tice.2021.101677]
Abstract
Cervical cancer is the second biggest cancer killer of women, behind only breast cancer, and the cure rate for precancerous lesions found early is relatively high; cervical cell classification therefore has very important clinical value in early screening for cervical cancer. This paper proposes a convolutional neural network (L-PCNN) that integrates global context information and an attention mechanism to classify cervical cells. The cell image is sent to an improved ResNet-50 backbone network to extract deep features; to extract them better, each convolution block introduces a convolutional block attention module that guides the network to focus on the cell area. The end of the backbone network then adds a pyramid pooling layer and a long short-term memory (LSTM) module to aggregate image features from different regions. Low-level and high-level features are integrated so that the whole network can learn more regional detail features, which also mitigates the vanishing-gradient problem. Experiments on the public SIPaKMeD dataset show that the proposed L-PCNN achieves an accuracy of 98.89%, a sensitivity of 99.9%, a specificity of 99.8%, and an F-measure of 99.89%, better than most cervical cell classification models, demonstrating the effectiveness of the model.
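The per-block attention described here gates feature channels so the network attends to the cell region. A minimal numpy sketch of the channel-gating half of such a module, squeeze-and-excitation style (the weights `w1`/`w2` are illustrative placeholders; the paper's module also involves pooling variants and a spatial branch):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel gate: global average pooling,
    a small ReLU bottleneck, then a sigmoid that rescales each channel.
    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))               # (C,) global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # per-channel weight in (0, 1)
    return feat * gate[:, None, None]              # reweighted feature map
```

Because the gate lies in (0, 1), the module can only attenuate channels, letting training push the network to keep the informative ones at full strength.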
Affiliation(s)
- Jun Li
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qiyan Dou
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Haima Yang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Jin Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Le Fu
- Department of Radiology, Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine, Shanghai, China
- Yu Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Lulu Zheng
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Dawei Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China

22
Bouden A, Blaiech AG, Ben Khalifa K, Ben Abdallah A, Bedoui MH. A Novel Deep Learning Model for COVID-19 Detection from Combined Heterogeneous X-ray and CT Chest Images. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-77211-6_44]
23
Mobile Aided System of Deep-Learning Based Cataract Grading from Fundus Images. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-77211-6_40]