1. Santarossa M, Beyer TT, Scharf ABA, Tatli A, von der Burchard C, Nazarenus J, Roider JB, Koch R. When Two Eyes Don't Suffice: Learning Difficult Hyperfluorescence Segmentations in Retinal Fundus Autofluorescence Images via Ensemble Learning. J Imaging 2024;10:116. [PMID: 38786570] [PMCID: PMC11122615] [DOI: 10.3390/jimaging10050116]
Abstract
Hyperfluorescence (HF) and reduced autofluorescence (RA) are important biomarkers in fundus autofluorescence (FAF) images for assessing the health of the retinal pigment epithelium (RPE), an important indicator of disease progression in geographic atrophy (GA) or central serous chorioretinopathy (CSCR). Autofluorescence images have been annotated by human raters, but distinguishing biomarkers (whether signals are increased or decreased) from the normal background proves challenging, with borders being particularly open to interpretation. Consequently, significant variations emerge among different graders, and even within the same grader during repeated annotations. Tests on in-house FAF data show that even highly skilled medical experts, despite previously discussing and settling on precise annotation guidelines, reach a pair-wise agreement, measured as a Dice score, of no more than 63-80% for HF segmentations and only 14-52% for RA. The data further show that our primary annotation expert agrees with herself at a 72% Dice score for HF and 51% for RA. Given these numbers, the task of automated HF and RA segmentation cannot simply be reduced to improving a segmentation score. Instead, we propose the use of a segmentation ensemble. Trained on images with a single annotation, the ensemble reaches expert-like performance, agreeing with all our experts at a 64-81% Dice score for HF and 21-41% for RA. In addition, utilizing the mean predictions of the ensemble networks and their variance, we devise ternary segmentations in which FAF image areas are labeled either as confident background, confident HF, or potential HF, ensuring that predictions are reliable where they are confident (97% precision) while detecting all instances of HF (99% recall) annotated by all experts.
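For illustration, a minimal sketch of how such a ternary map could be derived from the ensemble's per-pixel mean and variance; the function name and the thresholds t_low, t_high, and var_max are hypothetical, not values from the paper.

```python
import numpy as np

def ternary_segmentation(probs, t_low=0.2, t_high=0.8, var_max=0.05):
    """Combine an ensemble's per-pixel HF probabilities into a ternary map.

    probs: array of shape (n_models, H, W) with each network's sigmoid output.
    Thresholds t_low, t_high, and var_max are illustrative assumptions.
    Returns an int map: 0 = confident background, 1 = potential HF, 2 = confident HF.
    """
    mean = probs.mean(axis=0)   # ensemble mean prediction per pixel
    var = probs.var(axis=0)     # ensemble disagreement per pixel

    out = np.ones(mean.shape, dtype=np.uint8)       # default: potential HF
    out[(mean >= t_high) & (var <= var_max)] = 2    # confident HF
    out[(mean <= t_low) & (var <= var_max)] = 0     # confident background
    return out

# Example with a dummy 5-network ensemble on a 256x256 image.
preds = np.random.rand(5, 256, 256)
labels = ternary_segmentation(preds)
```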
Affiliation(s)
- Monty Santarossa
- Department of Computer Science, Kiel University, 24118 Kiel, Germany
- Tebbo Tassilo Beyer
- Department of Computer Science, Kiel University, 24118 Kiel, Germany
- Ayse Tatli
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany
- Claus von der Burchard
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany
- Jakob Nazarenus
- Department of Computer Science, Kiel University, 24118 Kiel, Germany
- Johann Baptist Roider
- Department of Ophthalmology, Kiel University, 24118 Kiel, Germany
- Reinhard Koch
- Department of Computer Science, Kiel University, 24118 Kiel, Germany
2. Bhatia S, Alam S, Shuaib M, Hameed Alhameed M, Jeribi F, Alsuwailem RI. Retinal Vessel Extraction via Assisted Multi-Channel Feature Map and U-Net. Front Public Health 2022;10:858327. [PMID: 35372222] [PMCID: PMC8968759] [DOI: 10.3389/fpubh.2022.858327]
Abstract
Early detection of vessels from fundus images can effectively prevent the permanent retinal damage caused by retinopathies associated with glaucoma, hypertension, and diabetes. Because both the retinal vessels and the background appear red, and because vessel morphology varies widely, current vessel detection methodologies fail to segment thin vessels and to discriminate them in the regions where permanent retinopathies mainly occur. This research proposes a novel approach that combines the benefits of traditional template-matching methods with recent deep learning (DL) solutions. The two are combined by using the response of a Cauchy matched filter to replace the noisy red channel of the fundus images. A U-shaped fully convolutional neural network (U-Net) is then trained for end-to-end segmentation of pixels into vessel and background classes. Each preprocessed image is divided into several patches to provide enough training samples and to speed up training per instance. The DRIVE public database has been analyzed to test the proposed method, and metrics such as accuracy, precision, sensitivity, and specificity have been measured for evaluation. The evaluation indicates that the average extraction accuracy of the proposed model is 0.9640 on the employed dataset.
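As a rough illustration of the preprocessing idea, a hedged sketch of a Cauchy matched filter whose maximum response over several orientations replaces the red channel. The kernel sizes, the gamma parameter, computing the response from the (inverted) green channel, and the normalization are assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def cauchy_kernel(length=15, width=15, gamma=2.0):
    """2D matched-filter kernel with a Cauchy (Lorentzian) cross-section
    perpendicular to the vessel direction; zero-mean so flat regions give
    no response. Parameter values are illustrative."""
    x = np.arange(width) - width // 2
    profile = 1.0 / (1.0 + (x / gamma) ** 2)   # Cauchy cross-section
    kernel = np.tile(profile, (length, 1))     # extend along the vessel axis
    return kernel - kernel.mean()

def matched_filter_response(image, n_angles=12):
    """Maximum response over a bank of rotated Cauchy kernels."""
    responses = []
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        k = rotate(cauchy_kernel(), angle, reshape=False, order=1)
        responses.append(convolve(image.astype(float), k))
    return np.max(responses, axis=0)

def replace_red_channel(rgb):
    """Swap the noisy red channel for the matched-filter response; the
    inversion (vessels are dark in the green channel) and the rescaling
    to [0, 255] are added assumptions."""
    resp = matched_filter_response(255 - rgb[..., 1])
    resp = 255 * (resp - resp.min()) / (np.ptp(resp) + 1e-8)
    out = rgb.copy()
    out[..., 0] = resp.astype(np.uint8)
    return out

# Example on a dummy RGB fundus image.
rgb = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)
fused = replace_red_channel(rgb)
```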
Affiliation(s)
- Surbhi Bhatia
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
- Correspondence: Surbhi Bhatia
- Shadab Alam
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Mohammed Shuaib
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Fathe Jeribi
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Razan Ibrahim Alsuwailem
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
3. Untangling Computer-Aided Diagnostic System for Screening Diabetic Retinopathy Based on Deep Learning Techniques. Sensors (Basel) 2022;22:1803. [PMID: 35270949] [PMCID: PMC8914671] [DOI: 10.3390/s22051803]
Abstract
Diabetic retinopathy (DR) is a predominant cause of visual impairment and loss. Approximately 285 million people worldwide are affected by diabetes, and one-third of these patients have symptoms of DR. It tends to affect patients who have had diabetes for 20 years or more, but its impact can be reduced by early detection and proper treatment. Manual diagnosis of DR is a time-consuming and expensive task that requires trained ophthalmologists to observe and evaluate DR using digital fundus images of the retina. This study aims to systematically find and analyze high-quality research work on the diagnosis of DR using deep learning approaches. The review covers DR grading and staging protocols and presents a DR taxonomy. Furthermore, it identifies, compares, and investigates deep learning-based algorithms, techniques, and methods for classifying DR stages. Various publicly available datasets used for deep learning have also been analyzed and presented to support descriptive and empirical understanding of real-time DR applications. Our in-depth study shows that the last few years have seen an increasing inclination towards deep learning approaches: Convolutional Neural Networks (CNNs, used in 35% of the studies), Ensemble CNNs (ECNNs, 26%), and Deep Neural Networks (DNNs, 13%) are among the most used algorithms for DR classification. Thus, deep learning algorithms for DR diagnostics hold research potential for solutions based on early detection and prevention.
4. Hu D, Cui C, Li H, Larson KE, Tao YK, Oguz I. LIFE: A Generalizable Autodidactic Pipeline for 3D OCT-A Vessel Segmentation. Med Image Comput Comput Assist Interv 2021;12901:514-524. [PMID: 34950935] [PMCID: PMC8692169] [DOI: 10.1007/978-3-030-87193-2_49]
Abstract
Optical coherence tomography (OCT) is a non-invasive imaging technique widely used in ophthalmology. It can be extended to OCT angiography (OCT-A), which reveals the retinal vasculature with improved contrast. Recent deep learning algorithms have produced promising vascular segmentation results; however, 3D retinal vessel segmentation remains difficult due to the lack of manually annotated training data. We propose a learning-based method that is supervised only by a self-synthesized modality named local intensity fusion (LIF). LIF is a capillary-enhanced volume computed directly from the input OCT-A. We then construct the local intensity fusion encoder (LIFE) to map a given OCT-A volume and its LIF counterpart to a shared latent space. The latent space of LIFE has the same dimensions as the input data and contains features common to both modalities. By binarizing this latent space, we obtain a volumetric vessel segmentation. Our method is evaluated on a human fovea OCT-A volume and three zebrafish OCT-A volumes with manual labels. It yields a Dice score of 0.7736 on human data and 0.8594 ± 0.0275 on zebrafish data, a dramatic improvement over existing unsupervised algorithms.
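A highly simplified, hypothetical sketch of the idea: an encoder produces a single-channel latent volume with the same spatial dimensions as the input, a decoder is trained to reconstruct the LIF counterpart from it, and the latent is binarized to obtain the vessel mask. The channel counts, layer depths, and the one-directional formulation are assumptions; the paper maps both modalities into the shared space.

```python
import torch
import torch.nn as nn

class LIFESketch(nn.Module):
    """Toy sketch: OCT-A volume -> latent of the same spatial size -> reconstructed LIF.
    Thresholding the latent yields a vessel mask. Architecture details are assumptions."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid())
        self.decode = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1))

    def forward(self, octa):
        latent = self.encode(octa)   # shared latent, same dims as the input
        recon = self.decode(latent)  # predicted LIF volume
        return latent, recon

model = LIFESketch()
octa = torch.randn(1, 1, 32, 64, 64)
lif = torch.randn(1, 1, 32, 64, 64)           # self-synthesized supervision target
latent, recon = model(octa)
loss = nn.functional.mse_loss(recon, lif)     # train so the latent explains the LIF
vessels = (latent > 0.5).float()              # binarize the latent -> segmentation
```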
Affiliation(s)
- Dewei Hu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Can Cui
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Hao Li
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Kathleen E Larson
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Yuankai K Tao
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Ipek Oguz
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
5. Li K, Qi X, Luo Y, Yao Z, Zhou X, Sun M. Accurate Retinal Vessel Segmentation in Color Fundus Images via Fully Attention-Based Networks. IEEE J Biomed Health Inform 2021;25:2071-2081. [PMID: 33001809] [DOI: 10.1109/jbhi.2020.3028180]
Abstract
Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning retinal vessel segmentation models treat each pixel equally. However, the multi-scale vessel structure is a vital factor affecting the segmentation results, especially for thin vessels. To address this gap, we propose a novel Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. The framework consists of an image pre-processing procedure and the semantic segmentation networks. Green channel extraction (GE) and contrast limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of retinal fundus images. The network then combines two types of attention modules with the U-Net. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistencies, in which the weights of feature maps are updated based on the semantic correlation between pixels. The dual-direction attention block utilizes horizontal and vertical pooling operations to produce the attention map. In this way, the network aggregates global contextual information from semantically closer regions or series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit to replace standard convolution for obtaining multi-scale features over different receptive field sizes generated by soft attention. Furthermore, we demonstrate that the proposed model can effectively identify irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
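A hedged sketch of what such a dual-direction attention block might look like: average pooling along each spatial axis, a shared 1x1 projection, and per-row and per-column gates that reweight the feature map. The exact layer composition is an assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualDirectionAttention(nn.Module):
    """Illustrative dual-direction attention: pool along the horizontal and
    vertical axes, mix the pooled descriptors with 1x1 convolutions, and
    reweight the feature map. Layer sizes are assumptions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        hidden = max(channels // reduction, 8)
        self.proj = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x):
        pooled_h = x.mean(dim=3, keepdim=True)   # (b, c, h, 1) horizontal pooling
        pooled_w = x.mean(dim=2, keepdim=True)   # (b, c, 1, w) vertical pooling
        a_h = torch.sigmoid(self.attn_h(self.proj(pooled_h)))   # per-row gate
        a_w = torch.sigmoid(self.attn_w(self.proj(pooled_w)))   # per-column gate
        return x * a_h * a_w                     # broadcast reweighting

# Example on a dummy feature map.
feat = torch.randn(2, 64, 48, 48)
out = DualDirectionAttention(64)(feat)
```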
6. Tahir W, Kura S, Zhu J, Cheng X, Damseh R, Tadesse F, Seibel A, Lee BS, Lesage F, Sakadžić S, Boas DA, Tian L. Anatomical Modeling of Brain Vasculature in Two-Photon Microscopy by Generalizable Deep Learning. BME Front 2020;2020:8620932. [PMID: 37849965] [PMCID: PMC10521669] [DOI: 10.34133/2020/8620932]
Abstract
Objective and Impact Statement. Segmentation of blood vessels from two-photon microscopy (2PM) angiograms of brains has important applications in hemodynamic analysis and disease diagnosis. Here, we develop a generalizable deep learning technique for accurate 2PM vascular segmentation of sizable regions in mouse brains acquired from multiple 2PM setups. The technique is computationally efficient and thus ideal for large-scale neurovascular analysis. Introduction. Vascular segmentation from 2PM angiograms is an important first step in hemodynamic modeling of brain vasculature. Existing segmentation methods based on deep learning either lack the ability to generalize to data from different imaging systems or are computationally infeasible for large-scale angiograms. In this work, we overcome both limitations with a method that generalizes across imaging systems and can segment large-scale angiograms. Methods. We employ a computationally efficient deep learning framework with a loss function that combines a balanced binary cross-entropy loss and total variation regularization on the network's output. Its effectiveness is demonstrated on experimentally acquired in vivo angiograms from mouse brains with dimensions up to 808 × 808 × 702 μm. Results. To demonstrate the superior generalizability of our framework, we train on data from only one 2PM microscope and demonstrate high-quality segmentation on data from a different microscope without any network tuning. Overall, our method demonstrates 10× faster computation in terms of voxels segmented per second and 3× larger depth compared to the state of the art. Conclusion. Our work provides a generalizable and computationally efficient anatomical modeling framework for brain vasculature, consisting of deep learning-based vascular segmentation followed by graphing. It paves the way for future modeling and analysis of hemodynamic response at much greater scales that were previously inaccessible.
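The loss described in the Methods paragraph can be sketched as follows; the balancing scheme (weighting positives by the background-to-foreground ratio) and the tv_weight value are assumptions, not the authors' exact choices.

```python
import torch
import torch.nn.functional as F

def balanced_bce_tv_loss(logits, target, tv_weight=1e-4):
    """Class-balanced binary cross-entropy plus total-variation regularization
    on the predicted probability volume. Weighting and tv_weight are assumptions."""
    # Weight the sparse positive (vessel) class by the background/foreground ratio.
    pos = target.sum()
    neg = target.numel() - pos
    pos_weight = neg / (pos + 1e-8)
    bce = F.binary_cross_entropy_with_logits(logits, target, pos_weight=pos_weight)

    # Total variation of the predicted probabilities encourages smooth,
    # connected vessel segments (3D: depth, height, width differences).
    prob = torch.sigmoid(logits)
    tv = (prob[:, :, 1:, :, :] - prob[:, :, :-1, :, :]).abs().mean() \
       + (prob[:, :, :, 1:, :] - prob[:, :, :, :-1, :]).abs().mean() \
       + (prob[:, :, :, :, 1:] - prob[:, :, :, :, :-1]).abs().mean()
    return bce + tv_weight * tv

# Dummy 3D angiogram patch: (batch, channel, depth, height, width).
logits = torch.randn(1, 1, 32, 64, 64)
target = (torch.rand(1, 1, 32, 64, 64) > 0.9).float()
loss = balanced_bce_tv_loss(logits, target)
```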
Affiliation(s)
- Waleed Tahir
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- Sreekanth Kura
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Jiabei Zhu
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- Xiaojun Cheng
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Rafat Damseh
- Biomedical Engineering Institute, École Polytechnique de Montréal, Montréal, QC, Canada
- Fetsum Tadesse
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Alex Seibel
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Blaire S. Lee
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Institute of Neurological Sciences and Psychiatry, Hacettepe University, Ankara, Turkey
- Frédéric Lesage
- Biomedical Engineering Institute, École Polytechnique de Montréal, Montréal, QC, Canada
- Sava Sakadžić
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA
- David A. Boas
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Neurophotonics Center, Boston University, Boston, MA, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- Neurophotonics Center, Boston University, Boston, MA, USA
7. Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020;24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
8. Islam MM, Poly TN, Walther BA, Yang HC, Li YC(J). Artificial Intelligence in Ophthalmology: A Meta-Analysis of Deep Learning Models for Retinal Vessels Segmentation. J Clin Med 2020;9:1018. [PMID: 32260311] [PMCID: PMC7231106] [DOI: 10.3390/jcm9041018]
Abstract
BACKGROUND AND OBJECTIVE Accurate retinal vessel segmentation is often considered a reliable biomarker for the diagnosis and screening of various diseases, including cardiovascular, diabetic, and ophthalmologic diseases. Recently, deep learning (DL) algorithms have demonstrated high performance in segmenting retinal images, which may enable fast and lifesaving diagnoses. To our knowledge, there is no systematic review of the current work in this research area. Therefore, we performed a systematic review with a meta-analysis of relevant studies to quantify the performance of DL algorithms in retinal vessel segmentation. METHODS A systematic search of EMBASE, PubMed, Google Scholar, Scopus, and Web of Science was conducted for studies published between 1 January 2000 and 15 January 2020. We followed the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) procedure. A DL-based study design was mandatory for inclusion. Two authors independently screened all titles and abstracts against predefined inclusion and exclusion criteria. We used the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool to assess the risk of bias and applicability. RESULTS Thirty-one studies were included in the systematic review; however, only 23 studies met the inclusion criteria for the meta-analysis. DL showed high performance on four publicly available databases, achieving an average area under the ROC curve of 0.96, 0.97, 0.96, and 0.94 on the DRIVE, STARE, CHASE_DB1, and HRF databases, respectively. The pooled sensitivity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.77, 0.79, 0.78, and 0.81, respectively. Moreover, the pooled specificity for the DRIVE, STARE, CHASE_DB1, and HRF databases was 0.97, 0.97, 0.97, and 0.92, respectively. CONCLUSION The findings of our study showed that DL algorithms had high sensitivity and specificity for segmenting retinal vessels from digital fundus images. The future role of DL algorithms in retinal vessel segmentation is promising, especially for countries with limited access to healthcare. More comprehensive studies and global efforts are needed to evaluate the cost-effectiveness of DL-based tools for retinal disease screening worldwide.
Affiliation(s)
- Md. Mohaimenul Islam
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Tahmina Nasrin Poly
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Bruno Andreas Walther
- Department of Biological Sciences, National Sun Yat-Sen University, Gushan District, Kaohsiung City 804, Taiwan
- Hsuan Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Yu-Chuan (Jack) Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
- Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei 110, Taiwan
- TMU Research Center of Cancer Translational Medicine, Taipei Medical University, Taipei 110, Taiwan
9. Adapa D, Joseph Raj AN, Alisetti SN, Zhuang Z, Ganesan K, Naik G. A supervised blood vessel segmentation technique for digital fundus images using Zernike Moment based features. PLoS One 2020;15:e0229831. [PMID: 32142540] [PMCID: PMC7059933] [DOI: 10.1371/journal.pone.0229831]
Abstract
This paper proposes a new supervised method for blood vessel segmentation using Zernike moment-based shape descriptors. The method implements a pixel-wise classification by computing an 11-D feature vector comprising both statistical (gray-level) features and shape-based (Zernike moment) features. The feature set also contains optimal Zernike moment coefficients, derived based on the maximum differentiability between blood vessel and background pixels. Manually selected training points obtained from the training set of the DRIVE dataset, covering all possible manifestations, were used to train the ANN-based binary classifier. The method was evaluated on unseen test samples of the DRIVE and STARE databases and returned accuracies of 0.945 and 0.9486, respectively, outperforming other existing supervised learning methods. Furthermore, the segmented outputs were able to cover thinner blood vessels better than previous methods, aiding in the early detection of pathologies.
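A minimal sketch of such a pixel-wise feature/classifier pipeline, assuming the mahotas library for Zernike moments and scikit-learn for the neural network; the patch size, the particular gray-level statistics, the moment degree, and the resulting feature count are illustrative and do not reproduce the paper's optimized 11-D vector.

```python
import numpy as np
import mahotas
from sklearn.neural_network import MLPClassifier

def pixel_features(green, y, x, half=8, degree=8):
    """Per-pixel feature vector: a few gray-level statistics of the local
    patch plus Zernike moment magnitudes of the same patch (illustrative mix)."""
    patch = green[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    stats = [patch.mean(), patch.std(), float(green[y, x]), patch.max() - patch.min()]
    zern = mahotas.features.zernike_moments(patch, radius=half, degree=degree)
    return np.concatenate([stats, zern])

# Tiny end-to-end example on random data standing in for a fundus green channel.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64)).astype(np.uint8)
coords = [(y, x) for y in range(10, 54, 4) for x in range(10, 54, 4)]
X = np.array([pixel_features(img, y, x) for y, x in coords])
y = rng.integers(0, 2, size=len(coords))            # dummy vessel/background labels
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, y)
```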
Affiliation(s)
- Dharmateja Adapa
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Sai Nikhil Alisetti
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Ganesan K.
- TIFAC-CORE, School of Electronics, Vellore Institute of Technology, Vellore, India
- Ganesh Naik
- MARCS Institute, Western Sydney University, Australia
10. Recent Development on Detection Methods for the Diagnosis of Diabetic Retinopathy. Symmetry (Basel) 2019. [DOI: 10.3390/sym11060749]
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that exists throughout the world. DR occurs due to a high level of glucose in the blood, which causes alterations in the retinal microvasculature. Because DR lacks preemptive symptoms, it can lead to complete vision loss. However, early screening through computer-assisted diagnosis (CAD) tools and proper treatment can control the prevalence of DR. Manual inspection of morphological changes in retinal anatomic parts is a tedious and challenging task. Therefore, many CAD systems have been developed in the past to assist ophthalmologists in observing inter- and intra-variations. In this paper, a recent review of state-of-the-art CAD systems for the diagnosis of DR is presented. We describe the CAD systems that have been developed using various computational intelligence and image processing techniques. The limitations and future trends of current CAD systems are also described in detail to help researchers. Moreover, potential CAD systems are compared in terms of statistical parameters to evaluate them quantitatively. The comparison results indicate that there is still a need for accurate CAD systems to assist in the clinical diagnosis of diabetic retinopathy.
11. Halicek M, Fabelo H, Ortega S, Callico GM, Fei B. In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer. Cancers (Basel) 2019;11:756. [PMID: 31151223] [PMCID: PMC6627361] [DOI: 10.3390/cancers11060756]
Abstract
In contrast to conventional optical imaging modalities, hyperspectral imaging (HSI) is able to capture much more information from a given scene, both within and beyond the visual spectral range (from 400 to 700 nm). This imaging modality is based on the principle that each material responds differently to light reflection, absorption, and scattering across the electromagnetic spectrum. Due to these properties, it is possible to differentiate and identify the different materials/substances present in a scene by their spectral signatures. Over the last two decades, HSI has demonstrated potential to become a powerful tool for studying and identifying several diseases in the medical field, being a non-contact, non-ionizing, and label-free imaging modality. In this review, the use of HSI as an imaging tool for the analysis and detection of cancer is presented. The basic concepts related to this technology are detailed. The most relevant state-of-the-art studies that use HSI for cancer analysis are presented and summarized, both in vivo and ex vivo. Lastly, we discuss the current limitations of this technology in the field of cancer detection, together with some insights into possible future steps for improving the technology.
Affiliation(s)
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Department of Biomedical Engineering, Emory University and The Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA 30329, USA
- Himar Fabelo
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Gustavo M Callico
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390, USA
12. Construction of Retinal Vessel Segmentation Models Based on Convolutional Neural Network. Neural Process Lett 2019. [DOI: 10.1007/s11063-019-10011-1]
13.
Abstract
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method consisting of layers that transform data nonlinearly, thus revealing hierarchical relationships and structures. In this review, we survey deep learning application papers that use structured data, signal modalities, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use.
14. Simultaneous Segmentation of Multiple Retinal Pathologies Using Fully Convolutional Deep Neural Network. 2018. [DOI: 10.1007/978-3-319-95921-4_29]
15. Almotiri J, Elleithy K, Elleithy A. A Multi-Anatomical Retinal Structure Segmentation System for Automatic Eye Screening Using Morphological Adaptive Fuzzy Thresholding. IEEE J Transl Eng Health Med 2018;6:3800123. [PMID: 29888146] [PMCID: PMC5991867] [DOI: 10.1109/jtehm.2018.2835315]
Abstract
An eye exam can be as efficacious as a physical one in identifying health concerns. Retinal screening can provide the very first clue for detecting a variety of hidden health issues, including pre-diabetes and diabetes. Throughout clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retinal fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects the diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by the respective subsystems without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, the proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. It is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures.
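As a rough illustration of the hybrid idea, a sketch that selects a threshold by minimizing a fuzzy-entropy measure and then cleans the binary mask with mathematical morphology. The membership function, the entropy criterion, the global (rather than adaptive) threshold, and the morphology parameters are simplifications and assumptions, not the paper's algorithm.

```python
import numpy as np
from skimage import morphology

def fuzzy_threshold(gray):
    """Pick the threshold minimizing a simple fuzziness measure (Shannon
    entropy of class memberships); a simplified stand-in for fuzzy thresholding."""
    g = gray.astype(float)
    spread = g.max() - g.min()
    best_t, best_score = None, np.inf
    for t in range(int(g.min()) + 1, int(g.max())):
        fg, bg = g[g > t], g[g <= t]
        if fg.size == 0 or bg.size == 0:
            continue
        # Membership of each pixel in its own class, from distance to the class mean.
        mu = np.where(g > t, 1.0 / (1.0 + np.abs(g - fg.mean()) / spread),
                             1.0 / (1.0 + np.abs(g - bg.mean()) / spread))
        mu = np.clip(mu, 1e-6, 1 - 1e-6)
        fuzziness = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu)).mean()
        if fuzziness < best_score:
            best_t, best_score = t, fuzziness
    return best_t

def segment(gray):
    """Threshold, then clean up with mathematical morphology.
    Assumes the structure of interest is brighter than the background."""
    mask = gray > fuzzy_threshold(gray)
    mask = morphology.remove_small_objects(mask, min_size=50)
    return morphology.binary_closing(mask, morphology.disk(2))

# Example on a dummy grayscale image.
img = (np.random.rand(64, 64) * 255).astype(np.uint8)
mask = segment(img)
```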
Affiliation(s)
- Jasem Almotiri
- Computer Science Department, University of Bridgeport, Bridgeport, CT 06604, USA
- Khaled Elleithy
- Computer Science Department, University of Bridgeport, Bridgeport, CT 06604, USA
16. Somasundaram SK, Alli P. A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy. J Med Syst 2017;41:201. [PMID: 29124453] [DOI: 10.1007/s10916-017-0853-x]
Abstract
Diabetic retinopathy (DR), a retinal vascular disease, is the main complication of diabetes and can lead to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so automatic detection of DR using computational techniques is an attractive solution. An automatic method is more reliable for determining the presence of an abnormality in fundus images (FI), but the classification process is often performed poorly. Recently, a few research works have analyzed texture discrimination capacity in FI to distinguish healthy images; however, the feature extraction (FE) process was not performed well due to high dimensionality. Therefore, a Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed to identify retinal features for DR disease diagnosis and early detection. The ML-BEC method comprises two stages. The first stage extracts the candidate objects from retinal images (RI). The candidate objects, or features, for DR disease diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across pairs of high-dimensional samples, separating them into similar and dissimilar pairs, and then defines a similar probability distribution across the points in the low-dimensional map; it minimizes the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage applies ensemble classifiers to the extracted features to provide accurate analysis of digital FI using machine learning. In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through voting, bagging in ML-BEC minimizes the error due to the variance of the base classifier. With the publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
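A small, self-contained sketch of the two stages using scikit-learn; the synthetic feature matrix, the t-SNE settings, and the number of bagged estimators are assumptions. Note that standard t-SNE has no out-of-sample transform, so in this sketch the embedding is computed over all samples before the roughly 25%/75% train/test split mentioned in the abstract.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.ensemble import BaggingClassifier

# Hypothetical stand-in data: one row of extracted retinal measurements per image
# (blood vessel, optic disc, rim, and thickness descriptors), with a binary DR label.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# Stage 1: t-SNE maps the high-dimensional features to a low-dimensional embedding
# by matching pairwise-similarity distributions (minimizing their KL divergence).
X_low = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_raw)

# Stage 2: a bagging ensemble (default base estimator: decision trees) votes on the
# DR label; bagging reduces the variance contributed by each base classifier.
train, test = slice(0, 50), slice(50, 200)        # ~25% of samples for training
clf = BaggingClassifier(n_estimators=25, random_state=0).fit(X_low[train], y[train])
print("held-out accuracy:", clf.score(X_low[test], y[test]))
```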
Affiliation(s)
- Somasundaram S K
- Department of Computer Science and Engineering, PSNA College of Engineering and Technology, Dindigul, India
- Alli P
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, Tamil Nadu, India