1
He H, Fasoula NA, Karlas A, Omar M, Aguirre J, Lutz J, Kallmayer M, Füchtenbusch M, Eckstein HH, Ziegler A, Ntziachristos V. Opening a window to skin biomarkers for diabetes stage with optoacoustic mesoscopy. Light Sci Appl 2023; 12:231. [PMID: 37718348; PMCID: PMC10505608; DOI: 10.1038/s41377-023-01275-3]
Abstract
Being the largest and most accessible organ of the human body, the skin could offer a window to diabetes-related complications on the microvasculature. However, skin microvasculature is typically assessed by histological analysis, which is not suited for applications to large populations or longitudinal studies. We introduce ultra-wideband raster-scan optoacoustic mesoscopy (RSOM) for precise, non-invasive assessment of diabetes-related changes in the dermal microvasculature and skin micro-anatomy, resolved with unprecedented sensitivity and detail without the need for contrast agents. Providing unique imaging contrast, we explored a possible role for RSOM as an investigational tool in diabetes healthcare and offer the first comprehensive study investigating the relationship between different diabetes complications and microvascular features in vivo. We applied RSOM to scan the pretibial area of 95 participants with diabetes mellitus and 48 age-matched volunteers without diabetes, grouped according to disease complications, and extracted six label-free optoacoustic biomarkers of human skin, including dermal microvasculature density and epidermal parameters, based on a novel image-processing pipeline. We then correlated these biomarkers to disease severity and found statistically significant effects on microvasculature parameters as a function of diabetes complications. We discuss how label-free RSOM biomarkers can lead to a quantitative assessment of the systemic effects of diabetes and its complications, complementing the qualitative assessment allowed by current clinical metrics, possibly leading to a precise scoring system that captures the gradual evolution of the disease.
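The six optoacoustic biomarkers are produced by the authors' dedicated image-processing pipeline, which is not reproduced here. As a minimal, hypothetical illustration of one such metric, the sketch below computes a vessel-density fraction from an already segmented binary vessel mask restricted to a region of interest such as the dermal layer (the mask, the ROI handling and the function name are assumptions, not the paper's code).

```python
import numpy as np

def vessel_density(vessel_mask, roi_mask=None):
    """Fraction of region-of-interest voxels occupied by segmented vessels.

    vessel_mask: boolean array, True where a vessel was segmented.
    roi_mask:    optional boolean array restricting the measurement,
                 e.g. to the dermal layer; defaults to the whole volume.
    """
    vessel_mask = np.asarray(vessel_mask, dtype=bool)
    roi = np.ones_like(vessel_mask) if roi_mask is None else np.asarray(roi_mask, dtype=bool)
    n_roi = roi.sum()
    if n_roi == 0:
        raise ValueError("Empty region of interest")
    return float(np.logical_and(vessel_mask, roi).sum() / n_roi)

# Toy usage: a random 3-D mask where roughly 10% of voxels are "vessels"
rng = np.random.default_rng(0)
mask = rng.random((64, 64, 32)) < 0.10
print(f"vessel density: {vessel_density(mask):.3f}")
```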
Affiliation(s)
- Hailong He
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Nikolina-Alexia Fasoula
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Angelos Karlas
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
- Murad Omar
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Juan Aguirre
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Jessica Lutz
- Diabetes Center at Marienplatz, Munich, Germany
- Forschergruppe Diabetes e.V., Helmholtz Zentrum München, Neuherberg, Germany
- Michael Kallmayer
- Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Martin Füchtenbusch
- Diabetes Center at Marienplatz, Munich, Germany
- Forschergruppe Diabetes e.V., Helmholtz Zentrum München, Neuherberg, Germany
- Hans-Henning Eckstein
- Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
- Annette Ziegler
- Forschergruppe Diabetes e.V., Helmholtz Zentrum München, Neuherberg, Germany
- Institute of Diabetes Research, Helmholtz Zentrum München, Neuherberg, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany.
- Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany.
- DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany.
2
Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. [PMID: 37146492; DOI: 10.1016/j.compbiomed.2023.106924]
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin vessel breakage and false-positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several specially designed criteria for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
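The differential matched filtering layer in DMF-AU is learned end-to-end inside the network and is not reproduced here; the sketch below only illustrates the classical matched-filter idea it builds on, convolving the image with a zero-mean Gaussian line kernel rotated over several orientations and keeping the maximum response (kernel size, scale and number of angles are arbitrary assumptions).

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def gaussian_line_kernel(sigma=1.5, length=9, size=15):
    """Zero-mean kernel: Gaussian cross-section along x, constant over the given length along y."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    profile = np.exp(-(xx ** 2) / (2 * sigma ** 2))
    profile[np.abs(yy) > length // 2] = 0.0
    return profile - profile.mean()

def matched_filter_response(image, n_angles=12, **kw):
    """Maximum response over a bank of rotated line kernels (negate the image first if vessels are dark)."""
    base = gaussian_line_kernel(**kw)
    responses = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        k = rotate(base, angle, reshape=False, order=1)
        responses.append(fftconvolve(image, k, mode="same"))
    return np.max(responses, axis=0)

# Toy usage on random data standing in for a green-channel fundus image
img = np.random.rand(128, 128)
print(matched_filter_response(img).shape)
```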
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China.
3
Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315; PMCID: PMC9750132; DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although several surveys of neuron tracing from light microscopy data have appeared over the last decade, the field has developed rapidly, and an updated review focusing on new methods and notable applications is needed. RESULTS This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single-neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we describe the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
- Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
4
Zekavat SM, Raghu VK, Trinder M, Ye Y, Koyama S, Honigberg MC, Yu Z, Pampana A, Urbut S, Haidermota S, O’Regan DP, Zhao H, Ellinor PT, Segrè AV, Elze T, Wiggs JL, Martone J, Adelman RA, Zebardast N, Del Priore L, Wang JC, Natarajan P. Deep Learning of the Retina Enables Phenome- and Genome-Wide Analyses of the Microvasculature. Circulation 2022; 145:134-150. [PMID: 34743558; PMCID: PMC8746912; DOI: 10.1161/circulationaha.121.057709]
Abstract
BACKGROUND The microvasculature, the smallest blood vessels in the body, has key roles in maintenance of organ health and tumorigenesis. The retinal fundus is a window for human in vivo noninvasive assessment of the microvasculature. Large-scale complementary machine learning-based assessment of the retinal vasculature with phenome-wide and genome-wide analyses may yield new insights into human health and disease. METHODS We used 97 895 retinal fundus images from 54 813 UK Biobank participants. Using convolutional neural networks to segment the retinal microvasculature, we calculated vascular density and fractal dimension as a measure of vascular branching complexity. We associated these indices with 1866 incident International Classification of Diseases-based conditions (median 10-year follow-up) and 88 quantitative traits, adjusting for age, sex, smoking status, and ethnicity. RESULTS Low retinal vascular fractal dimension and density were significantly associated with higher risks for incident mortality, hypertension, congestive heart failure, renal failure, type 2 diabetes, sleep apnea, anemia, and multiple ocular conditions, as well as corresponding quantitative traits. Genome-wide association of vascular fractal dimension and density identified 7 and 13 novel loci, respectively, that were enriched for pathways linked to angiogenesis (eg, vascular endothelial growth factor, platelet-derived growth factor receptor, angiopoietin, and WNT signaling pathways) and inflammation (eg, interleukin, cytokine signaling). CONCLUSIONS Our results indicate that the retinal vasculature may serve as a biomarker for future cardiometabolic and ocular disease and provide insights into genes and biological pathways influencing microvascular indices. Moreover, such a framework highlights how deep learning of images can quantify an interpretable phenotype for integration with electronic health record, biomarker, and genetic data to inform risk prediction and risk modification.
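The paper derives vascular density and fractal dimension from CNN-segmented vessel maps; the exact pipeline is not shown here. A common way to estimate the fractal dimension of a binary vessel mask is box counting, sketched below under the assumption of a 2-D boolean mask (box sizes are arbitrary).

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary mask.

    For each box size s, count how many s-by-s boxes contain at least one
    foreground pixel, then fit log(count) against log(1/s).
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(boxes.sum(), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return float(slope)

# Toy usage: a diagonal "vessel" has dimension close to 1
toy = np.eye(256, dtype=bool)
print(f"estimated dimension: {box_counting_dimension(toy):.2f}")
```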
Affiliation(s)
- Seyedeh Maryam Zekavat
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Vineet K. Raghu
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Cardiovascular Imaging Research Center (V.K.R.), Massachusetts General Hospital, Harvard Medical School, Boston
- Mark Trinder
- Centre for Heart Lung Innovation, University of British Columbia, Vancouver, Canada (M.T.)
- Yixuan Ye
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
- Satoshi Koyama
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Michael C. Honigberg
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Zhi Yu
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Akhil Pampana
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Sarah Urbut
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Sara Haidermota
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Declan P. O’Regan
- MRC London Institute of Medical Sciences, Imperial College London, UK (D.P.O.)
- Hongyu Zhao
- Computational Biology & Bioinformatics Program (S.M.Z., Y.Y., H.Z.), Yale University, New Haven, CT
- School of Public Health (H.Z.), Yale University, New Haven, CT
- Patrick T. Ellinor
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
- Ayellet V. Segrè
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
- Tobias Elze
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
- Janey L. Wiggs
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
- James Martone
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Ron A. Adelman
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Nazlee Zebardast
- Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston (A.V.S., T.E., J.L.W., N.Z.)
- Lucian Del Priore
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Jay C. Wang
- Department of Ophthalmology and Visual Science, Yale School of Medicine, New Haven, CT (S.M.Z., J.M., R.A.A., L.D.P., J.C.W.)
- Pradeep Natarajan
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of MIT and Harvard, Cambridge, MA (S.M.Z., V.K.R., M.T., S.K., M.C.H., Z.Y., A.P., S.U., P.T.E., P.N.)
- Cardiovascular Research Center (S.M.Z., V.K.R., M.C.H., S.U., S.H., P.T.E., P.N.), Massachusetts General Hospital, Harvard Medical School, Boston
5
Fasaeiyan N, Soltani M, Moradi Kashkooli F, Taatizadeh E, Rahmim A. Computational modeling of PET tracer distribution in solid tumors integrating microvasculature. BMC Biotechnol 2021; 21:67. [PMID: 34823506; PMCID: PMC8620574; DOI: 10.1186/s12896-021-00725-3]
Abstract
BACKGROUND We present computational modeling of positron emission tomography radiotracer uptake with consideration of blood flow and interstitial fluid flow, performing spatiotemporally-coupled modeling of uptake and integrating the microvasculature. In our mathematical modeling, the uptake of fluorodeoxyglucose F-18 (FDG) was simulated based on the convection-diffusion-reaction equation, given its high accuracy and reliability in modeling transport phenomena. In the proposed model, blood flow and interstitial flow are solved simultaneously to calculate interstitial pressure and velocity distribution inside cancerous and normal tissues. As a result, the spatiotemporal distribution of the FDG tracer is calculated based on velocity and pressure distributions in both kinds of tissue. RESULTS Interstitial pressure has its maximum value in the tumor region compared with the surrounding tissue. In addition, interstitial fluid velocity is extremely low in the entire computational domain, indicating that convection can be neglected without noticeably affecting the results. Furthermore, our results illustrate that the total concentration of FDG in the tumor region is an order of magnitude larger than in the surrounding normal tissue, due to the lack of a functional lymphatic drainage system and the highly permeable microvessels in tumors. The magnitudes of the free tracer and metabolized (phosphorylated) radiotracer concentrations followed very different trends over the entire time period, regardless of tissue type (tumor vs. normal). CONCLUSION Our spatiotemporally-coupled modeling provides helpful tools towards improved understanding and quantification of in vivo preclinical and clinical studies.
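The general form of a convection-diffusion-reaction equation for a tracer concentration C, which the model described above builds on, is

$$\frac{\partial C}{\partial t} + \nabla \cdot (\mathbf{v}\, C) = \nabla \cdot (D\, \nabla C) + R(C),$$

where $\mathbf{v}$ is the interstitial fluid velocity, $D$ the effective diffusion coefficient, and $R(C)$ collects the exchange and reaction terms (e.g., vascular extravasation, lymphatic drainage, and phosphorylation in the compartmental kinetics). The specific terms and parameter values used by the authors are not reproduced here.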
Affiliation(s)
- Niloofar Fasaeiyan
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Tehran Province, Iran
- Department of Civil Engineering, Polytechnique University, Montreal, QC, Canada
- M Soltani
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Tehran Province, Iran.
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada.
- Centre for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, ON, Canada.
- Advanced Bioengineering Initiative Center, Computational Medicine Center, K. N. Toosi University of Technology, Tehran, Tehran Province, Iran.
- Farshad Moradi Kashkooli
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Tehran Province, Iran
- Erfan Taatizadeh
- Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Tehran Province, Iran
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Arman Rahmim
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
6
Segmentation and Automatic Identification of Vasculature in Coronary Angiograms. Comput Math Methods Med 2021; 2021:2747274. [PMID: 34659446; PMCID: PMC8516542; DOI: 10.1155/2021/2747274]
Abstract
Coronary angiography is the “gold standard” for the diagnosis of coronary heart disease, of which vessel segmentation and identification technologies are paid much attention to. However, because of the characteristics of coronary angiograms, such as the complex and variable morphology of coronary artery structure and the noise caused by various factors, there are many difficulties in these studies. To conquer these problems, we design a preprocessing scheme including block-matching and 3D filtering, unsharp masking, contrast-limited adaptive histogram equalization, and multiscale image enhancement to improve the quality of the image and enhance the vascular structure. To achieve vessel segmentation, we use the C-V model to extract the vascular contour. Finally, we propose an improved adaptive tracking algorithm to realize automatic identification of the vascular skeleton. According to our experiments, the vascular structures can be successfully highlighted and the background is restrained by the preprocessing scheme, the continuous contour of the vessel is extracted accurately by the C-V model, and it is verified that the proposed tracking method has higher accuracy and stronger robustness compared with the existing adaptive tracking method.
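As a rough, partial illustration of the preprocessing chain described above, the OpenCV sketch below applies only the unsharp-masking and CLAHE steps; the clip limit, tile size and blur strength are arbitrary choices rather than the paper's settings, and the block-matching/3D-filtering and multiscale-enhancement steps are omitted.

```python
import cv2
import numpy as np

def enhance_angiogram(gray):
    """Rough two-step enhancement: unsharp masking followed by CLAHE.

    gray: single-channel uint8 angiogram frame.
    """
    # Unsharp masking: add back the difference between the image and a blurred copy.
    blurred = cv2.GaussianBlur(gray, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)

    # Contrast-limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(sharpened)

# Usage on a synthetic frame standing in for a real angiogram
frame = (np.random.rand(512, 512) * 255).astype(np.uint8)
print(enhance_angiogram(frame).shape)
```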
7
Hu X, Wang L, Cheng S, Li Y. HDC-Net: A hierarchical dilation convolutional network for retinal vessel segmentation. PLoS One 2021; 16:e0257013. [PMID: 34492064; PMCID: PMC8423235; DOI: 10.1371/journal.pone.0257013]
Abstract
The cardinal signs of some ophthalmic diseases, such as retinal vein occlusion and diabetic retinopathy, are observed as abnormalities of the retinal blood vessels. Deep learning models that automatically obtain morphological and structural information about these vessels are therefore conducive to early treatment and proactive prevention of ophthalmic diseases. In our work, we propose a hierarchical dilation convolutional network (HDC-Net) to extract retinal vessels in a pixel-to-pixel manner. It utilizes the hierarchical dilation convolution (HDC) module to capture the fragile retinal blood vessels usually neglected by other methods. An improved residual dual efficient channel attention (RDECA) module infers more delicate channel information to reinforce the discriminative capability of the model. The structured DropBlock helps the HDC-Net model counter network overfitting effectively. The segmentation results obtained by HDC-Net are superior to those of other deep learning methods on three widely used datasets (DRIVE, CHASE-DB1, STARE): the sensitivity, specificity, accuracy, F1-score and AUC are {0.8252, 0.9829, 0.9692, 0.8239, 0.9871}, {0.8227, 0.9853, 0.9745, 0.8113, 0.9884}, and {0.8369, 0.9866, 0.9751, 0.8385, 0.9913}, respectively. It surpasses most other advanced retinal vessel segmentation models. Qualitative and quantitative analyses demonstrate that HDC-Net can fulfill the task of retinal vessel segmentation efficiently and accurately.
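The HDC module itself is specified in the paper; as a loose sketch of the underlying mechanism only, the PyTorch block below stacks parallel 3x3 convolutions with increasing dilation rates and fuses their concatenated outputs (channel counts and dilation rates are illustrative assumptions, not the authors' configuration).

```python
import torch
import torch.nn as nn

class DilationBlock(nn.Module):
    """Parallel 3x3 convolutions with growing dilation, concatenated and fused by a 1x1 conv."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Toy usage on a fundus-sized patch
block = DilationBlock(3, 16)
print(block(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```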
Affiliation(s)
- Xiaolong Hu
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Liejun Wang
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Shuli Cheng
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
- Yongming Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, China
8
Xu R, Liu T, Ye X, Liu F, Lin L, Li L, Tanaka S, Chen YW. Joint Extraction of Retinal Vessels and Centerlines Based on Deep Semantics and Multi-Scaled Cross-Task Aggregation. IEEE J Biomed Health Inform 2021; 25:2722-2732. [PMID: 33320815; DOI: 10.1109/jbhi.2020.3044957]
Abstract
Retinal vessel segmentation and centerline extraction are crucial steps in building a computer-aided diagnosis system on retinal images. Previous works treat them as two isolated tasks, while ignoring their tight association. In this paper, we propose a deep semantics and multi-scaled cross-task aggregation network that takes advantage of the association to jointly improve their performances. Our network is featured by two sub-networks. The forepart is a deep semantics aggregation sub-network that aggregates strong semantic information to produce more powerful features for both tasks, and the tail is a multi-scaled cross-task aggregation sub-network that explores complementary information to refine the results. We evaluate the proposed method on three public databases, which are DRIVE, STARE and CHASE_DB1. Experimental results show that our method can not only simultaneously extract retinal vessels and their centerlines but also achieve the state-of-the-art performances on both tasks.
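The joint network architecture is not reproduced here. As a simple baseline for the centerline task alone, the sketch below derives a one-pixel-wide centerline from an already segmented binary vessel mask by morphological skeletonization with scikit-image; this is illustrative only and not the authors' method.

```python
import numpy as np
from skimage.morphology import skeletonize

def centerline_from_mask(vessel_mask):
    """One-pixel-wide centerline of a binary vessel mask."""
    return skeletonize(np.asarray(vessel_mask, dtype=bool))

# Toy usage: a thick horizontal "vessel"
mask = np.zeros((64, 64), dtype=bool)
mask[30:35, 10:54] = True
print(int(centerline_from_mask(mask).sum()), "centerline pixels")
```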
9
Han T, Ai D, An R, Fan J, Song H, Wang Y, Yang J. Ordered multi-path propagation for vessel centerline extraction. Phys Med Biol 2021; 66. [PMID: 34157702; DOI: 10.1088/1361-6560/ac0d8e]
Abstract
Vessel centerline extraction from x-ray angiography images is essential for vessel structure analysis in the diagnosis of coronary artery disease. However, complete and continuous centerline extraction remains a challenging task due to image noise, poor contrast, and the complexity of vessel structure. Thus, an iterative multi-path search framework for automatic vessel centerline extraction is proposed. First, the seed points of the vessel structure are detected and sorted by confidence. With the ordered seed points, the multi-bifurcation centerline is searched through multi-path wavefront propagation and accumulated voting. Finally, the centerline is further extended piecewise by wavefront propagation on the basis of keypoint detection. The latter two steps are performed alternately to obtain the final centerline result. The proposed method is qualitatively and quantitatively evaluated on 1260 synthetic images and 50 clinical angiography images. The results demonstrate that our method achieves a high F1 score of 87.8% ± 2.7% on the angiography images and produces accurate and continuous vessel centerline extraction results.
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Ruirui An
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, People's Republic of China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
10
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700; DOI: 10.1016/j.media.2020.101905]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee, DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
11
Turkmen HI, Karsligil ME, Kocak I. Visible Vessels of Vocal Folds: Can they have a Diagnostic Role? Curr Med Imaging 2020; 15:785-795. [PMID: 32008546; DOI: 10.2174/1573405614666180604083854]
Abstract
BACKGROUND Challenges in the visual identification of laryngeal disorders lead researchers to investigate new opportunities to support clinical examination. This paper presents an efficient and simple method that extracts and assesses blood vessels on vocal fold tissue in order to serve medical diagnosis. METHODS The proposed vessel segmentation approach was designed to overcome the difficulties raised by the design specifications of videolaryngostroboscopy and the anatomic structure of the vocal fold vasculature. The limited number of medical studies on vocal fold vasculature points out that the direction of blood vessels and the amount of vasculature are discriminative features for vocal fold disorders. Therefore, we extracted vessel features on the basis of these studies. We represent vessels as vascular vectors and suggest a vector-field-based measurement that quantifies the orientation pattern of blood vessels towards vocal fold pathologies. RESULTS In order to demonstrate the relationship between vessel structure and vocal fold disorders, we performed classification of vocal fold disorders using only vessel features. A binary tree of Support Vector Machines (SVM) was exploited for classification. The average recall of the proposed vessel extraction method was calculated as 0.82, while a classification accuracy of 0.75 was achieved for the healthy, sulcus vocalis and laryngitis classes. CONCLUSION The obtained success rates show the efficiency of vocal fold vessels in serving as an indicator of laryngeal diseases.
Affiliation(s)
- Hafiza Irem Turkmen
- Computer Engineering Department, Faculty of Electrical & Electronics Engineering, Yildiz Technical University, Istanbul, Turkey
- Mine Elif Karsligil
- Computer Engineering Department, Faculty of Electrical & Electronics Engineering, Yildiz Technical University, Istanbul, Turkey
- Ismail Kocak
- Otorhinolaryngology Department, Faculty of Medicine, Okan University, Istanbul, Turkey
12
Khan MA, Akram T, Sharif M, Javed K, Raza M, Saba T. An automated system for cucumber leaf diseased spot detection and classification using improved saliency method and deep features selection. Multimed Tools Appl 2020; 79:18627-18656. [DOI: 10.1007/s11042-020-08726-8]
13
Adapa D, Joseph Raj AN, Alisetti SN, Zhuang Z, Ganesan K, Naik G. A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features. PLoS One 2020; 15:e0229831. [PMID: 32142540; PMCID: PMC7059933; DOI: 10.1371/journal.pone.0229831]
Abstract
This paper proposes a new supervised method for blood vessel segmentation using Zernike moment-based shape descriptors. The method implements pixel-wise classification by computing an 11-D feature vector comprising both statistical (gray-level) features and shape-based (Zernike moment) features. The feature set also contains the optimal coefficients of the Zernike moments, derived on the basis of maximum differentiability between blood vessel and background pixels. Manually selected training points obtained from the training set of the DRIVE dataset, covering all possible manifestations, were used to train the ANN-based binary classifier. The method was evaluated on unknown test samples of the DRIVE and STARE databases and returned accuracies of 0.945 and 0.9486, respectively, outperforming other existing supervised learning methods. Furthermore, the segmented outputs were able to cover thinner blood vessels better than previous methods, aiding in the early detection of pathologies.
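Which Zernike coefficients enter the 11-D feature vector is determined by the paper's selection procedure and is not repeated here; the sketch below only shows how Zernike moment magnitudes of a local patch can be computed with the mahotas library (patch size, radius and degree are arbitrary assumptions).

```python
import numpy as np
import mahotas

def zernike_patch_features(patch, degree=8):
    """Zernike moment magnitudes of a square grayscale patch."""
    radius = patch.shape[0] // 2
    return mahotas.features.zernike_moments(patch, radius, degree=degree)

# Toy usage on a random patch around a candidate vessel pixel
patch = np.random.rand(17, 17)
feats = zernike_patch_features(patch)
print(feats.shape)  # moment magnitudes up to the chosen degree
```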
Affiliation(s)
- Dharmateja Adapa
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Sai Nikhil Alisetti
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
- Ganesan K.
- TIFAC-CORE, School of Electronics, Vellore Institute of Technology, Vellore, India
- Ganesh Naik
- MARCS Institute, Western Sydney University, Australia
14
Khawaja A, Khan TM, Khan MAU, Nawaz SJ. A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation. Sensors (Basel) 2019; 19:E4949. [PMID: 31766276; PMCID: PMC6891360; DOI: 10.3390/s19224949]
Abstract
The assessment of transformations in the retinal vascular structure has a strong potential in indicating a wide range of underlying ocular pathologies. Correctly identifying the retinal vessel map is a crucial step in disease identification, severity progression assessment, and appropriate treatment. Marking the vessels manually by a human expert is a tedious and time-consuming task, thereby reinforcing the need for automated algorithms capable of quick segmentation of retinal features and any possible anomalies. Techniques based on unsupervised learning methods utilize vessel morphology to classify vessel pixels. This study proposes a directional multi-scale line detector technique for the segmentation of retinal vessels, with the prime focus on the tiny vessels that are most difficult to segment. Constructing a directional line detector, and using it on images containing only the features oriented along the detector's direction, significantly improves the detection accuracy of the algorithm. The finishing step involves a binarization operation, which is again directional in nature and helps achieve further performance improvements in terms of key performance indicators. The proposed method obtains a sensitivity of 0.8043, 0.8011, and 0.7974 for the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), and Child Heart And health Study in England (CHASE_DB1) datasets, respectively. These results, along with the other performance enhancements demonstrated by the experimental evaluation, establish the validity and applicability of directional multi-scale line detectors as a competitive framework for retinal image segmentation.
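As a rough illustration of the line-detector idea (not the paper's exact formulation), the sketch below compares the mean intensity along rotated line kernels of several lengths with the local window mean and keeps the maximum difference; the window size, line lengths, angle sampling and the omitted directional binarization step are all assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter
from scipy.signal import fftconvolve

def line_detector_response(image, lengths=(5, 9, 15), window=15, n_angles=12):
    """Max over scales/orientations of (mean along an oriented line) - (local window mean)."""
    image = image.astype(float)
    window_mean = uniform_filter(image, size=window)
    best = np.full_like(image, -np.inf)
    for length in lengths:
        kernel = np.zeros((window, window))
        start = (window - length) // 2
        kernel[window // 2, start:start + length] = 1.0 / length
        for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
            k = rotate(kernel, angle, reshape=False, order=1)
            k /= k.sum()  # keep it a mean after interpolation
            best = np.maximum(best, fftconvolve(image, k, mode="same") - window_mean)
    return best

print(line_detector_response(np.random.rand(128, 128)).shape)
```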
Affiliation(s)
- Ahsan Khawaja
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
- Tariq M. Khan
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
- Syed Junaid Nawaz
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan; (T.M.K.); (S.J.N.)
15
Buscema M, Hieber SE, Schulz G, Deyhle H, Hipp A, Beckmann F, Lobrinus JA, Saxer T, Müller B. Ex vivo evaluation of an atherosclerotic human coronary artery via histology and high-resolution hard X-ray tomography. Sci Rep 2019; 9:14348. [PMID: 31586080; PMCID: PMC6778097; DOI: 10.1038/s41598-019-50711-1]
Abstract
Atherosclerotic arteries exhibit characteristic constrictions and substantial deviations from cylindrical shape. Therefore, determining the artery's cross-section along the centerline is challenging, although high-resolution isotropic three-dimensional data are available. Herein, we apply high-resolution computed tomography in absorption and phase to a plaque-containing human artery post-mortem, through the course of the preparation stages for histology. We identify the impact of paraffin embedding and decalcification on the artery lumen. For automatic extraction of lumen's cross-section along centerline we present a dedicated pipeline. Comparing fixated tissue before and after paraffin embedding gives rise to shape changes with lumen reduction to 50-80%. The histological slicing induces further deformations with respect to tomography. Data acquired after decalcification show debris unintentionally distributed within the vessel preventing the reliable automatic lumen segmentation. Comparing tomography of laboratory- and synchrotron-radiation-based X rays by means of joint histogram analysis leads us to conclude that advanced desktop tomography is capable of quantifying the artery's lumen as an essential input for blood flow simulations. The results indicate that the most reliable lumen quantification is achieved by imaging the non-decalcified specimen fixed in formalin, using phase contrast modality and a dedicated processing pipeline. This study focusses on a methodology to quantitatively evaluate diseased artery segments post-mortem and provides unique structural parameters on the treatment-induced local shrinkage, which will be the basis of future studies on the flow in vessels affected by constrictions.
Affiliation(s)
- Marzia Buscema
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Simone E Hieber
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland.
- Georg Schulz
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Hans Deyhle
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
- Alexander Hipp
- Institute of Materials Research, Helmholtz-Zentrum Geesthacht, Geesthacht, Germany
- Felix Beckmann
- Institute of Materials Research, Helmholtz-Zentrum Geesthacht, Geesthacht, Germany
- Till Saxer
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Bert Müller
- Biomaterials Science Center, Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland.
16
Noh KJ, Park SJ, Lee S. Scale-space approximated convolutional neural networks for retinal vessel segmentation. Comput Methods Programs Biomed 2019; 178:237-246. [PMID: 31416552; DOI: 10.1016/j.cmpb.2019.06.030]
Abstract
BACKGROUND AND OBJECTIVE Retinal fundus images are widely used to diagnose retinal diseases and can potentially be used for early diagnosis and prevention of chronic vascular diseases and diabetes. While various automatic retinal vessel segmentation methods using deep learning have been proposed, they are mostly based on common CNN structures developed for other tasks such as classification. METHODS We present a novel and simple multi-scale convolutional neural network (CNN) structure for retinal vessel segmentation. We first provide a theoretical analysis of existing multi-scale structures based on signal processing. In previous structures, multi-scale representations are achieved through downsampling by subsampling and decimation. By incorporating scale-space theory, we propose a simple yet effective multi-scale structure for CNNs using upsampling, which we term scale-space approximated CNN (SSANet). Based on further analysis of the effects of the SSA structure within a CNN, we also incorporate residual blocks, resulting in a multi-scale CNN that outperforms current state-of-the-art methods. RESULTS Quantitative evaluations are presented as the area-under-curve (AUC) of the receiver operating characteristic (ROC) curve and the precision-recall curve, as well as accuracy, for four publicly available datasets, namely DRIVE, STARE, CHASE_DB1, and HRF. For the CHASE_DB1 set, the SSANet achieves state-of-the-art AUC value of 0.9916 for the ROC curve. An ablative analysis is presented to analyze the contribution of different components of the SSANet to the performance improvement. CONCLUSIONS The proposed retinal SSANet achieves state-of-the-art or comparable accuracy across publicly available datasets, especially improving segmentation for thin vessels, vessel junctions, and central vessel reflexes.
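The SSA construction inside the network is not reproduced here; the sketch below only shows the plain Gaussian scale space that such a structure approximates, i.e. a stack of progressively smoothed copies of the input (the number of scales and the sigma schedule are arbitrary).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image, n_scales=4, sigma0=1.0):
    """Stack of the image smoothed at geometrically increasing scales."""
    scales = [sigma0 * (2.0 ** i) for i in range(n_scales)]
    return np.stack([gaussian_filter(image, sigma=s) for s in scales], axis=0)

stack = gaussian_scale_space(np.random.rand(128, 128))
print(stack.shape)  # (4, 128, 128)
```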
Affiliation(s)
- Kyoung Jin Noh
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Gyeonggi-do 13620, South Korea
- Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Gyeonggi-do 13620, South Korea.
- Soochahn Lee
- School of Electrical Engineering, Kookmin University, Seongbuk-gu, Seoul 02707, South Korea.
17
Gong C, Erichson NB, Kelly JP, Trutoiu L, Schowengerdt BT, Brunton SL, Seibel EJ. RetinaMatch: Efficient Template Matching of Retina Images for Teleophthalmology. IEEE Trans Med Imaging 2019; 38:1993-2004. [PMID: 31217098; DOI: 10.1109/tmi.2019.2923466]
Abstract
Retinal template matching and registration is an important challenge in teleophthalmology with low-cost imaging devices. However, the images from such devices generally have a small field of view (FOV) and image quality degradations, making matching difficult. In this paper, we develop an efficient and accurate retinal matching technique that combines dimension reduction and mutual information (MI), called RetinaMatch. The dimension reduction initializes the MI optimization as a coarse localization process, which narrows the optimization domain and avoids local optima. The effectiveness of RetinaMatch is demonstrated on the open fundus image database STARE with simulated reduced FOV and anticipated degradations, and on retinal images acquired by adapter-based optics attached to a smartphone. RetinaMatch achieves a success rate over 94% on human retinal images with the matched target registration errors below 2 pixels on average, excluding the observer variability, outperforming standard template matching solutions. In the application of measuring vessel diameter repeatedly, single pixel errors are expected. In addition, our method can be used in the process of image mosaicking with area-based registration, providing a robust approach when feature-based methods fail. To the best of our knowledge, this is the first template matching algorithm for retina images with small template images from unconstrained retinal areas. In the context of the emerging mixed reality market, we envision automated retinal image matching and registration methods as transformative for advanced teleophthalmology and long-term retinal monitoring.
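RetinaMatch couples a dimension-reduction initialization with mutual-information optimization; only the MI term is illustrated below, estimated from a joint histogram of two equally sized grayscale images (the bin count is an arbitrary choice, and this is not the paper's implementation).

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

img = np.random.rand(64, 64)
print(mutual_information(img, img))                       # high: image vs. itself
print(mutual_information(img, np.random.rand(64, 64)))    # near zero: unrelated images
```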
18
Multilevel and Multiscale Deep Neural Network for Retinal Blood Vessel Segmentation. Symmetry (Basel) 2019. [DOI: 10.3390/sym11070946]
Abstract
Retinal blood vessel segmentation supports the assessment of many vessel-related disorders such as diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular disease. Vessel segmentation using a convolutional neural network (CNN) has shown increased accuracy in feature extraction and vessel segmentation compared with classical segmentation algorithms, and a CNN does not need handcrafted features to train the network. In the proposed deep neural network (DNN), an improved pre-processing technique and multilevel/multiscale deep supervision (DS) layers are incorporated for proper segmentation of retinal blood vessels. From the first four layers of the VGG-16 model, multilevel/multiscale deep supervision layers are formed by convolving vessel-specific Gaussian convolutions with two different scale initializations. These layers output activation maps that are able to learn vessel-specific features at multiple scales, levels, and depths. Furthermore, the receptive field of these maps is increased to obtain symmetric feature maps that provide a refined blood vessel probability map. This map is completely free from the optic disc, boundaries, and non-vessel background. The segmentation results are tested on the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), High-Resolution Fundus (HRF), and real-world retinal datasets to evaluate performance. The proposed model achieves sensitivity values of 0.8282, 0.8979 and 0.8655 on the DRIVE, STARE and HRF datasets, with acceptable specificity and accuracy.
19
Jeelani H, Liang H, Acton ST, Weller DS. Content-Aware Enhancement of Images With Filamentous Structures. IEEE Trans Image Process 2019; 28:3451-3461. [PMID: 30716037; PMCID: PMC6538482; DOI: 10.1109/tip.2019.2897289]
Abstract
In this paper, we describe a novel enhancement method for images containing filamentous structures. Our method combines a gradient sparsity constraint with a filamentous structure constraint for the effective removal of clutter and noise from the background. The method is applied and evaluated on three types of data: 1) confocal microscopy images of neurons; 2) calcium imaging data; and 3) images of road pavement. We found that the images enhanced by our method preserve both the structure and the intensity details of the original object. In the case of neuron microscopy, we find that the neurons enhanced by our method are better correlated with the original structure intensities than the neurons enhanced by well-known vessel enhancement methods. Experiments on simulated calcium imaging data indicate that both the number of detected neurons and the accuracy of the derived calcium activity are improved. Applying our method to real calcium data, more regions exhibiting calcium activity in the full field of view were found. In road pavement crack detection, smaller or milder cracks were detected after using our enhancement method.
20
Hemelings R, Elen B, Stalmans I, Van Keer K, De Boever P, Blaschko MB. Artery-vein segmentation in fundus images using a fully convolutional network. Comput Med Imaging Graph 2019; 76:101636. [PMID: 31288217; DOI: 10.1016/j.compmedimag.2019.05.004]
Abstract
Epidemiological studies demonstrate that dimensions of retinal vessels change with ocular diseases, coronary heart disease and stroke. Different metrics have been described to quantify these changes in fundus images, with arteriolar and venular calibers among the most widely used. The analysis often includes a manual procedure during which a trained grader differentiates between arterioles and venules. This step can be time-consuming and can introduce variability, especially when large volumes of images need to be analyzed. In light of the recent successes of fully convolutional networks (FCNs) applied to biomedical image segmentation, we assess its potential in the context of retinal artery-vein (A/V) discrimination. To the best of our knowledge, a deep learning (DL) architecture for simultaneous vessel extraction and A/V discrimination has not been previously employed. With the aim of improving the automation of vessel analysis, a novel application of the U-Net semantic segmentation architecture (based on FCNs) on the discrimination of arteries and veins in fundus images is presented. By utilizing DL, results are obtained that exceed accuracies reported in the literature. Our model was trained and tested on the public DRIVE and HRF datasets. For DRIVE, measuring performance on vessels wider than two pixels, the FCN achieved accuracies of 94.42% and 94.11% on arteries and veins, respectively. This represents a decrease in error of 25% over the previous state of the art reported by Xu et al. (2017). Additionally, we introduce the HRF A/V ground truth, on which our model achieves 96.98% accuracy on all discovered centerline pixels. HRF A/V ground truth validated by an ophthalmologist, predicted A/V annotations and evaluation code are available at https://github.com/rubenhx/av-segmentation.
Affiliation(s)
- Ruben Hemelings
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium; ESAT-PSI, KU Leuven, Kasteelpark Arenberg 10, 3001 Leuven, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium
- Bart Elen
- VITO NV, Boeretang 200, 2400 Mol, Belgium
- Ingeborg Stalmans
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium
- Karel Van Keer
- Research Group Ophthalmology, KU Leuven, Kapucijnenvoer 33, 3000 Leuven, Belgium
- Patrick De Boever
- Hasselt University, Agoralaan building D, 3590 Diepenbeek, Belgium; VITO NV, Boeretang 200, 2400 Mol, Belgium.
21
Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med 2019; 94:96-109. [DOI: 10.1016/j.artmed.2019.02.004]
22
Akbar S, Sharif M, Akram MU, Saba T, Mahmood T, Kolivand M. Automated techniques for blood vessels segmentation through fundus retinal images: A review. Microsc Res Tech 2019; 82:153-170. [DOI: 10.1002/jemt.23172] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2018] [Revised: 09/26/2018] [Accepted: 10/17/2018] [Indexed: 11/09/2022]
Affiliation(s)
- Shahzad Akbar: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah, Pakistan
- Muhammad Usman Akram: Department of Computer Engineering, College of E&ME, National University of Sciences and Technology, Islamabad, Pakistan
- Tanzila Saba: College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Toqeer Mahmood: Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
23
Badawi SA, Fraz MM. Optimizing the trainable B-COSFIRE filter for retinal blood vessel segmentation. PeerJ 2018; 6:e5855. [PMID: 30479888 PMCID: PMC6238769 DOI: 10.7717/peerj.5855] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2018] [Accepted: 09/28/2018] [Indexed: 11/20/2022] Open
Abstract
Segmentation of the retinal blood vessels using filtering techniques is a widely used step in the development of automated systems for diagnostic retinal image analysis. This paper optimizes blood vessel segmentation by extending the trainable B-COSFIRE filter through the identification of more suitable parameters. The filter parameters are tuned using an optimization procedure on three public datasets (STARE, DRIVE, and CHASE-DB1). The suggested approach analyzes the selection of thresholding parameters, followed by the application of background artifact removal techniques. The results are better than those of other state-of-the-art vessel segmentation methods. An ANOVA analysis is also used to identify the parameters with the most significant impact on performance (p-value < 0.05). The proposed enhancement improved the vessel segmentation accuracy on DRIVE, STARE, and CHASE-DB1 to 95.47%, 95.30%, and 95.30%, respectively.
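The threshold-selection idea can be illustrated with a simple grid search. Since no standard Python B-COSFIRE implementation is assumed here, a Frangi vesselness response stands in for the filter output; `img` is a grayscale image and `gt` a binary ground-truth vessel mask of the same shape (both hypothetical inputs).

```python
# A minimal sketch of selecting a segmentation threshold by maximizing pixel accuracy,
# followed by small-object removal as the artifact-removal step; the B-COSFIRE response
# of the paper is replaced here by Frangi vesselness (an assumption).
import numpy as np
from skimage.filters import frangi
from skimage.morphology import remove_small_objects

def best_threshold(img, gt, n_steps=50, min_size=30):
    response = frangi(img)                          # vesselness response (stand-in)
    best = (None, -1.0)
    for t in np.linspace(response.min(), response.max(), n_steps):
        seg = remove_small_objects(response > t, min_size=min_size)  # artifact removal
        acc = np.mean(seg == gt)                    # pixel accuracy against ground truth
        if acc > best[1]:
            best = (t, acc)
    return best                                     # (optimal threshold, its accuracy)
```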
Affiliation(s)
- Sufian A. Badawi: School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
- Muhammad Moazam Fraz: School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan
24
Khan KB, Khaliq AA, Jalil A, Iftikhar MA, Ullah N, Aziz MW, Ullah K, Shahid M. A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0754-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
25
Jiang Z, Zhang H, Wang Y, Ko SB. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput Med Imaging Graph 2018; 68:1-15. [DOI: 10.1016/j.compmedimag.2018.04.005] [Citation(s) in RCA: 103] [Impact Index Per Article: 17.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Revised: 04/10/2018] [Accepted: 04/13/2018] [Indexed: 11/25/2022]
26
Yan Z, Yang X, Cheng KT. A Skeletal Similarity Metric for Quality Evaluation of Retinal Vessel Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1045-1057. [PMID: 29610081 DOI: 10.1109/tmi.2017.2778748] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The most commonly used evaluation metrics for quality assessment of retinal vessel segmentation are sensitivity, specificity, and accuracy, which are based on pixel-to-pixel matching. However, because of the inter-observer problem, in which vessels annotated by different observers vary in both thickness and location, pixel-to-pixel matching is too restrictive to fairly evaluate the results of vessel segmentation. In this paper, the proposed skeletal similarity metric is constructed by comparing the skeleton maps generated from the reference and the source vessel segmentation maps. To address the inter-observer problem, instead of using a pixel-to-pixel matching strategy, each skeleton segment in the reference skeleton map is adaptively assigned a searching range whose radius is determined by its vessel thickness. Pixels in the source skeleton map located within the searching range are then selected for similarity calculation. The skeletal similarity consists of a curve similarity, which measures the structural similarity between the reference and the source skeleton maps, and a thickness similarity, which measures the thickness consistency between the reference and the source vessel segmentation maps. In contrast to other metrics that provide a single global score for overall performance, we redefine true positive, false negative, true negative, and false positive in terms of the skeletal similarity, from which sensitivity, specificity, accuracy, and other objective measurements can be constructed. More importantly, the skeletal similarity metric has better potential to be used as a pixelwise loss function for training deep learning models for retinal vessel segmentation. Through comparison of a set of examples, we demonstrate that the redefined metrics based on the skeletal similarity are more effective for quality evaluation, especially given their greater tolerance to the inter-observer problem.
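A rough sketch of the thickness-adaptive matching idea is given below: a source skeleton pixel counts as matched if it falls within a searching range derived from the local vessel radius of the reference segmentation. This covers only the matching step, not the full curve-plus-thickness similarity; `ref_mask` and `src_mask` are assumed to be binary vessel maps.

```python
# A rough sketch of tolerance-based skeleton matching (not the paper's full metric).
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize

def skeletal_match_fraction(ref_mask, src_mask):
    ref_skel = skeletonize(ref_mask)
    src_skel = skeletonize(src_mask)
    radius = ndi.distance_transform_edt(ref_mask)        # local vessel radius
    # Distance of every pixel to the nearest reference skeleton pixel, plus the index
    # of that pixel so its thickness-dependent tolerance can be looked up.
    dist, inds = ndi.distance_transform_edt(~ref_skel, return_indices=True)
    tol = radius[inds[0], inds[1]]
    matched = src_skel & (dist <= np.maximum(tol, 1.0))
    return matched.sum() / max(src_skel.sum(), 1)        # fraction of matched pixels
```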
27
Pathan S, Siddalingaswamy PC, Prabhu KG. A pixel processing approach for retinal vessel extraction using modified Gabor functions. PROGRESS IN ARTIFICIAL INTELLIGENCE 2018. [DOI: 10.1007/s13748-017-0134-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
28
Retinal Vessels Segmentation Techniques and Algorithms: A Survey. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8020155] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
29
Memari N, Ramli AR, Bin Saripan MI, Mashohor S, Moghbel M. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier. PLoS One 2017; 12:e0188939. [PMID: 29228036 PMCID: PMC5724901 DOI: 10.1371/journal.pone.0188939] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Accepted: 11/15/2017] [Indexed: 11/19/2022] Open
Abstract
The structure and appearance of the blood vessel network in retinal fundus images are an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using the contrast-limited adaptive histogram equalization (CLAHE) method, and the inhomogeneity is corrected using a Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove misclassified pixels and regions. The proposed method was validated using the publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to that of other state-of-the-art methods while being very close to the manual segmentation provided by the second human observer, with average accuracies of 0.972, 0.951, and 0.948 on the DRIVE, STARE, and CHASE_DB1 datasets, respectively.
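A condensed sketch of such a supervised pixel-classification pipeline is shown below (CLAHE enhancement, vesselness features, pixel-wise AdaBoost). B-COSFIRE is replaced by a second Frangi scale because no standard Python implementation is assumed, and the morphological preprocessing, Retinex correction, and postprocessing steps are omitted; `img`, `new_img`, and `gt` are hypothetical inputs (grayscale images in [0, 1] and a binary vessel mask).

```python
# A condensed, illustrative version of a matched-filter + AdaBoost vessel segmenter;
# feature choices and parameters are assumptions, not the paper's exact setup.
import numpy as np
from skimage import exposure, filters
from sklearn.ensemble import AdaBoostClassifier

def pixel_features(img):
    enh = exposure.equalize_adapthist(img)              # CLAHE contrast enhancement
    feats = [enh,
             filters.frangi(enh, sigmas=(1, 2, 3)),     # fine-scale vesselness
             filters.frangi(enh, sigmas=(3, 5, 7)),     # coarse-scale vesselness
             filters.gaussian(enh, sigma=2)]            # local intensity context
    return np.stack([f.ravel() for f in feats], axis=1)

def train_and_segment(img, gt, new_img):
    X, y = pixel_features(img), gt.ravel().astype(int)
    clf = AdaBoostClassifier(n_estimators=50).fit(X, y) # pixel-wise classifier
    pred = clf.predict(pixel_features(new_img))
    return pred.reshape(new_img.shape).astype(bool)
```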
Affiliation(s)
- Nogol Memari: Department of Computer & Communication Systems, Faculty of Engineering, University Putra Malaysia, Serdang, Selangor, Malaysia
- Abd Rahman Ramli: Department of Computer & Communication Systems, Faculty of Engineering, University Putra Malaysia, Serdang, Selangor, Malaysia
- M. Iqbal Bin Saripan: Department of Computer & Communication Systems, Faculty of Engineering, University Putra Malaysia, Serdang, Selangor, Malaysia
- Syamsiah Mashohor: Department of Computer & Communication Systems, Faculty of Engineering, University Putra Malaysia, Serdang, Selangor, Malaysia
- Mehrdad Moghbel: Department of Computer & Communication Systems, Faculty of Engineering, University Putra Malaysia, Serdang, Selangor, Malaysia
30
Ma J, Jiang J, Liu C, Li Y. Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.07.010] [Citation(s) in RCA: 108] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
31
Jiang H, Ma H, Qian W, Gao M, Li Y. An Automatic Detection System of Lung Nodule Based on Multigroup Patch-Based Deep Learning Network. IEEE J Biomed Health Inform 2017; 22:1227-1237. [PMID: 28715341 DOI: 10.1109/jbhi.2017.2725903] [Citation(s) in RCA: 95] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
High-efficiency lung nodule detection contributes substantially to the risk assessment of lung cancer. Quickly locating the exact positions of lung nodules is a significant and challenging task, and extensive work has been done in this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require additional image processing modules, such as computed tomography image transformation, lung nodule segmentation, and feature extraction, to construct a whole CADe system. It is difficult for these schemes to process and analyze the enormous amounts of data produced as the volume of medical images continues to increase. In addition, some state-of-the-art deep learning schemes impose strict requirements on the database. This study proposes an effective lung nodule detection scheme based on multigroup patches cut out from the lung images and enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network model is designed to learn the knowledge of radiologists for detecting nodules at four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multigroup patch-based learning system efficiently improves the performance of lung nodule detection and greatly reduces the false positives under a huge amount of image data.
32
An automatic and efficient coronary arteries extraction method in CT angiographies. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2017.04.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
33
Bulant CA, Blanco PJ, Müller LO, Scharfstein J, Svensjö E. Computer-aided quantification of microvascular networks: Application to alterations due to pathological angiogenesis in the hamster. Microvasc Res 2017; 112:53-64. [DOI: 10.1016/j.mvr.2017.03.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2016] [Revised: 03/04/2017] [Accepted: 03/11/2017] [Indexed: 12/25/2022]
34
Soomro TA, Gao J, Khan T, Hani AFM, Khan MAU, Paul M. Computerised approaches for the detection of diabetic retinopathy using retinal fundus images: a survey. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0630-y] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
35
Zhang M, Hwang TS, Dongye C, Wilson DJ, Huang D, Jia Y. Automated Quantification of Nonperfusion in Three Retinal Plexuses Using Projection-Resolved Optical Coherence Tomography Angiography in Diabetic Retinopathy. Invest Ophthalmol Vis Sci 2017; 57:5101-5106. [PMID: 27699408 PMCID: PMC5054727 DOI: 10.1167/iovs.16-19776] [Citation(s) in RCA: 98] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Purpose: To evaluate an automated algorithm for detecting avascular area (AA) in optical coherence tomography angiograms (OCTAs) separated into three individual plexuses using a projection-resolved technique. Methods: A 3 × 3 mm macular OCTA was obtained in 13 healthy and 13 mild nonproliferative diabetic retinopathy (NPDR) participants. A projection-resolved algorithm segmented OCTA into three vascular plexuses: superficial, intermediate, and deep. An automated algorithm detected AA in each of the three segmented plexuses and in the combined inner-retinal angiograms. We assessed the diagnostic accuracy of extrafoveal and total AA using segmented and combined angiograms, the agreement between automated and manual detection of AA, and the within-visit repeatability. Results: The sum of extrafoveal AA from the segmented angiograms was larger in the NPDR group by 0.17 mm² (P < 0.001) and detected NPDR with 94.6% sensitivity (area under the receiver operating characteristic curve [AROC] = 0.99). In the combined inner-retinal angiograms, the extrafoveal AA was larger in the NPDR group by 0.01 mm² (P = 0.168) and detected NPDR with 26.9% sensitivity (AROC = 0.62). The total AA, inclusive of the foveal avascular zone, in the segmented and combined angiograms detected NPDR with 23.1% and 7.7% sensitivity, respectively. The agreement between manual and automated detection of AA had a Jaccard index of >0.8. The pooled SDs of AA were small compared with the difference in means between the control and NPDR groups. Conclusions: An algorithm that detects AA in OCTA separated into three individual plexuses using a projection-resolved technique accurately distinguishes mild NPDR from control eyes. Automatically detected AA agrees with manual delineation and is highly repeatable.
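The avascular-area measurement can be approximated, very coarsely, by thresholding a local vessel-density map computed from a binary en face angiogram. The sketch below assumes a 3 × 3 mm scan and a hypothetical `vessel_mask` input; the published projection-resolved segmentation and AA-detection algorithm is substantially more involved.

```python
# A simplified sketch of avascular-area estimation on a single en face angiogram
# (assumptions: `vessel_mask` is a binary vessel map from one plexus, scan is 3 x 3 mm,
# and all thresholds are illustrative).
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import remove_small_objects

def avascular_area_mm2(vessel_mask, scan_mm=3.0, win=25,
                       density_thresh=0.05, min_px=50):
    density = ndi.uniform_filter(vessel_mask.astype(float), size=win)  # local vessel density
    avascular = remove_small_objects(density < density_thresh, min_size=min_px)
    px_area = (scan_mm / vessel_mask.shape[0]) * (scan_mm / vessel_mask.shape[1])
    return avascular.sum() * px_area                                   # area in mm^2
```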
Affiliation(s)
- Miao Zhang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
- Thomas S Hwang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
- Changlei Dongye: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States; College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- David J Wilson: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
- David Huang: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
- Yali Jia: Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, United States
36
Retinal Image Denoising via Bilateral Filter with a Spatial Kernel of Optimally Oriented Line Spread Function. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2017; 2017:1769834. [PMID: 28261320 PMCID: PMC5316463 DOI: 10.1155/2017/1769834] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Revised: 11/30/2016] [Accepted: 12/13/2016] [Indexed: 11/18/2022]
Abstract
Filtering is one of the most fundamental operations in retinal image processing; the value of the filtered image at a given location is a function of the values in a local window centered at that location. However, preserving thin retinal vessels during the filtering process is challenging because of their small area and weak contrast relative to the background, which result from the limited imaging resolution and the reduced blood flow in thin vessels. In this paper, we present a novel retinal image denoising approach that preserves the details of retinal vessels while effectively eliminating image noise. Specifically, our approach determines an optimal spatial kernel for the bilateral filter, represented by a line spread function whose orientation and scale are adjusted adaptively to the local vessel structure. Moreover, this approach can also serve as a preprocessing tool for improving the accuracy of vessel detection techniques. Experimental results show the superiority of our approach over state-of-the-art image denoising techniques such as the bilateral filter.
37
Rezaee K, Haddadnia J, Tashk A. Optimized clinical segmentation of retinal blood vessels by using combination of adaptive filtering, fuzzy entropy and skeletonization. Appl Soft Comput 2017. [DOI: 10.1016/j.asoc.2016.09.033] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
38
Almasi S, Ben-Zvi A, Lacoste B, Gu C, Miller EL, Xu X. Joint volumetric extraction and enhancement of vasculature from low-SNR 3-D fluorescence microscopy images. PATTERN RECOGNITION 2017; 63:710-718. [PMID: 28566796 PMCID: PMC5446895 DOI: 10.1016/j.patcog.2016.09.031] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
To simultaneously overcome the challenges imposed by the nature of optical imaging, characterized by a range of artifacts including a space-varying signal-to-noise ratio (SNR), scattered light, and non-uniform illumination, we developed a novel method that segments the 3-D vasculature directly from the original fluorescence microscopy images, eliminating the need for the pre- and post-processing steps, such as noise removal and segmentation refinement, used with the majority of segmentation techniques. Our method comprises two stages: initialization, and constrained recovery and enhancement. The initialization approach is fully automated using features derived from bi-scale statistical measures and produces seed points robust to non-uniform illumination, low SNR, and local structural variations. The algorithm achieves segmentation via an iterative approach that extracts the structure through voting of feature vectors formed by distance, local intensity gradient, and median measures. Qualitative and quantitative analysis of the experimental results obtained from synthetic and real data proves the efficacy of this method in comparison to state-of-the-art enhancing-segmenting methods. The algorithmic simplicity, freedom from a priori probabilistic information about the noise, and structural definition give this algorithm a wide potential range of applications in which, for example, structural complexity significantly complicates the segmentation problem.
Affiliation(s)
- Sepideh Almasi: Department of Electrical and Computer Engineering, Tufts University, Medford, MA, USA
- Ayal Ben-Zvi: Department of Neurobiology, Harvard Medical School, Boston, MA, USA; Department of Developmental Biology and Cancer Research, Institute for Medical Research IMRIC, Hebrew University of Jerusalem, Israel
- Baptiste Lacoste: Department of Cellular and Molecular Medicine, University of Ottawa Brain and Mind Research Institute, The Ottawa Hospital Research Institute, Neuroscience Program, Ottawa, ON, Canada
- Chenghua Gu: Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Eric L. Miller: Department of Electrical and Computer Engineering, Tufts University, Medford, MA, USA
- Xiaoyin Xu: Department of Radiology, Brigham and Women’s Hospital, Boston, MA, USA
39
Ram S, Danford F, Howerton S, Rodriguez JJ, Geest JPV. Three-Dimensional Segmentation of the Ex-Vivo Anterior Lamina Cribrosa From Second-Harmonic Imaging Microscopy. IEEE Trans Biomed Eng 2017; 65:1617-1629. [PMID: 28252388 DOI: 10.1109/tbme.2017.2674521] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The lamina cribrosa (LC) is a connective tissue in the posterior eye with a complex mesh-like trabecular microstructure, through which all the retinal ganglion cell axons and central retinal vessels pass. Recent studies have demonstrated that changes in the structure of the LC correlate with glaucomatous damage. Thus, accurate segmentation and reconstruction of the LC is of utmost importance. This paper presents a new automated method for segmenting the microstructure of the anterior LC in the images obtained via multiphoton microscopy using a combination of ideas. In order to reduce noise, we first smooth the input image using a 4-D collaborative filtering scheme. Next, we enhance the beam-like trabecular microstructure of the LC using wavelet multiresolution analysis. The enhanced LC microstructure is then automatically extracted using a combination of histogram thresholding and graph-cut binarization. Finally, we use morphological area opening as a postprocessing step to remove the small and unconnected 3-D regions in the binarized images. The performance of the proposed method is evaluated using mutual overlap accuracy, Tanimoto index, F-score, and Rand index. Quantitative and qualitative results show that the proposed algorithm provides improved segmentation accuracy and computational efficiency compared to the other recent algorithms.
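A bare-bones version of the middle of this pipeline (wavelet enhancement, thresholding, and an area-opening analogue) might look as follows. The 4-D collaborative filtering and graph-cut steps are omitted, `vol` is a hypothetical 3-D image stack, and the wavelet choice and suppression factor are illustrative assumptions, not the paper's settings.

```python
# A stripped-down sketch: damp the coarse wavelet band to enhance the beam-like
# microstructure, threshold, and remove small unconnected regions.
import pywt
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def enhance_and_binarize(vol, wavelet="db2", level=2, approx_gain=0.2, min_size=64):
    coeffs = pywt.wavedecn(vol, wavelet, level=level)
    coeffs[0] = coeffs[0] * approx_gain                       # damp the coarse background band
    enhanced = pywt.waverecn(coeffs, wavelet)
    enhanced = enhanced[tuple(slice(s) for s in vol.shape)]   # crop reconstruction padding
    binary = enhanced > threshold_otsu(enhanced)              # global histogram threshold
    return remove_small_objects(binary, min_size=min_size)    # area-opening analogue in 3-D
```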
40
Noise-estimation-based anisotropic diffusion approach for retinal blood vessel segmentation. Neural Comput Appl 2017. [DOI: 10.1007/s00521-016-2811-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
41
Vostatek P, Claridge E, Uusitalo H, Hauta-Kasari M, Fält P, Lensu L. Performance comparison of publicly available retinal blood vessel segmentation methods. Comput Med Imaging Graph 2017; 55:2-12. [DOI: 10.1016/j.compmedimag.2016.07.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2016] [Revised: 07/18/2016] [Accepted: 07/21/2016] [Indexed: 10/21/2022]
42
Kaur J, Mittal D. A generalized method for the detection of vascular structure in pathological retinal images. Biocybern Biomed Eng 2017. [DOI: 10.1016/j.bbe.2016.09.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
43
44
Subudhi A, Pattnaik S, Sabut S. Blood vessel extraction of diabetic retinopathy using optimized enhanced images and matched filter. J Med Imaging (Bellingham) 2016; 3:044003. [PMID: 27981066 DOI: 10.1117/1.jmi.3.4.044003] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Accepted: 11/04/2016] [Indexed: 11/14/2022] Open
Abstract
Accurate extraction of structural changes in the blood vessels of the retina is an essential task in the diagnosis of retinopathy. The matched filter (MF) technique is an effective way to extract blood vessels, but its effectiveness is reduced by noisy images. The concepts of the MF and the MF with first-order derivative of Gaussian (MF-FDOG) were implemented for retinal images of the DRIVE database. A particle swarm optimization (PSO) algorithm is used to enhance the images via edgels and thereby improve the performance of the filters. Vessels were detected by thresholding the MF response, with the threshold adjusted according to the FDOG response. The PSO-based enhanced MF response significantly improved the ability of the filters to extract fine blood vessel structures. Experimental results show that the proposed method based on enhanced images improved the accuracy to 91.1%, which is higher than that of the MF and MF-FDOG methods. The peak signal-to-noise ratio was also higher, with lower mean square error values, in the enhanced MF response. The accuracy, sensitivity, and specificity values are significantly improved among the MF, MF-FDOG, and PSO-enhanced images ([Formula: see text]).
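For context, the MF / MF-FDOG idea itself can be sketched as follows: oriented zero-mean Gaussian-profile kernels give the matched-filter response, and the first-derivative-of-Gaussian response modulates a local threshold. The PSO-based image enhancement described above is omitted; `img` is assumed to be an inverted green channel (vessels bright) as a float array, and all parameter values are illustrative.

```python
# An illustrative MF-FDOG sketch, not the paper's PSO-enhanced pipeline.
import numpy as np
from scipy import ndimage as ndi

def oriented_kernels(sigma=1.5, length=9, n_angles=12, half=7):
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    mfs, fdogs = [], []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        u = x * np.cos(theta) + y * np.sin(theta)        # across-vessel coordinate
        v = -x * np.sin(theta) + y * np.cos(theta)       # along-vessel coordinate
        support = np.abs(v) <= length / 2
        g = np.exp(-u ** 2 / (2 * sigma ** 2)) * support
        mfs.append(g - g[support].mean() * support)      # zero-mean matched filter
        fdogs.append((-u / sigma ** 2) * np.exp(-u ** 2 / (2 * sigma ** 2)) * support)
    return mfs, fdogs

def mf_fdog_segment(img, c=2.3, w=31):
    mfs, fdogs = oriented_kernels()
    H = np.max([ndi.convolve(img, k) for k in mfs], axis=0)           # MF response
    D = np.max([np.abs(ndi.convolve(img, k)) for k in fdogs], axis=0) # FDOG response
    dm = ndi.uniform_filter(D, size=w)
    dm = dm / (dm.max() + 1e-12)                          # normalised local FDOG mean
    threshold = c * H.mean() * (1.0 + dm)                 # locally adjusted threshold
    return H >= threshold
```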
Affiliation(s)
- Asit Subudhi: SOA University, Department of Electronics and Communication Engineering, Institute of Technical Education and Research, Bhubaneswar, Odisha, India
- Subhra Pattnaik: SOA University, Department of Electronics and Communication Engineering, Institute of Technical Education and Research, Bhubaneswar, Odisha, India
- Sukanta Sabut: SOA University, Department of Electronics and Instrumentation Engineering, Institute of Technical Education and Research, Bhubaneswar, Odisha, India
45
Christodoulidis A, Hurtut T, Tahar HB, Cheriet F. A multi-scale tensor voting approach for small retinal vessel segmentation in high resolution fundus images. Comput Med Imaging Graph 2016; 52:28-43. [DOI: 10.1016/j.compmedimag.2016.06.001] [Citation(s) in RCA: 51] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Revised: 04/16/2016] [Accepted: 06/01/2016] [Indexed: 11/29/2022]
46
DiLorenzo T, Ligon L, Drew D. Determination of Statistical Properties of Microtubule Populations. ACTA ACUST UNITED AC 2016; 7:1456-1475. [PMID: 31123623 PMCID: PMC6528678 DOI: 10.4236/am.2016.713125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Microtubules are structures within the cell that form a transportation network along which motor proteins tow cargo to destinations. To establish and maintain a structure capable of serving the cell’s tasks, microtubules undergo deconstruction and reconstruction regularly. This change in structure is critical to tasks like wound repair and cell motility. Images of fluorescing microtubule networks are captured in grayscale at different wavelengths, displaying different tagged proteins. The analysis of these polymeric structures involves identifying the presence of the protein and the direction of the structure in which it resides. This study considers the problem of finding statistical properties of sections of microtubules. We consider the research done on directional filters and utilize a basic solution to find the center of a ridge. The method processes the captured image by centering a circle around pre-determined pixel locations so that the highest possible average pixel intensity is found within the circle, thus marking the center of the microtubule. The location of these centers allows us to estimate angular direction and curvature of the microtubules, statistically estimate the direction of microtubules in a region of the cell, and compare properties of different types of microtubule networks in the same region. To verify accuracy, we study the results of the method on a test image.
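The circle-averaging step can be sketched directly: around each candidate pixel, search a small neighbourhood for the circle centre whose enclosed average intensity is highest. `img` is a hypothetical grayscale array and `seed` a (row, col) candidate; the subsequent statistical estimation of direction and curvature is not reproduced here.

```python
# A small sketch of the circle-averaging localization step; radii and search ranges
# are illustrative assumptions.
import numpy as np

def refine_center(img, seed, radius=3, search=4):
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy ** 2 + xx ** 2) <= radius ** 2             # circular averaging window
    best, best_mean = seed, -np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = seed[0] + dr, seed[1] + dc
            if (r - radius < 0 or c - radius < 0 or
                    r + radius >= img.shape[0] or c + radius >= img.shape[1]):
                continue                                   # skip circles that leave the image
            patch = img[r - radius:r + radius + 1, c - radius:c + radius + 1]
            m = patch[disk].mean()                         # average intensity inside circle
            if m > best_mean:                              # highest average wins
                best, best_mean = (r, c), m
    return best                                            # estimated microtubule centre
```

Local direction could then be estimated, for example, by fitting a line through neighbouring refined centres.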
Affiliation(s)
- Tyson DiLorenzo: Department of Mathematics, Rensselaer Polytechnic Institute, Troy, USA
- Lee Ligon: Department of Biological Sciences and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, USA
- Donald Drew: Department of Mathematics, Rensselaer Polytechnic Institute, Troy, USA
47
Chen B, Chen Y, Shao Z, Tong T, Luo L. Blood vessel enhancement via multi-dictionary and sparse coding: Application to retinal vessel enhancing. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.03.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
48
Sironi A, Turetken E, Lepetit V, Fua P. Multiscale Centerline Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2016; 38:1327-1341. [PMID: 27295457 DOI: 10.1109/tpami.2015.2462363] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Finding the centerline and estimating the radius of linear structures is a critical first step in many applications, ranging from road delineation in 2D aerial images to modeling blood vessels, lung bronchi, and dendritic arbors in 3D biomedical image stacks. Existing techniques rely either on filters designed to respond to ideal cylindrical structures or on classification techniques. The former tend to become unreliable when the linear structures are very irregular, while the latter often have difficulties distinguishing centerline locations from neighboring ones, thus losing accuracy. We solve this problem by reformulating centerline detection as a regression problem. We first train regressors to return the distances to the closest centerline in scale-space, and we apply them to the input images or volumes. The centerlines and the corresponding scales then correspond to the regressors' local maxima, which can be easily identified. We show that our method outperforms state-of-the-art techniques on various 2D and 3D datasets. Moreover, our approach is very generic and also performs well on contour detection, where we show an improvement over recent contour detection algorithms on the BSDS500 dataset.
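A toy, single-scale illustration of the distance-regression idea is given below on synthetic data, with a random forest standing in for the paper's regressor. One deliberate simplification: the raw distance to the centerline is regressed, so centerlines appear as local minima (approximated here by a simple threshold) rather than maxima, and scale estimation and non-maximum suppression are skipped.

```python
# A toy distance-regression centerline detector (synthetic data, single scale).
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestRegressor

# Synthetic training image: a blurred horizontal bar whose known centerline provides
# the regression target (distance of every pixel to the nearest centerline pixel).
img = np.zeros((64, 64))
img[32, 8:56] = 1.0
centerline = img > 0
img = ndi.gaussian_filter(img, sigma=2.0)
target = ndi.distance_transform_edt(~centerline)

def patch_features(image, size=5):
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    # Flatten every size x size neighbourhood into one feature vector per pixel.
    return np.stack([padded[r:r + size, c:c + size].ravel()
                     for r in range(image.shape[0])
                     for c in range(image.shape[1])])

X, y = patch_features(img), target.ravel()
reg = RandomForestRegressor(n_estimators=20, random_state=0).fit(X, y)
pred = reg.predict(X).reshape(img.shape)
detected = pred < 0.8        # crude stand-in for taking local minima of the prediction
```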
49
Liu C, Ma J, Ma Y, Huang J. Retinal image registration via feature-guided Gaussian mixture model. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2016; 33:1267-1276. [PMID: 27409682 DOI: 10.1364/josaa.33.001267] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Registration of retinal images taken at different times, from different perspectives, or with different modalities is a critical prerequisite for the diagnosis and treatment of various eye diseases. This problem can be formulated as the registration of two sets of sparse feature points extracted from the given images. It is typically solved either by first creating a set of putative correspondences and then removing the false matches while estimating the spatial transformation between the image pairs, or by estimating the correspondences and transformation jointly in an iterative process. However, the former strategy suffers from missing true correspondences, and the latter does not make full use of local appearance information, which may be problematic for low-quality retinal images that lack reliable features. In this paper, we propose a feature-guided Gaussian mixture model (GMM) to address these issues. We formulate point registration as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set such that both the centers and the local features of the Gaussian densities are constrained to coincide with the other point set. The problem is solved under a unified maximum-likelihood framework together with an iterative expectation-maximization algorithm initialized by confident feature correspondences, with the image transformation modeled by an affine function. Extensive experiments on various retinal images show the robustness of our approach, which consistently outperforms other state-of-the-art methods, especially when the data are badly degraded.
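A heavily stripped-down sketch of GMM-based point registration with an affine model and EM is shown below. It omits the feature guidance, the outlier term, and the initialization from feature correspondences, so it illustrates only the joint correspondence-and-transformation loop; all parameter values and the toy example are assumptions.

```python
# A simplified EM loop for affine point-set registration (illustration only; not the
# paper's feature-guided formulation).
import numpy as np

def em_affine_register(X, Y, n_iter=30, sigma2=0.05):
    """Estimate A, t so that Y @ A.T + t approaches X (both are N x 2 point sets)."""
    A, t = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        TY = Y @ A.T + t
        # E-step: soft correspondences between each X point and the transformed Y points.
        d2 = ((X[:, None, :] - TY[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma2))
        P /= P.sum(axis=1, keepdims=True) + 1e-12
        Np = P.sum()
        # M-step: weighted least-squares update of the affine parameters.
        mu_x = P.sum(axis=1) @ X / Np
        mu_y = P.sum(axis=0) @ Y / Np
        Xc, Yc = X - mu_x, Y - mu_y
        A = (Xc.T @ P @ Yc) @ np.linalg.inv(Yc.T @ (P.sum(axis=0)[:, None] * Yc))
        t = mu_x - A @ mu_y
        resid = ((X[:, None, :] - (Y @ A.T + t)[None, :, :]) ** 2).sum(-1)
        sigma2 = max((P * resid).sum() / (2 * Np), 1e-6)   # shrink as the fit improves
    return A, t

# Toy usage: a mildly distorted copy of a random point set (no noise, no outliers).
rng = np.random.default_rng(0)
Y = rng.random((100, 2))
A_true, t_true = np.array([[1.05, 0.05], [-0.05, 0.95]]), np.array([0.1, -0.05])
X = Y @ A_true.T + t_true
A_est, t_est = em_affine_register(X, Y)    # estimated affine map and shift
```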
50
Soares I, Castelo-Branco M, Pinheiro AMG. Optic Disc Localization in Retinal Images Based on Cumulative Sum Fields. IEEE J Biomed Health Inform 2016; 20:574-85. [DOI: 10.1109/jbhi.2015.2392712] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]