1
van der Schot A, Sikkel E, Niekolaas M, Spaanderman M, de Jong G. Placental Vessel Segmentation Using Pix2pix Compared to U-Net. J Imaging 2023; 9:226. PMID: 37888333; PMCID: PMC10607321; DOI: 10.3390/jimaging9100226.
Abstract
Computer-assisted technologies have made significant progress in fetoscopic laser surgery, including placental vessel segmentation. However, the intra- and inter-procedure variability of state-of-the-art segmentation methods remains a significant hurdle. To address this, we investigated the use of conditional generative adversarial networks (cGANs) for fetoscopic image segmentation and compared their performance with the benchmark U-Net technique for placental vessel segmentation. Two deep-learning models, U-Net and pix2pix (a popular cGAN model), were trained and evaluated using a publicly available dataset and an internal validation set. Overall, the pix2pix model outperformed the U-Net model, with a Dice score of 0.80 [0.70; 0.86] versus 0.75 [0.60; 0.84] (p < 0.01) and an Intersection over Union (IoU) score of 0.70 [0.61; 0.77] versus 0.66 [0.53; 0.75] (p < 0.01). The internal validation dataset further confirmed the superiority of the pix2pix model, which achieved Dice and IoU scores of 0.68 [0.53; 0.79] and 0.59 [0.49; 0.69] (p < 0.01), respectively, while the U-Net model obtained scores of 0.53 [0.49; 0.64] and 0.49 [0.17; 0.56]. This study compared U-Net and pix2pix models for placental vessel segmentation in fetoscopic images, demonstrating improved results with the cGAN-based approach. However, the challenge of achieving generalizability still needs to be addressed.
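The Dice and IoU scores reported in this abstract are standard overlap metrics between a predicted binary mask and a ground-truth mask. A minimal NumPy sketch of how they are computed (generic definitions, not the authors' evaluation code; the toy 4×4 masks are invented for illustration):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, gt):
    """IoU = |A∩B| / |A∪B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: a predicted vessel map vs. its ground truth.
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
d = dice_score(pred, gt)   # 2*3 / (4+3) = 6/7
j = iou_score(pred, gt)    # 3 / 4
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is consistent with the paired scores quoted in the abstract.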
Affiliation(s)
- Anouk van der Schot
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Esther Sikkel
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Marèll Niekolaas
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Marc Spaanderman
- Obstetrics & Gynecology, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Obstetrics & Gynecology, Maastricht University Medical Center, 6229 ER Maastricht, The Netherlands
- Department of GROW, School for Oncology and Reproduction, Maastricht University, 6229 ER Maastricht, The Netherlands
- Guido de Jong
- 3D Lab, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
2
Huang Y, Deng T. Multi-level spatial-temporal and attentional information deep fusion network for retinal vessel segmentation. Phys Med Biol 2023; 68:195026. PMID: 37567227; DOI: 10.1088/1361-6560/acefa0.
Abstract
Objective. Automatic segmentation of fundus vessels can enhance the judgment ability of intelligent disease-diagnosis systems. Even though various methods have been proposed, accurately segmenting the fundus vessels remains a demanding task. The purpose of our study is to develop a robust and effective method to segment the vessels in human color retinal fundus images. Approach. We present a novel multi-level spatial-temporal and attentional information deep fusion network for the segmentation of retinal vessels, called MSAFNet, which enhances segmentation performance and robustness. Our method utilizes the multi-level spatial-temporal encoding module to obtain spatial-temporal information and the Self-Attention module to capture feature correlations at different levels of our network. Based on the encoder and decoder structure, we combine these features to obtain the final segmentation results. Main results. Through extensive experiments on four public datasets, our method achieves favorable performance compared with other state-of-the-art retinal vessel segmentation methods. Our Accuracy and Area Under Curve achieve the highest scores of 96.96%, 96.57%, 96.48% and 98.78%, 98.54%, 98.27% on the DRIVE, CHASE_DB1, and HRF datasets. Our Specificity achieves the highest scores of 98.58% and 99.08% on the DRIVE and STARE datasets. Significance. The experimental results demonstrate that our method has strong learning and representation capabilities and can accurately detect retinal blood vessels, thereby serving as a potential tool for assisting in diagnosis.
Affiliation(s)
- Yi Huang
- School of Information Science and Technology, Southwest Jiaotong University, 611756, Chengdu, People's Republic of China
- Tao Deng
- School of Information Science and Technology, Southwest Jiaotong University, 611756, Chengdu, People's Republic of China
3
Kim SO, Kim YC. Effects of Path-Finding Algorithms on the Labeling of the Centerlines of Circle of Willis Arteries. Tomography 2023; 9:1423-1433. PMID: 37489481; PMCID: PMC10366843; DOI: 10.3390/tomography9040113.
Abstract
Quantitative analysis of intracranial vessel segments typically requires identification of the vessels' centerlines, and a path-finding algorithm can be used to detect vessel segments' centerlines automatically. This study compared the performance of path-finding algorithms for vessel labeling. Three-dimensional (3D) time-of-flight magnetic resonance angiography (MRA) images from a publicly available dataset were considered for this study. After manual annotation of the endpoints of each vessel segment, three path-finding methods were compared: (Method 1) the depth-first search algorithm, (Method 2) Dijkstra's algorithm, and (Method 3) the A* algorithm. The rate of correctly found paths was quantified and compared among the three methods in each segment of the circle of Willis arteries. In the analysis of 840 vessel segments, Method 2 showed the highest accuracy (97.1%) of correctly found paths, while Methods 1 and 3 showed accuracies of 83.5% and 96.1%, respectively. Method 1 identified the AComm artery particularly inaccurately, with an accuracy of 43.2%. Incorrect paths from Method 2 were noted in the R-ICA, L-ICA, and R-PCA-P1 segments. Dijkstra's algorithm and the A* algorithm showed similar path-finding accuracy, and they were comparable in path-finding speed in the circle of Willis arterial segments.
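To illustrate the path-finding step the study evaluates (Method 2), here is a minimal Dijkstra sketch over a toy skeleton graph between two annotated endpoints. The node names and edge costs are invented for illustration; a real centerline graph would be built from the vessel skeleton voxels:

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by cumulative edge cost.
    graph maps each node to a list of (neighbor, cost) pairs."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk back from dst to src to reconstruct the centerline path.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Toy skeleton graph between two endpoints 'A' and 'D'.
graph = {
    "A": [("B", 1.0), ("C", 2.5)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
path, cost = dijkstra(graph, "A", "D")  # ['A', 'B', 'C', 'D'], cost 3.0
```

A* (Method 3) differs only in that the priority of each queue entry adds an admissible heuristic (e.g., Euclidean distance to the endpoint) to the accumulated cost, which typically explores fewer nodes.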
Affiliation(s)
- Se-On Kim
- Division of Digital Healthcare, College of Software and Digital Healthcare Convergence, Yonsei University, Wonju 26493, Republic of Korea
- Yoon-Chul Kim
- Division of Digital Healthcare, College of Software and Digital Healthcare Convergence, Yonsei University, Wonju 26493, Republic of Korea
4
Yang K, Chang S, Yuan J, Fu S, Qin G, Liu S, Liu K, Zhao Q, Xue L. Robust vessel segmentation in laser speckle contrast images based on semi-weakly supervised learning. Phys Med Biol 2023. PMID: 37327795; DOI: 10.1088/1361-6560/acdf37.
Abstract
OBJECTIVE The goal of this study is to develop a robust semi-weakly supervised learning strategy for vessel segmentation in laser speckle contrast imaging (LSCI), addressing the challenges associated with the low signal-to-noise ratio, small vessel size, and irregular vascular aberration in diseased regions, while improving the performance and robustness of the segmentation method. APPROACH For the training dataset, the healthy vascular images denoted as normal-vessel samples were manually labeled, while the diseased LSCI images involving tumor or embolism were denoted as abnormal-vessel samples and annotated as pseudo labels by the traditional semantic segmentation methods. In the training phase, the pseudo labels were constantly updated to improve the segmentation accuracy based on DeepLabv3+. Objective evaluation was conducted on the normal-vessel test set, while subjective evaluation was performed on the abnormal-vessel test set. MAIN RESULTS The proposed method achieved an IoU of 0.8671, a Dice of 0.9288, and a mean relative percentage difference (mRPD) with supervised learning of 0.5% in the objective evaluation. In the subjective evaluation, our method significantly outperformed other methods in main vessel segmentation, tiny vessel segmentation, and blood vessel connection. Additionally, our method exhibited robustness when abnormal-vessel style noise was added to normal-vessel samples using a style translation network. SIGNIFICANCE The proposed semi-weakly supervised learning strategy demonstrates high efficiency and excellent robustness for vascular segmentation in LSCI, providing a potential tool for assessing the morphological and structural features of vessels in clinical applications.
Affiliation(s)
- Kun Yang
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Shilong Chang
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Jiacheng Yuan
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Suzhong Fu
- Center for Molecular Imaging and Translational Medicine, Department of Laboratory Medicine, School of Public Health, Xiamen University, No. 4221-116, Xiang'an South Road, Xiang'an District, Xiamen, Fujian, 361005, China
- Geng Qin
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Shuang Liu
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Kun Liu
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
- Qingliang Zhao
- State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics, Xiamen University Xiang'an Campus, Xiang'an South Road, Xiamen, Fujian, 361102, China
- Linyan Xue
- College of Quality and Technical Supervision, Hebei University, 180 Wusi East Road, Baoding, Hebei, 071002, China
5
Zhang J, Sha D, Ma Y, Zhang D, Tan T, Xu X, Yi Q, Zhao Y. Joint conditional generative adversarial networks for eyelash artifact removal in ultra-wide-field fundus images. Front Cell Dev Biol 2023; 11:1181305. PMID: 37215081; PMCID: PMC10196374; DOI: 10.3389/fcell.2023.1181305.
Abstract
Background: Ultra-Wide-Field (UWF) fundus imaging is an essential diagnostic tool for identifying ophthalmologic diseases, as it captures detailed retinal structures within a wider field of view (FOV). However, the presence of eyelashes along the edge of the eyelids can cast shadows and obscure the view of fundus imaging, which hinders reliable interpretation and subsequent screening of fundus diseases. Despite its limitations, there are currently no effective methods or datasets available for removing eyelash artifacts from UWF fundus images. This research aims to develop an effective approach for eyelash artifact removal and thus improve the visual quality of UWF fundus images for accurate analysis and diagnosis. Methods: To address this issue, we first constructed two UWF fundus datasets: the paired synthetic eyelashes (PSE) dataset and the unpaired real eyelashes (uPRE) dataset. Then we proposed a deep learning architecture called Joint Conditional Generative Adversarial Networks (JcGAN) to remove eyelash artifacts from UWF fundus images. JcGAN employs a shared generator with two discriminators for joint learning of both real and synthetic eyelash artifacts. Furthermore, we designed a background refinement module that refines background information and is trained with the generator in an end-to-end manner. Results: Experimental results on both PSE and uPRE datasets demonstrate the superiority of the proposed JcGAN over several state-of-the-art deep learning approaches. Compared with the best existing method, JcGAN improves PSNR and SSIM by 4.82% and 0.23%, respectively. In addition, we also verified that eyelash artifact removal via JcGAN could significantly improve vessel segmentation performance in UWF fundus images. Assessment via vessel segmentation illustrates that the sensitivity, Dice coefficient and area under curve (AUC) of ResU-Net have respectively increased by 3.64%, 1.54%, and 1.43% after eyelash artifact removal using JcGAN. 
Conclusion: The proposed JcGAN effectively removes eyelash artifacts in UWF images, resulting in improved visibility of retinal vessels. Our method can facilitate better processing and analysis of retinal vessels and has the potential to improve diagnostic outcomes.
Affiliation(s)
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
- Dengfeng Sha
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
- Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Dan Zhang
- School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi’an Jiaotong University, Xi’an, China
- Zhejiang Research Institute of Xi’an Jiaotong University, Hangzhou, China
- Quanyong Yi
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
6
Shi M, Ji A. [Research on Vessel Segmentation and Extraction Method Based on Improved 2D Gabor Features]. Zhongguo Yi Liao Qi Xie Za Zhi 2023; 47:124-128. PMID: 37096462; DOI: 10.3969/j.issn.1671-7104.2023.02.002.
Abstract
This study proposed a vessel segmentation method based on Gabor features. According to the eigenvectors of the Hessian matrix at each pixel in the image, the vessel direction at each point was obtained to set the direction angle of the Gabor filter, and Gabor features for different vessel widths at each point were extracted to establish a 6D vector for each point. By reducing the dimension of the 6D vector, a 2D vector for each point was obtained and fused with the G channel of the original image. A U-Net neural network was used to classify the fused image to segment the vessels. The experimental results of this method on the DRIVE dataset showed that it performs well in detecting small vessels and vessels at intersections.
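The Hessian-based direction estimate this abstract describes can be sketched with finite differences: for each pixel, the eigenvector of the 2×2 Hessian with the larger (least negative) eigenvalue runs along a bright ridge-like vessel. This is a generic NumPy sketch of only that step (the Gabor feature bank, dimensionality reduction, and U-Net classification are omitted), and the synthetic test image is invented:

```python
import numpy as np

def vessel_direction(img):
    """Per-pixel vessel direction in radians from the x-axis.

    For the symmetric Hessian [[gxx, gxy], [gxy, gyy]], the eigenvector of
    the larger eigenvalue lies at angle 0.5 * atan2(2*gxy, gxx - gyy); on a
    bright ridge this eigenvalue is near zero and points along the vessel."""
    gy, gx = np.gradient(img.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)

# Synthetic image with one bright vertical "vessel" at column 5.
img = np.zeros((11, 11))
img[:, 5] = 1.0
theta = vessel_direction(img)
# At the vessel centre the estimated direction is vertical (pi/2 radians).
```

In the paper's pipeline this angle would then steer the orientation of the Gabor filter at each pixel.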
Affiliation(s)
- Mao Shi
- Lab of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing, 210016
- Aihong Ji
- Lab of Locomotion Bioinspiration and Intelligent Robots, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics & Astronautics, Nanjing, 210016
7
Liu Y, Carass A, Zuo L, He Y, Han S, Gregori L, Murray S, Mishra R, Lei J, Calabresi PA, Saidha S, Prince JL. Disentangled Representation Learning for OCTA Vessel Segmentation With Limited Training Data. IEEE Trans Med Imaging 2022; 41:3686-3698. PMID: 35862335; PMCID: PMC9910788; DOI: 10.1109/tmi.2022.3193029.
Abstract
Optical coherence tomography angiography (OCTA) is an imaging modality that can be used for analyzing retinal vasculature. Quantitative assessment of en face OCTA images requires accurate segmentation of the capillaries. Using deep learning approaches for this task faces two major challenges. First, acquiring sufficient manual delineations for training can take hundreds of hours. Second, OCTA images suffer from numerous contrast-related artifacts that are currently inherent to the modality and vary dramatically across scanners. We propose to solve both problems by learning a disentanglement of an anatomy component and a local contrast component from paired OCTA scans. With the contrast removed from the anatomy component, a deep learning model that takes the anatomy component as input can learn to segment vessels with a limited portion of the training images being manually labeled. Our method demonstrates state-of-the-art performance for OCTA vessel segmentation.
8
Tang W, Deng H, Yin S. CPMF-Net: Multi-Feature Network Based on Collaborative Patches for Retinal Vessel Segmentation. Sensors (Basel) 2022; 22:9210. PMID: 36501911; PMCID: PMC9736046; DOI: 10.3390/s22239210.
Abstract
As an important basis of clinical diagnosis, the morphology of retinal vessels is very useful for the early diagnosis of some eye diseases. In recent years, with the rapid development of deep learning technology, automatic segmentation methods based on it have made considerable progress in the field of retinal blood vessel segmentation. However, due to the complexity of vessel structure and the poor quality of some images, retinal vessel segmentation, especially the segmentation of capillaries, is still a challenging task. In this work, we propose a new retinal blood vessel segmentation method, called multi-feature segmentation, based on collaborative patches. First, we design a new collaborative patch training method which effectively compensates for the pixel information loss in patch extraction through information transmission between collaborative patches. Additionally, the collaborative patch training strategy simultaneously offers low memory occupancy, a simple structure, and high accuracy. Then, we design a multi-feature network to gather a variety of information features. The hierarchical network structure, together with the integration of the adaptive coordinate attention module and the gated self-attention module, enables these rich information features to be used for segmentation. Finally, we evaluate the proposed method on two public datasets, namely DRIVE and STARE, and compare the results of our method with those of nine other advanced methods. The results show that our method outperforms the existing methods.
9
Rodrigues EO, Rodrigues LO, Machado JHP, Casanova D, Teixeira M, Oliva JT, Bernardes G, Liatsis P. Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation. J Imaging 2022; 8:291. PMID: 36286385; PMCID: PMC9604711; DOI: 10.3390/jimaging8100291.
Abstract
A retinal vessel analysis is a procedure that can be used as an assessment of risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against the baseline thresholded Frangi filter response, a naive connectivity filter, the naive connectivity filter combined with morphological closing, and current approaches in the literature. The proposal achieved competitive results on a variety of multimodal datasets. It was robust enough to outperform all the state-of-the-art approaches in the literature for the OSIRIX angiographic dataset in terms of accuracy, 4 out of 5 works in the case of the IOSTAR dataset, several works in the case of the DRIVE and STARE datasets, and 6 out of 10 in the CHASE-DB dataset. For CHASE-DB, it also outperformed all the state-of-the-art unsupervised methods.
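For context, a naive connectivity filter of the kind this abstract uses as a baseline can be sketched with connected-component labeling: threshold the vesselness response, then discard small isolated components as noise. This is a generic SciPy sketch under invented parameters (threshold, minimum size, toy response map); the LS-CF's local tolerance heuristic for bridging discontinuities is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def connectivity_filter(response, thresh=0.5, min_size=5):
    """Threshold a vesselness response, then drop 4-connected components
    smaller than `min_size` pixels (likely noise rather than vessels)."""
    binary = response >= thresh
    labels, n = ndimage.label(binary)                 # default 4-connectivity
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.zeros_like(binary)
    for i, s in enumerate(sizes, start=1):
        if s >= min_size:
            keep[labels == i] = True
    return keep

# Toy vesselness map: one 6-pixel "vessel" plus a 2-pixel speckle.
resp = np.zeros((8, 8))
resp[2, 1:7] = 0.9       # horizontal vessel, 6 pixels
resp[6, 6] = 0.8         # isolated speckle
resp[6, 7] = 0.8
out = connectivity_filter(resp, thresh=0.5, min_size=5)
```

The speckle component is removed while the vessel survives; the LS-CF goes further by also reconnecting fragments that such a size filter would leave broken.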
Affiliation(s)
- Erick O. Rodrigues
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Lucas O. Rodrigues
- Graduate Program of Sciences Applied to Health Products, Universidade Federal Fluminense (UFF), Niteroi 24241-000, RJ, Brazil
- João H. P. Machado
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Dalcimar Casanova
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Marcelo Teixeira
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Jeferson T. Oliva
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Giovani Bernardes
- Institute of Technological Sciences (ICT), Universidade Federal de Itajuba (UNIFEI), Itabira 35903-087, MG, Brazil
- Panos Liatsis
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
10
Yu X, Ge C, Aziz MZ, Li M, Shum PP, Liu L, Mo J. CGNet-assisted Automatic Vessel Segmentation for Optical Coherence Tomography Angiography. J Biophotonics 2022; 15:e202200067. PMID: 35704010; DOI: 10.1002/jbio.202200067.
Abstract
Automatic optical coherence tomography angiography (OCTA) vessel segmentation is of great significance to retinal disease diagnosis. Owing to the complex vascular structure, however, various factors make the segmentation task challenging. This paper reports a novel end-to-end three-stage channel and position attention (CPA) module integrated graph reasoning convolutional neural network (CGNet) for retinal OCTA vessel segmentation. Specifically, in the coarse stage, both CPA and graph reasoning network (GRN) modules are integrated between a U-shaped neural network encoder and decoder to acquire vessel confidence maps. After being directed into a fine stage, such confidence maps are concatenated with the original image and the generated fine image map as a three-channel image to refine the retinal micro-vasculature. Finally, both the fine and refined images are fused at the refining stage as the segmentation results. Experiments with different public datasets are conducted to verify the efficacy of the proposed CGNet. Results show that by employing the end-to-end training scheme and the integrated CPA and GRN modules, CGNet achieves 94.29% and 85.62% in area under the ROC curve (AUC) for the two different datasets, outperforming the state-of-the-art methods with both improved operability and reduced complexity in different cases. Code is available at https://github.com/GE-123-cpu/CGnet-for-vessel-segmentation.
Affiliation(s)
- Xiaojun Yu
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Shenzhen Research Institute of Northwestern Polytechnical University, Shenzhen, Guangdong, China
- Chenkun Ge
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Mingshuai Li
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Perry Ping Shum
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Linbo Liu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Jianhua Mo
- School of Electronics and Information Engineering, Soochow University, Suzhou, China
11
Kakileti ST, Shrivastava R, Manjunath G, Vidyasagar M, Graewingholt A. Automated vascular analysis of breast thermograms with interpretable features. J Med Imaging (Bellingham) 2022; 9:044502. PMID: 35937560; PMCID: PMC9350687; DOI: 10.1117/1.jmi.9.4.044502.
Abstract
Purpose: Vascular changes are observed from the initial stages of breast cancer, and monitoring of vessel structures helps in the early detection of malignancies. In recent years, thermal imaging has been evaluated as a low-cost imaging modality to visualize and analyze early vascularity. However, visual inspection of thermal vascularity is challenging and subjective. Therefore, there is a need for automated techniques that assist physicians in the visualization and interpretation of vascularity by marking the vessel structures and by providing quantified qualitative parameters that help in malignancy classification. Approach: In the literature, there are very few approaches for vascular analysis and classification of breast thermal images using interpretable vascular features. One major challenge is the automated detection of breast vascularity due to diffuse vessel boundaries. We first propose a deep learning-based semantic segmentation approach that generates heatmaps of vessel structures from two-dimensional breast thermal images for quantitative assessment of breast vascularity. Second, we extract interpretable vascular parameters and propose a classifier to predict the likelihood of breast cancer purely from the extracted vascular parameters. Results: The results of the cancer classifier were validated using an independent clinical dataset consisting of 258 participants. The results were encouraging, as the proposed approach segmented vessels well and gave good classification performance, with an area under the receiver operating characteristic curve of 0.85 using the proposed vascularity parameters. Conclusions: The detected vasculature and its associated high classification performance show the utility of the proposed approach in the interpretation of breast vascularity.
12
Han T, Ai D, Wang Y, Bian Y, An R, Fan J, Song H, Xie H, Yang J. Recursive Centerline- and Direction-Aware Joint Learning Network with Ensemble Strategy for Vessel Segmentation in X-ray Angiography Images. Comput Methods Programs Biomed 2022; 220:106787. PMID: 35436660; DOI: 10.1016/j.cmpb.2022.106787.
Abstract
BACKGROUND AND OBJECTIVE Automatic vessel segmentation from X-ray angiography (XRA) images is an important research topic for the diagnosis and treatment of cardiovascular disease. The main challenge is how to extract continuous and complete vessel structures from XRA images with poor quality and high complexity. Most existing methods predominantly focus on pixel-wise segmentation and overlook geometric features, resulting in breaks and missing segments in the results. To improve the completeness and accuracy of vessel segmentation, we propose a recursive joint learning network embedded with geometric features. METHODS The network joins centerline- and direction-aware auxiliary tasks with the primary segmentation task, which guides the network to explore the geometric features of vessel connectivity. Moreover, a recursive learning strategy is designed by passing the previous segmentation result into the same network iteratively to improve segmentation. To further enhance connectivity, we present a complementary-task ensemble strategy that fuses the outputs of the three tasks into the final segmentation result by majority voting. RESULTS To validate the effectiveness of our method, we conduct qualitative and quantitative experiments on XRA images of the coronary artery and aorta, including the aortic arch, thoracic aorta, and abdominal aorta. Our method achieves F1 scores of 85.61±3.48% for the coronary artery, 89.02±2.89% for the aortic arch, 88.22±3.33% for the thoracic aorta, and 83.12±4.61% for the abdominal aorta. CONCLUSIONS Compared with six state-of-the-art methods, our method shows the most complete and accurate vessel segmentation results.
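The majority-voting fusion of the three task outputs can be sketched as follows. This is a generic NumPy sketch of pixel-wise voting over three binary maps; how the centerline and direction outputs are converted to binary vessel maps is a detail of the paper not reproduced here, and the toy arrays are invented:

```python
import numpy as np

def majority_vote(seg, centerline_out, direction_out):
    """Fuse three binary predictions: a pixel is labeled vessel
    when at least two of the three task outputs agree."""
    stack = np.stack([seg, centerline_out, direction_out]).astype(int)
    return stack.sum(axis=0) >= 2

# Toy 2x3 binary outputs from the three tasks.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
c = np.array([[1, 1, 1], [0, 0, 0]])
fused = majority_vote(a, b, c)
# votes per pixel: [[3,2,1],[0,2,1]] -> fused [[1,1,0],[0,1,0]]
```

Voting lets two agreeing tasks overrule a spurious prediction (or omission) by the third, which is how the ensemble improves connectivity.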
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Yonglin Bian
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Ruirui An
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hongzhi Xie
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
|
13
|
Meng C, Xu Y, Li N, Li Y, Ren L, Xia K. Incremental robust PCA for vessel segmentation in DSA sequences. Biomed Phys Eng Express 2022; 8. [PMID: 35439744] [DOI: 10.1088/2057-1976/ac682b]
Abstract
In interventional surgery, DSA images provide a new way to observe the vessels and catheters inside the patient. Rapidly extracting the coronary arteries from the dynamic, complex background directly improves effectiveness in clinical interventional surgery. This article proposes an incremental robust principal component analysis (IRPCA) method to extract contrast-filled vessels from x-ray coronary angiograms. RPCA is a matrix decomposition method that splits a video matrix into foreground and background, and is commonly used to model complex backgrounds and extract target objects. IRPCA pre-optimizes an x-ray image sequence. When a new x-ray sequence is received, IRPCA optimizes it against the pre-optimized matrix by minimizing an energy function to obtain the foreground matrix of the new sequence. In addition, based on the idea that the new x-ray sequence introduces new information to the pre-optimized matrix, we propose UIRPCA to improve the performance of IRPCA. Compared with the traditional RPCA method, IRPCA and UIRPCA save much time while keeping other indicators essentially unchanged. Experimental results on real data show the superiority of the proposed method over other RPCA algorithms.
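The RPCA decomposition underlying this entry splits the frame matrix (one vectorized frame per column) into a low-rank background plus a sparse foreground. A minimal batch principal-component-pursuit sketch via the standard inexact augmented-Lagrangian iteration (illustrative only; this is not the paper's incremental IRPCA/UIRPCA variant, and the parameter defaults are conventional choices, not the paper's):

```python
import numpy as np

def rpca_pcp(X, lam=None, tol=1e-7, max_iter=500):
    """Decompose X ~ L + S with L low-rank (background) and S sparse
    (moving vessels/catheter), via inexact ALM principal component pursuit."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # standard sparsity weight
    sigma1 = np.linalg.svd(X, compute_uv=False)[0]
    Y = X / max(sigma1, np.abs(X).max() / lam)  # dual variable init
    mu, rho = 1.25 / sigma1, 1.5                # penalty and its growth rate
    S = np.zeros_like(X)
    for _ in range(max_iter):
        # Singular-value thresholding -> low-rank background L
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Soft shrinkage -> sparse foreground S
        R = X - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Z = X - L - S                            # primal residual
        Y = Y + mu * Z                           # dual update
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(Z) <= tol * np.linalg.norm(X):
            break
    return L, S
```

The incremental variants in the paper avoid re-running this full batch optimization when a new sequence arrives.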
Affiliation(s)
- Cai Meng
- Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, People's Republic of China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100083, People's Republic of China
- Yizhou Xu
- Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, People's Republic of China
- Ning Li
- Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, People's Republic of China
- Yanggang Li
- Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, People's Republic of China
- Longfei Ren
- Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, People's Republic of China
- Kun Xia
- Beijing Chaoyang Hospital, Medical University of Capital Science, Beijing 100020, People's Republic of China

14
Toufeeq S, Gottlob I, Tu Z, Proudlock FA, Pilat A. Abnormal Retinal Vessel Architecture in Albinism and Idiopathic Infantile Nystagmus. Invest Ophthalmol Vis Sci 2022; 63:33. [PMID: 35616929] [PMCID: PMC9150830] [DOI: 10.1167/iovs.63.5.33]
Abstract
Purpose Infantile nystagmus syndrome (INS) causes altered visual development and can be associated with abnormal retinal structure, to which the vascular development of the retina is closely related. Abnormal retinal vasculature has previously been noted in albinism but not in idiopathic infantile nystagmus. We compared the number and diameter of retinal vessels in participants with albinism (PWA) and idiopathic infantile nystagmus (PWIIN) with controls. Methods Fundus photography data from 24 PWA, 10 PWIIN, and 34 controls were analyzed using Automated Retinal Image Analyzer (ARIA) software on a field of analysis centered on the optic disc, an annulus extending between 4.2 mm and 8.4 mm in diameter. Results Compared with controls, the mean number of arterial branches was reduced by 24% in PWA (15.5 vs. 20.3, P < 0.001), and venous branches were reduced in both PWA (29%; 12.9 vs. 18.2, P < 0.001) and PWIIN (17%; 15.1 vs. 18.2, P = 0.024). PWA demonstrated 7% thinner "primary" (before branching) arteries (mean diameter: 75.39 µm vs. 80.88 µm, P = 0.043) and 13% thicker "secondary" (after branching) veins (66.72 µm vs. 59.01 µm in controls, P = 0.009). Conclusions PWA and PWIIN demonstrated reduced retinal vessel counts and arterial diameters compared with controls. These changes in the superficial retinal vascular network may be secondary to underdevelopment of the neuronal network, which guides vascular development and is also known to be disrupted in INS.
Affiliation(s)
- Shafak Toufeeq
- Oxford Eye Hospital, Level LG1 John Radcliffe Hospital, Headley Way, Headington, Oxford OX3 9DU, United Kingdom
- Irene Gottlob
- Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Zhanhan Tu
- Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Frank A. Proudlock
- Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Anastasia Pilat
- Ulverscroft Eye Unit, Department of Neuroscience, Psychology and Behaviour, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom

15
Sun M, Wang Y, Fu Z, Li L, Liu Y, Zhao X. A Machine Learning Method for Automated In Vivo Transparent Vessel Segmentation and Identification Based on Blood Flow Characteristics. Microsc Microanal 2022; 28:1-14. [PMID: 35387704] [DOI: 10.1017/s1431927622000514]
Abstract
In vivo transparent vessel segmentation is important to life science research. However, this task remains very challenging because of the fuzzy edges and the barely noticeable tubular characteristics of vessels under a light microscope. In this paper, we present a new machine learning method based on blood flow characteristics to segment the global vascular structure in vivo. Specifically, videos of blood flow in transparent vessels are used as input. A machine learning classifier labels vessel pixels using motion features extracted from moving red blood cells, and segmentation is completed with a region-growing algorithm. Moreover, we utilize the motion characteristics of blood flow to distinguish between vessel types, including arteries, veins, and capillaries. In the experiments, we evaluate the performance of our method on videos of zebrafish embryos. The results indicate high segmentation accuracy, with an average of 97.98%, which is superior to other segmentation and motion-detection algorithms. Our method is also robust to input videos with various temporal resolutions, down to a minimum of 3.125 fps.
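The region-growing step mentioned here can be sketched as a breadth-first flood fill over a pixel-wise vessel-score map (e.g., classifier probabilities derived from the motion features). The function, threshold, and connectivity below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from collections import deque

def region_grow(prob_map, seed, thresh=0.5):
    """Grow a vessel region from `seed` (row, col), keeping 4-connected
    pixels whose vessel score exceeds `thresh`."""
    h, w = prob_map.shape
    grown = np.zeros((h, w), dtype=bool)
    if prob_map[seed] <= thresh:        # seed itself must qualify
        return grown
    grown[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and not grown[ny, nx]
                    and prob_map[ny, nx] > thresh):
                grown[ny, nx] = True
                q.append((ny, nx))
    return grown
```

Seeds would typically be placed at high-confidence motion pixels, so the grown region traces the connected vessel while rejecting static background.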
Affiliation(s)
- Mingzhu Sun
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Yiwen Wang
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Zhenhua Fu
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Lu Li
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Yaowei Liu
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Xin Zhao
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China

16
Chakshu NK, Carson JM, Sazonov I, Nithiarasu P. Automating fractional flow reserve (FFR) calculation from CT scans: A rapid workflow using unsupervised learning and computational fluid dynamics. Int J Numer Method Biomed Eng 2022; 38:e3559. [PMID: 34865317] [DOI: 10.1002/cnm.3559]
Abstract
Fractional flow reserve (FFR) provides the functional relevance of coronary atheroma. The FFR-guided strategy has been shown to reduce unnecessary stenting, improve overall health outcomes, and be cost-saving. Non-invasive, coronary computerised tomography (CT) angiography-derived FFR (cFFR) is an emerging method for reducing invasive catheter-based measurements. This computational fluid dynamics-based method is laborious, as it requires multidisciplinary expertise combining image analysis and computational mechanics. In this work, we present a rapid method, powered by unsupervised learning, to automatically calculate cFFR from CT scans without manual intervention.
Affiliation(s)
- Neeraj Kavan Chakshu
- Biomedical Engineering Group, Zienkiewicz Centre for Computational Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Jason M Carson
- Biomedical Engineering Group, Zienkiewicz Centre for Computational Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Igor Sazonov
- Biomedical Engineering Group, Zienkiewicz Centre for Computational Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK
- Perumal Nithiarasu
- Biomedical Engineering Group, Zienkiewicz Centre for Computational Engineering, Faculty of Science and Engineering, Swansea University, Swansea, UK

17
Chen H, Shi Y, Bo B, Zhao D, Miao P, Tong S, Wang C. Real-Time Cerebral Vessel Segmentation in Laser Speckle Contrast Image Based on Unsupervised Domain Adaptation. Front Neurosci 2021; 15:755198. [PMID: 34916898] [PMCID: PMC8669333] [DOI: 10.3389/fnins.2021.755198]
Abstract
Laser speckle contrast imaging (LSCI) is a full-field, high-spatiotemporal-resolution, and low-cost optical technique for measuring blood flow, which has been successfully used for neurovascular imaging. However, due to the low signal-to-noise ratio and the relatively small vessel sizes, segmenting the cerebral vessels in LSCI has always been a technical challenge. Recently, deep learning has shown its advantages in vascular segmentation. Nonetheless, ground truth from manual labeling is usually required for training the network, which makes it difficult to implement in practice. In this manuscript, we propose a deep learning-based method for real-time cerebral vessel segmentation of LSCI without ground truth labels, which could be further integrated into an intraoperative blood vessel imaging system. Synthetic LSCI images were obtained with a synthesis network from LSCI images and the public labeled Digital Retinal Images for Vessel Extraction (DRIVE) dataset, and were then used to train the segmentation network. Using matching strategies to reduce the size discrepancy between retinal images and laser speckle contrast images, we could further significantly improve image synthesis and segmentation performance. On test LSCI images of rodent cerebral vessels, the proposed method achieved a Dice similarity coefficient of over 75%.
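The Dice similarity coefficient used to report performance here (and in several other entries in this list) is computed directly from two binary masks as 2|P ∩ T| / (|P| + |T|). A minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.
    `eps` guards against division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap; the "over 75%" figure reported above corresponds to a Dice value above 0.75.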
Affiliation(s)
- Heping Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
- Yan Shi
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Bo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Denghui Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Peng Miao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shanbao Tong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chunliang Wang
- School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden

18
Wang YY, Glinskii OV, Bunyak F, Palaniappan K. Ensemble of Deep Learning Cascades for Segmentation of Blood Vessels in Confocal Microscopy Images. IEEE Appl Imag Pattern Recognit Workshop 2021. [PMID: 35506042] [PMCID: PMC9060211] [DOI: 10.1109/aipr52630.2021.9762193]
Abstract
Detection, segmentation, and quantification of microvascular structures are the main steps towards studying microvascular remodeling. Combined with appropriate staining, confocal microscopy imaging enables exploration of the full 3D anatomical characteristics of microvascular systems. Segmentation of confocal microscopy images is a challenging task due to the complexity of anatomical structures, staining and imaging issues, and the lack of annotated training data. In this paper, we propose a deep learning system for robust segmentation of the cranial vasculature of mice in confocal microscopy images. The proposed system is an ensemble of two deep-learning cascades, each consisting of two coarse-to-fine subnetworks with skip connections in between. One cascade aims to improve the sensitivity of the segmentation results, while the other aims to improve their precision. Our experiments on mouse cranial vasculature showed promising results, achieving a segmentation accuracy of 92.02% and a Dice score of 81.45%, despite the system being trained on very limited confocal microscopy data.
Affiliation(s)
- Yang Yang Wang
- Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO, USA
- O V Glinskii
- Department of Medical Pharmacology and Physiology, University of Missouri-Columbia, MO, USA
- Dalton Cardiovascular Research Center, University of Missouri-Columbia, MO, USA
- Filiz Bunyak
- Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO, USA
- Kannappan Palaniappan
- Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO, USA

19
Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. [PMID: 34178986] [PMCID: PMC8226261] [DOI: 10.3389/fcell.2021.659941]
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases; automatic artery/vein (A/V) classification is therefore particularly important for medical image analysis and clinical decision making. However, current methods still have limitations in A/V classification, especially vessel-edge and vessel-end errors caused by single-scale processing and the blurred boundary between arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net) that utilizes vessel distribution and edge information to enhance A/V classification, a high-precision A/V classification model based on data fusion. In particular, the VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, the VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales to improve the feature extraction capability and robustness of the model. The VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets: Tongren and Kailuan. We achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for the arteries and veins, respectively, on the DRIVE dataset. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation tasks compared with state-of-the-art methods. Finally, we test on the Kailuan dataset with models trained on the other, fused datasets; the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhaohui Cao
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Guang Wu
- Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China; Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China

20
Jiang Z, Lei Y, Zhang L, Ni W, Gao C, Gao X, Yang H, Su J, Xiao W, Yu J, Gu Y. Automated Quantitative Analysis of Blood Flow in Extracranial-Intracranial Arterial Bypass Based on Indocyanine Green Angiography. Front Surg 2021; 8:649719. [PMID: 34179066] [PMCID: PMC8225942] [DOI: 10.3389/fsurg.2021.649719]
Abstract
Microvascular imaging based on indocyanine green is an important tool for surgeons who carry out extracranial–intracranial arterial bypass surgery. In terms of blood perfusion, indocyanine green images contain abundant information that cannot be effectively interpreted by humans or currently available commercial software. In this paper, an automatic processing framework for perfusion assessment based on indocyanine green videos is proposed, consisting of three stages: vessel segmentation based on the U-Net deep neural network, preoperative and postoperative image registration based on scale-invariant feature transform (SIFT) features, and blood flow evaluation based on the Horn–Schunck optical flow method. This automatic pipeline can reveal the blood flow direction and intensity curve of any vessel, as well as the blood perfusion changes before and after an operation. Commercial software embedded in a microscope is used as a reference to evaluate the effectiveness of the algorithm. A total of 120 patients from multiple centers were sampled for the study. For blood vessel segmentation, a Dice coefficient of 0.80 and a Jaccard coefficient of 0.73 were obtained. For image registration, the success rate was 81%. In preoperative and postoperative video processing, the coincidence rates between the automatic processing method and commercial software were 89% and 87%, respectively. The proposed framework not only achieves blood perfusion analysis similar to that of commercial software but also automatically detects and matches blood vessels before and after an operation, thus quantifying the flow direction and enabling surgeons to intuitively evaluate the perfusion changes caused by bypass surgery.
Affiliation(s)
- Zhuoyun Jiang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yu Lei
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Liqiong Zhang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Wei Ni
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Chao Gao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Xinjie Gao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Heng Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jiabin Su
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Weiping Xiao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yuxiang Gu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China

21
Chen Y, Fan S, Chen Y, Che C, Cao X, He X, Song X, Zhao F. Vessel segmentation from volumetric images: a multi-scale double-pathway network with class-balanced loss at the voxel level. Med Phys 2021; 48:3804-3814. [PMID: 33969487] [DOI: 10.1002/mp.14934]
Abstract
PURPOSE Vessel segmentation from volumetric medical images is becoming an essential pre-step in aiding diagnosis, guiding therapy, and managing patients with vascular-related diseases. Deep learning-based methods have drawn much attention, but most of them do not fully utilize the multi-scale spatial information of vessels. To address this shortcoming, we propose a multi-scale network similar to the well-known multi-scale DeepMedic; it includes a double-pathway architecture and a class-balanced loss at the voxel level (MDNet-Vb) to achieve both computational efficiency and segmentation accuracy. METHODS The proposed network consists of two parallel pathways that learn multi-scale vessel morphology. Specifically, the normal-resolution pathway uses a three-dimensional (3D) U-Net fed with small inputs to learn local details with relatively small storage and time consumption. The low-resolution pathway employs a 3D fully convolutional network (FCN) fed with downsampled large inputs to learn the overall spatial relationships between vessels and adjacent tissues, and the morphological information of large vessels. To cope with the class-imbalance issue in vessel segmentation, we propose a class-balanced loss at the voxel level with a uniform sampling strategy. The class-balanced loss re-balances the loss function with a coefficient that is inversely proportional to the normalized effective number, at the voxel level, of each class. The uniform sampling strategy extracts training data by sampling uniformly from the two classes in every epoch. RESULTS Our MDNet-Vb outperforms several state-of-the-art methods, including ResNet, DenseNet, 3D U-Net, V-Net, and DeepMedic, with the highest Dice coefficients of 72.91% and 69.32% on a cardiac computed tomography angiography (CTA) dataset and a cerebral magnetic resonance angiography (MRA) dataset, respectively. Among four different double-pathway networks, our network (3D U-Net + 3D FCN) not only has the fewest training parameters and the shortest training time but also achieves competitive Dice coefficients on both the CTA and MRA datasets. Compared with classical losses, our class-balanced focal loss (FL-Vb) and Dice coefficient loss at the voxel level (Dsc-Vb) alleviate the class-imbalance issue by improving both the sensitivity and the Dice coefficient on the CTA and MRA datasets. Moreover, simultaneously training on the two datasets shows that our method has the highest Dice coefficients of 73.06% and 65.40% on the CTA and MRA datasets, respectively, outperforming commonly used methods such as U-Net and DeepMedic, which demonstrates the generalization potential of our network for segmenting different blood vessels. CONCLUSIONS Our MDNet-Vb method demonstrates its superiority over other state-of-the-art methods on both cardiac CTA and cerebral MRA datasets. Architecturally, MDNet-Vb combines the 3D U-Net and 3D FCN, which dramatically reduces the network parameters while maintaining segmentation accuracy. The class-balanced loss at the voxel level further improves accuracy by properly alleviating the class imbalance between classes. In summary, MDNet-Vb is promising for vessel segmentation from various volumetric medical images.
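The abstract above describes per-class loss weights inversely proportional to a normalized effective number of voxels. One common recipe for such weights is the effective-number formulation of Cui et al. (CVPR 2019); whether this paper uses exactly that form is an assumption, and the function below is an illustrative sketch only:

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Per-class weights inversely proportional to the effective number
    of samples E_n = (1 - beta**n) / (1 - beta), normalized so the
    weights sum to the number of classes."""
    counts = np.asarray(counts, dtype=float)
    eff_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    w = 1.0 / eff_num
    return w * len(counts) / w.sum()   # rare classes get larger weights
```

These weights would multiply the per-voxel cross-entropy (or focal) terms of each class, boosting the contribution of the scarce vessel voxels.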
Affiliation(s)
- Yibing Chen
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Siqi Fan
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Yongfeng Chen
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Chang Che
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Xin Cao
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Xiaowei He
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Xiaolei Song
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China
- Fengjun Zhao
- Xi'an Key Lab of Radiomics and Intelligent Perception, School of Information Sciences and Technology, Northwest University, Xi'an, Shaanxi, 710069, China

22
Jo HC, Jeong H, Lee J, Na KS, Kim DY. Quantification of Blood Flow Velocity in the Human Conjunctival Microvessels Using Deep Learning-Based Stabilization Algorithm. Sensors (Basel) 2021; 21:s21093224. [PMID: 34066590] [PMCID: PMC8124391] [DOI: 10.3390/s21093224]
Abstract
The quantification of blood flow velocity in the human conjunctiva is clinically essential for assessing microvascular hemodynamics. Since the conjunctival microvessels are imaged over several seconds, eye motion during image acquisition causes artifacts that limit the accuracy of image segmentation and the measurement of blood flow velocity. In this paper, we introduce a novel customized optical imaging system for the human conjunctiva with deep learning-based segmentation and motion correction. Image segmentation is performed with the Attention-UNet architecture to achieve high segmentation performance on conjunctiva images with motion blur. Motion correction proceeds in two steps, registration and template matching, to correct for large displacements and fine movements. The image displacement values decrease to 4–7 μm during registration (first step) and to less than 1 μm during template matching (second step). With the corrected images, the blood flow velocity is calculated for selected vessels, considering temporal signal variances and vessel lengths. These methods for resolving motion artifacts contribute insights into studies quantifying the hemodynamics of the conjunctiva as well as other tissues.
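The template-matching step for fine motion correction can be sketched with brute-force normalized cross-correlation; the function below is illustrative only (a real pipeline would use an FFT-based or library routine such as OpenCV's matchTemplate), and its names are not taken from the paper:

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by zero-mean normalized
    cross-correlation; return the (row, col) of the best-matching
    top-left corner."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            pnorm = np.sqrt((p * p).sum())
            score = (p * t).sum() / (pnorm * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Shifting each frame by the offset between `best_pos` and the template's reference position realigns the sequence to sub-pixel-scale residual motion.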
Affiliation(s)
- Hang-Chan Jo
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Center for Sensor Systems, Inha University, Incheon 22212, Korea
- Hyeonwoo Jeong
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Junhyuk Lee
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Kyung-Sun Na
- Department of Ophthalmology & Visual Science, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Correspondence: (K.-S.N.); (D.-Y.K.); Tel.: +82-02-3779-1520 (K.-S.N.); +82-32-860-7394 (D.-Y.K.)
- Dae-Yu Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Center for Sensor Systems, Inha University, Incheon 22212, Korea
- Inha Research Institute for Aerospace Medicine, Inha University, Incheon 22212, Korea
23
Tetteh G, Efremov V, Forkert ND, Schneider M, Kirschke J, Weber B, Zimmer C, Piraud M, Menze BH. DeepVesselNet: Vessel Segmentation, Centerline Prediction, and Bifurcation Detection in 3-D Angiographic Volumes. Front Neurosci 2020; 14:592352. [PMID: 33363452] [PMCID: PMC7753013] [DOI: 10.3389/fnins.2020.592352]
Abstract
We present DeepVesselNet, an architecture tailored to the challenges of extracting vessel trees and networks, and their corresponding features, from 3-D angiographic volumes using deep learning. We discuss the problems of low execution speed and high memory requirements associated with full 3-D networks, the severe class imbalance arising from the low percentage (<3%) of vessel voxels, and the unavailability of accurately annotated 3-D training data, and offer solutions as the building blocks of DeepVesselNet. First, we formulate 2-D orthogonal cross-hair filters that make use of 3-D context information at a reduced computational burden. Second, we introduce a class-balancing cross-entropy loss function with false-positive-rate correction to handle the class imbalance and the high false-positive rates associated with existing loss functions. Finally, we generate a synthetic dataset using a computational angiogenesis model capable of simulating vascular tree growth under physiological constraints on local network structure and topology, and use these data for transfer learning. We demonstrate the performance on a range of angiographic volumes at different spatial scales, including clinical MRA data of the human brain as well as CTA microscopy scans of the rat brain. Our results show that cross-hair filters achieve over 23% improvement in speed, a lower memory footprint, and lower network complexity (which prevents overfitting), with accuracy that does not differ from full 3-D filters. Our class-balancing loss is crucial for training the network, and transfer learning with synthetic data is an efficient, robust, and highly generalizable approach, yielding a network that excels in a variety of angiography segmentation tasks. We observe that sub-sampling and max-pooling layers may lead to a drop in performance in tasks that involve voxel-sized structures. To this end, the DeepVesselNet architecture uses no sub-sampling layers and works well for vessel segmentation, centerline prediction, and bifurcation detection. We make our synthetic training data publicly available, fostering future research and serving as one of the first public datasets for brain vessel tree segmentation and analysis.
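The class-balancing idea in this abstract can be sketched in a few lines: weight each class's cross-entropy term by the other class's frequency, so the rare vessel voxels are not drowned out. Note this is only the balancing part; DeepVesselNet's full loss also includes a false-positive-rate correction, and the exact inverse-frequency weighting below is an assumption for illustration, not the paper's formula.

```python
import numpy as np

def balanced_cross_entropy(probs, labels, eps=1e-7):
    """Binary cross-entropy in which each class is weighted by the
    *other* class's frequency, so a rare vessel class (<3% of voxels)
    contributes on par with the abundant background."""
    y = np.asarray(labels, dtype=float).ravel()
    p = np.clip(np.asarray(probs, dtype=float).ravel(), eps, 1 - eps)
    w_pos = 1.0 - y.mean()   # weight on vessel voxels
    w_neg = y.mean()         # weight on background voxels
    return float(np.mean(-(w_pos * y * np.log(p)
                           + w_neg * (1.0 - y) * np.log(1.0 - p))))

# with 10% vessel voxels and an uninformative p = 0.5 everywhere,
# both classes end up contributing equally to the loss
labels = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
loss = balanced_cross_entropy(np.full(10, 0.5), labels)
print(round(loss, 4))  # 0.1248
```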
Affiliation(s)
- Giles Tetteh
- Department of Computer Science, TU München, München, Germany
- Velizar Efremov
- Department of Computer Science, TU München, München, Germany
- Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Nils D. Forkert
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Matthias Schneider
- Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Jan Kirschke
- Neuroradiology, Klinikum Rechts der Isar, TU München, München, Germany
- Bruno Weber
- Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Claus Zimmer
- Neuroradiology, Klinikum Rechts der Isar, TU München, München, Germany
- Marie Piraud
- Department of Computer Science, TU München, München, Germany
- Björn H. Menze
- Department of Computer Science, TU München, München, Germany
- Department for Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
24
Gu L, Zhang X, You S, Zhao S, Liu Z, Harada T. Semi-Supervised Learning in Medical Images Through Graph-Embedded Random Forest. Front Neuroinform 2020; 14:601829. [PMID: 33240071] [PMCID: PMC7683389] [DOI: 10.3389/fninf.2020.601829]
Abstract
One major challenge in medical image analysis is the scarcity of labels and annotations, which usually require medical knowledge and training. This issue is particularly serious in brain image analysis, such as the analysis of retinal vasculature, which directly reflects the vascular condition of the central nervous system (CNS). In this paper, we present a novel semi-supervised learning algorithm that boosts the performance of random forests under limited labeled data by exploiting the local structure of unlabeled data. We identify the key bottleneck of the random forest to be the information-gain calculation and replace it with a graph-embedded entropy, which is more reliable when labeled data are insufficient. By properly modifying the training process of a standard random forest, our algorithm significantly improves performance while preserving the virtues of random forests, such as a low computational burden and robustness to over-fitting. Our method shows superior performance on both medical imaging analysis and machine learning benchmarks.
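For orientation, the quantity this paper replaces is the standard entropy-based information gain used to score random-forest splits. A minimal version of that baseline is sketched below (the graph-embedded variant itself is not reproduced here).

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting `parent` into two
    children; the split with the largest gain is chosen at each node."""
    n = len(parent)
    return (entropy(parent)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

# a perfect split of a balanced node gains one full bit
gain = information_gain([0, 0, 1, 1], [0, 0], [1, 1])
print(gain)  # 1.0
```

With few labeled samples the empirical distributions inside `entropy` become noisy, which is the failure mode the graph-embedded entropy is designed to mitigate.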
Affiliation(s)
- Lin Gu
- RIKEN AIP, Tokyo, Japan
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Xiaowei Zhang
- Bioinformatics Institute (BII), A*STAR, Singapore, Singapore
- Shaodi You
- Faculty of Science, Institute of Informatics, University of Amsterdam, Amsterdam, Netherlands
- Shen Zhao
- Department of Medical Physics, Western University, London, ON, Canada
- Zhenzhong Liu
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China
- National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
- Tatsuya Harada
- RIKEN AIP, Tokyo, Japan
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
25
Islam MS, Wang JK, Johnson SS, Thurtell MJ, Kardon RH, Garvin MK. A Deep-Learning Approach for Automated OCT En-Face Retinal Vessel Segmentation in Cases of Optic Disc Swelling Using Multiple En-Face Images as Input. Transl Vis Sci Technol 2020; 9:17. [PMID: 32821471] [PMCID: PMC7401896] [DOI: 10.1167/tvst.9.2.17]
Abstract
Purpose In cases of optic disc swelling, segmentation of projected retinal blood vessels from optical coherence tomography (OCT) volumes is challenging due to swelling-based shadowing artifacts. Based on our hypothesis that simultaneously considering vessel information from multiple projected retinal layers can substantially increase vessel visibility, we propose a deep-learning-based vessel segmentation approach that uses three OCT en-face images simultaneously as input. Methods A human expert vessel tracing, combining information from OCT en-face images of the retinal pigment epithelium (RPE), inner retina, and total retina as well as a registered fundus image, served as the reference standard. The deep neural network was trained on imaging data from 18 patients with optic disc swelling to output a vessel probability map from the three OCT en-face input images. The vessels in the OCT en-face images were also manually traced in three separate stages for comparison with the proposed approach. Results On an independent volume-matched test set of 18 patients, the proposed deep-learning-based approach outperformed all three OCT-based manual tracing stages. Manual tracing based on the three OCT en-face images also outperformed manual tracing using only the traditional RPE en-face image. Conclusions In cases of optic disc swelling, the use of multiple en-face images enables better vessel segmentation than the traditional use of a single en-face image. Translational Relevance Improved vessel segmentation in cases of optic disc swelling can provide features for improved assessment of the severity and cause of the swelling.
Affiliation(s)
- Mohammad Shafkat Islam
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Jui-Kai Wang
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Iowa City VA Health Care System and Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, USA
- Samuel S. Johnson
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Matthew J. Thurtell
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- Randy H. Kardon
- Iowa City VA Health Care System and Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, USA
- Department of Ophthalmology and Visual Sciences, University of Iowa, Iowa City, IA, USA
- Mona K. Garvin
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA, USA
- Iowa City VA Health Care System and Iowa City VA Center for the Prevention and Treatment of Visual Loss, Iowa City, IA, USA
26
Khawaja A, Khan TM, Khan MAU, Nawaz SJ. A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation. Sensors (Basel) 2019; 19:4949. [PMID: 31766276] [DOI: 10.3390/s19224949]
Abstract
The assessment of transformations in the retinal vascular structure has strong potential for indicating a wide range of underlying ocular pathologies. Correctly identifying the retinal vessel map is a crucial step in disease identification, severity-progression assessment, and appropriate treatment. Marking the vessels manually is a tedious and time-consuming task for a human expert, reinforcing the need for automated algorithms capable of quickly segmenting retinal features and any possible anomalies. Techniques based on unsupervised learning utilize vessel morphology to classify vessel pixels. This study proposes a directional multi-scale line detector for the segmentation of retinal vessels, with a prime focus on the tiny vessels that are the most difficult to segment. Constructing a directional line detector and applying it to images containing only the features oriented along the detector's direction significantly improves detection accuracy. The final step is a binarization operation, again directional in nature, which yields further improvements in the key performance indicators. The proposed method obtains a sensitivity of 0.8043, 0.8011, and 0.7974 on the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), and Child Heart And health Study in England (CHASE_DB1) datasets, respectively. These results, along with the other performance gains demonstrated in the experimental evaluation, establish directional multi-scale line detectors as a competitive framework for retinal image segmentation.
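The core line-detector response underlying this family of methods can be sketched as follows: the mean intensity along a short line through a pixel, minus the mean of the surrounding window, evaluated over several orientations. This is only the single-scale building block; the paper's directional restriction of the input image and multi-scale aggregation are omitted, and the window size and angles below are illustrative choices.

```python
import numpy as np

def line_response(window, theta_deg, length):
    """Response of a basic line detector at the window centre: the mean
    intensity along a line of `length` pixels at angle `theta_deg`,
    minus the mean of the whole window. A bright elongated structure
    aligned with theta gives a large positive response."""
    c = window.shape[0] // 2
    t = np.deg2rad(theta_deg)
    offsets = np.arange(length) - length // 2
    rows = np.round(c - offsets * np.sin(t)).astype(int)
    cols = np.round(c + offsets * np.cos(t)).astype(int)
    return window[rows, cols].mean() - window.mean()

# a horizontal bright line responds most strongly to the 0-degree detector
w = np.zeros((15, 15))
w[7, :] = 1.0
scores = {th: line_response(w, th, 15) for th in (0, 45, 90, 135)}
best = max(scores, key=scores.get)
print(best)  # 0
```

Taking the maximum response over orientations (and, in the paper, over scales) produces the vesselness map that is then binarized.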
27
Kim D, Edjlali M, Turski P, Johnson KM. Composite MRA: statistical approach to generate an MR angiogram from multiple contrasts. Magn Reson Med 2019; 83:830-843. [PMID: 31556170] [DOI: 10.1002/mrm.27966]
Abstract
PURPOSE To develop a method that uses information from multiple MRI contrasts to produce a composite angiogram with reduced sequence-specific artifacts and improved vessel depiction. METHODS Bayesian posterior vessel probability was determined as a function of black-blood (BB), contrast-enhanced MRA (CE-MRA), and phase-contrast MRA (PC-MRA) intensities from training subjects (N = 4). To generate composite angiograms in evaluation subjects (N = 12), the voxel-wise vessel probabilities were weighted with a confidence measure and combined as a weighted product to yield the angiogram intensity. For 23 internal carotid artery (ICA) segments from the evaluation subjects, the segmentation accuracy of composite MRA was evaluated against CE-MRA using the Dice similarity coefficient (DSC). RESULTS The composite MRA suppressed venous contamination present in CE-MRA, reduced the flow artifacts and velocity aliasing seen in PC-MRA, and removed signal ambiguities in BB images. For ICA segmentations, composite MRA improved on CE-MRA per DSC (0.908 ± 0.037 vs. 0.765 ± 0.079). Compared with CE-MRA, composite MRA showed conservative changes in vessel appearance under small threshold changes. However, small vessels that are sensitive to registration errors, or only weakly visible in CE-MRA, were susceptible to poor depiction in composite MRA. CONCLUSION By dynamically weighting vessel information from multiple contrasts and extracting their complementary information, composite MRA yields reduced sequence-specific artifacts and improved vessel contrast. It is a promising technique for semi-automatic segmentation of vessels that are hard to segment because of artifacts.
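The confidence-weighted product described in the methods can be sketched in a few lines. This is a minimal reading of "weighted product" as prod_i p_i ** w_i; the paper's actual confidence measure and normalization are not reproduced, and the toy probability values are invented for illustration.

```python
import numpy as np

def composite_intensity(prob_maps, confidences):
    """Combine per-contrast vessel-probability maps into one composite
    angiogram intensity as a confidence-weighted product:
    prod_i p_i ** w_i. A contrast with confidence 0 drops out entirely
    (p ** 0 == 1), so unreliable sequences cannot veto a vessel."""
    out = np.ones_like(np.asarray(prob_maps[0], dtype=float))
    for p, w in zip(prob_maps, confidences):
        out *= np.clip(np.asarray(p, dtype=float), 1e-7, 1.0) ** w
    return out

# one voxel seen by three contrasts: an artifact in the third map is
# neutralized by giving that contrast zero confidence
maps = [np.array([0.9]), np.array([0.8]), np.array([0.1])]
val = composite_intensity(maps, [1.0, 1.0, 0.0])[0]
print(round(val, 2))  # 0.72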
Affiliation(s)
- Dahan Kim
- Department of Physics, University of Wisconsin, Madison, Wisconsin
- Department of Medical Physics, University of Wisconsin, Madison, Wisconsin
- Myriam Edjlali
- Department of Neuroradiology, Université Paris-Descartes-Sorbonne-Paris-Cité, IMABRAIN-INSERM-UMR1266, DHU-Neurovasc, Centre Hospitalier Sainte-Anne, Paris, France
- Patrick Turski
- Department of Radiology, University of Wisconsin, Madison, Wisconsin
- Kevin M Johnson
- Department of Medical Physics, University of Wisconsin, Madison, Wisconsin
- Department of Radiology, University of Wisconsin, Madison, Wisconsin
28
Arsalan M, Owais M, Mahmood T, Cho SW, Park KR. Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation. J Clin Med 2019; 8:1446. [PMID: 31514466] [PMCID: PMC6780110] [DOI: 10.3390/jcm8091446]
Abstract
Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for diagnosing diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among these, diabetic retinopathy, a leading cause of vision loss, can be diagnosed early through the detection of retinal vessels. The manual detection of these vessels is a time-consuming process that can be automated with deep learning. Vessel detection is difficult due to intensity variation and noise from non-ideal imaging. Although deep-learning approaches for vessel segmentation exist, these methods require many trainable parameters, which increases network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate Vess-Net, experiments were conducted with three publicly available vessel-segmentation datasets: Digital Retinal Images for Vessel Extraction (DRIVE), the Child Heart Health Study in England (CHASE-DB1), and Structured Analysis of the Retina (STARE). Experimental results show that Vess-Net achieved superior performance on all datasets, with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRIVE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for STARE.
Affiliation(s)
- Muhammad Arsalan
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Muhammad Owais
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Tahir Mahmood
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Se Woon Cho
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
- Kang Ryoung Park
- Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea
29
Gu X, Wang J, Zhao J, Li Q. Segmentation and suppression of pulmonary vessels in low-dose chest CT scans. Med Phys 2019; 46:3603-3614. [PMID: 31240721] [DOI: 10.1002/mp.13648]
Abstract
PURPOSE The suppression of pulmonary vessels in chest computed tomography (CT) images can enhance the conspicuity of lung nodules, thereby improving the detection rate of early lung cancer. This study aimed to develop two key techniques in vessel suppression: segmentation and removal of pulmonary vessels while preserving the nodules. METHODS Pulmonary vessel segmentation and removal methods for CT images were developed. The segmentation method used a framework of two cascaded convolutional neural networks (CNNs). A bi-class segmentation network was used in the first step to extract high-intensity structures, including both vessels and nonvascular tissues such as nodules. A tri-class segmentation network was employed in the second step to distinguish the vessels from nonvascular tissues (mainly nodules) and the lung parenchyma. In the removal method, the voxels in the segmented vessels were replaced with randomly selected voxels from the surrounding lung parenchyma. The dataset comprised 50 three-dimensional (3D) low-dose chest CT images, with vessel and nodule labels annotated using a semiautomatic approach. The two cascaded networks were trained on CT images of 40 cases and tested on ten cases. Pulmonary vessels were removed from the ten testing scans based on the predicted segmentation results. In addition to qualitative evaluation of the segmentation and removal effects, segmentation was quantitatively evaluated using the Dice coefficient (DICE), Jaccard index (JAC), and volumetric similarity (VS), and removal was evaluated using the contrast-to-noise ratio (CNR). RESULTS In the first segmentation step, the mean DICE, JAC, and VS for high-intensity tissues, including both vessels and nodules, were 0.943, 0.893, and 0.991, respectively. In the second step, all nodules were separated from the vessels, and the mean DICE, JAC, and VS for the vessels were 0.941, 0.890, and 0.991, respectively. After vessel removal, the mean CNR for nodules improved from 4.23 (6.26 dB) to 6.95 (8.42 dB). CONCLUSIONS Quantitative and qualitative evaluations demonstrated that the proposed method achieves high accuracy for pulmonary vessel segmentation and a good pulmonary vessel suppression effect.
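The removal step described in the methods, replacing segmented vessel voxels with randomly drawn parenchyma voxels, is simple enough to sketch directly. The function below is a toy reading of that step on a 2-D slice with invented Hounsfield-unit values; the paper operates on 3-D volumes with CNN-predicted masks.

```python
import numpy as np

def suppress_vessels(ct, vessel_mask, lung_mask, seed=0):
    """Replace vessel voxels with intensities drawn at random from the
    surrounding lung parenchyma (lung voxels not labelled as vessel)."""
    rng = np.random.default_rng(seed)
    out = ct.copy().astype(float)
    parenchyma = ct[lung_mask & ~vessel_mask]
    out[vessel_mask] = rng.choice(parenchyma, size=int(vessel_mask.sum()))
    return out

# toy slice: parenchyma at -800 HU, a bright vessel segment at 100 HU
ct = np.full((5, 5), -800.0)
vessel = np.zeros((5, 5), dtype=bool)
vessel[2, 1:4] = True
ct[vessel] = 100.0
lung = np.ones((5, 5), dtype=bool)
out = suppress_vessels(ct, vessel, lung)
print(out[2, 1:4])  # [-800. -800. -800.]
```

Because nodules are excluded by the tri-class segmentation before this step, they keep their original intensities and stand out against the flattened background.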
Affiliation(s)
- Xiaomeng Gu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Jiyong Wang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Qiang Li
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
30
Saha A, Grimm LJ, Ghate SV, Kim CE, Soo MS, Yoon SC, Mazurowski MA. Machine learning-based prediction of future breast cancer using algorithmically measured background parenchymal enhancement on high-risk screening MRI. J Magn Reson Imaging 2019; 50:456-464. [PMID: 30648316] [DOI: 10.1002/jmri.26636]
Abstract
BACKGROUND Preliminary work has demonstrated that background parenchymal enhancement (BPE) assessed by radiologists is predictive of future breast cancer in women undergoing high-risk screening MRI. Algorithmically assessed measures of BPE offer a more precise and reproducible means of measuring BPE than human readers and thus might improve the prediction of future cancer development. PURPOSE To determine whether algorithmically extracted imaging features of BPE on screening breast MRI in high-risk women are associated with subsequent development of cancer. STUDY TYPE Case-control study. POPULATION In all, 133 women at high risk of developing breast cancer; 46 of these patients developed breast cancer over a follow-up period of 2 years. FIELD STRENGTH/SEQUENCE 1.5 T or 3.0 T; T1-weighted precontrast fat-saturated and nonfat-saturated sequences and postcontrast nonfat-saturated sequences. ASSESSMENT Features of BPE were extracted automatically with a computer algorithm. Subjective BPE scores from five breast radiologists (blinded to clinical outcomes) were also available. STATISTICAL TESTS Leave-one-out cross-validation of a multivariate logistic regression model built on the automatic features, with receiver operating characteristic (ROC) analysis to calculate the area under the curve (AUC). Automatic and subjective features were compared using a generalized regression model, and the corresponding P-values and odds ratios were compared. RESULTS The multivariate model discriminated patients who developed cancer from those who did not, with an AUC of 0.70 (95% confidence interval: 0.60-0.79, P < 0.001). The imaging features remained independently predictive of subsequent cancer development (P < 0.003) when compared with the readers' subjective BPE assessments. DATA CONCLUSION Automatically extracted BPE measurements may potentially be used to further stratify risk in patients undergoing high-risk screening MRI. LEVEL OF EVIDENCE 3. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2019;50:456-464.
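The AUC reported above has a simple rank-based definition worth keeping in mind when interpreting a value like 0.70: it is the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch of that metric (not the study's modeling pipeline) follows.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation: the fraction of positive/negative pairs in which the
    positive case receives the higher score (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# a model that ranks every future-cancer case above every control
print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0
```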
Affiliation(s)
- Ashirbani Saha
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Lars J Grimm
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Sujata V Ghate
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Connie E Kim
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Mary S Soo
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Sora C Yoon
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Maciej A Mazurowski
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Duke University Medical Physics Program, Durham, North Carolina, USA
31
Haddad CW, Drukker K, Gullett R, Carroll TJ, Christoforidis GA, Giger ML. Fuzzy c-means segmentation of major vessels in angiographic images of stroke. J Med Imaging (Bellingham) 2018; 5:014501. [PMID: 29322070] [DOI: 10.1117/1.jmi.5.1.014501]
Abstract
Patients suffering from ischemic stroke develop varying degrees of pial arterial supply (PAS), which can affect their response to reperfusion therapy and their risk of hemorrhage. Since vessel segmentation may be an important part of identifying PAS, we present a fuzzy c-means (FCM) clustering method to segment major vessels in x-ray angiograms. Our approach consists of semiautomatic region-of-interest (ROI) delineation, separation of major vessels from capillary blush and/or background noise through FCM clustering, and identification of the major-vessel category. The method was applied to a database of x-ray angiograms of 24 patients acquired at various frame rates. The ground truth for performance evaluation was an expert radiologist's designation of image pixels as vessel or nonvessel. From receiver operating characteristic (ROC) analysis, the area under the ROC curve (AUC) served as the performance metric for distinguishing between major vessels and blush or background. When clustering data into three categories and performing FCM segmentation on each ROI separately, the AUC was 0.89 for the entire database and [Formula: see text] for all examined frame rates. In conclusion, our method showed promising performance in identifying major vessels and is anticipated to become an integral part of automatic quantification of PAS.
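Fuzzy c-means itself reduces to two alternating updates, memberships from distances and centres from memberships. The 1-D intensity sketch below illustrates the three-category clustering mentioned above (background, blush, major vessel); the quantile initialization, fuzzifier value, and toy intensities are my choices, not the paper's.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50):
    """1-D fuzzy c-means. Centres start at data quantiles; memberships
    and centres are then updated with the standard alternating formulas
    for fuzzifier m (m > 1)."""
    x = np.asarray(x, dtype=float).ravel()
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # memberships
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return centers, u

# toy "ROI" intensities: background near 0, blush near 3, vessel near 10
x = np.array([0.0, 0.2, 0.1, 3.0, 3.1, 2.9, 9.8, 10.0, 10.2])
centers, u = fuzzy_c_means(x, n_clusters=3)
print([round(c, 1) for c in np.sort(centers)])  # [0.1, 3.0, 10.0]
```

Thresholding the membership column of the brightest centre then yields the major-vessel mask.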
Affiliation(s)
- Christopher W Haddad
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Rebecca Gullett
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Timothy J Carroll
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L Giger
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
32
Tan B, Hosseinaee Z, Bizheva K. Dense concentric circle scanning protocol for measuring pulsatile retinal blood flow in rats with Doppler optical coherence tomography. J Biomed Opt 2017; 22:1-4. [PMID: 29110446] [DOI: 10.1117/1.jbo.22.11.110501]
Abstract
The variability in the spatial orientation of retinal blood vessels near the optic nerve head (ONH) results in imprecision in the measured Doppler angle, and therefore in the pulsatile blood flow (BF), when these parameters are evaluated using Doppler OCT imaging protocols based on dual concentric circular scans. Here, we used a dense concentric circle scanning protocol and evaluated its precision for measuring pulsatile retinal BF in rats for different numbers of circular scans. A spectral-domain optical coherence tomography (SD-OCT) system operating in the 1060-nm spectral range, with an image acquisition rate of 47,000 A-scans/s, acquired concentric circular scans centered at the rat's ONH with diameters ranging from 0.8 to 1.0 mm. A custom automatic blood vessel segmentation algorithm tracked the spatial orientation of the retinal blood vessels in three dimensions, evaluated the spatially dependent Doppler angle, and calculated the axial BF of each major retinal blood vessel more accurately. Metrics such as retinal BF, pulsatility index, and resistance index were evaluated for each of the major retinal blood vessels and for all of them combined. The performance of the proposed dense concentric circle scanning protocol was compared with that of the dual-circle scanning protocol. Results showed a 3.8±2.2 deg difference in the calculated Doppler angle between the two approaches, which resulted in a ∼7% difference in the calculated retinal BF.
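The reason Doppler-angle precision matters is the cosine correction that converts the measured axial velocity into total flow speed; errors blow up as the beam approaches perpendicular to the vessel. A minimal illustration follows; the 3.8 deg figure comes from the abstract, while the 80 deg baseline angle is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def total_velocity(v_axial, doppler_angle_deg):
    """Recover the total flow speed from the OCT-measured axial
    component: v = v_axial / cos(Doppler angle)."""
    return v_axial / np.cos(np.deg2rad(doppler_angle_deg))

# at a 60 degree Doppler angle, only half of the true speed is axial
print(round(total_velocity(0.5, 60.0), 6))  # 1.0

# near-perpendicular geometries amplify angle errors: a 3.8 degree
# error around an assumed 80 degree angle shifts the estimate by ~38%
err = total_velocity(1.0, 80.0) / total_velocity(1.0, 83.8)
print(round(err, 2))  # 0.62
```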
Affiliation(s)
- Bingyao Tan
- University of Waterloo, Department of Physics and Astronomy, Waterloo, Ontario, Canada
- Zohreh Hosseinaee
- University of Waterloo, Department of System Design Engineering, Waterloo, Ontario, Canada
- Kostadinka Bizheva
- University of Waterloo, Department of Physics and Astronomy, Waterloo, Ontario, Canada
- University of Waterloo, Department of System Design Engineering, Waterloo, Ontario, Canada
- University of Waterloo, School of Optometry, Waterloo, Ontario, Canada
33
Goceri E, Shah ZK, Gurcan MN. Vessel segmentation from abdominal magnetic resonance images: adaptive and reconstructive approach. Int J Numer Method Biomed Eng 2017; 33:e2811. [PMID: 27315322] [DOI: 10.1002/cnm.2811]
Abstract
The liver vessels, which have low signal and run next to brighter bile ducts, are difficult to segment from MR images. This study presents a fully automated and adaptive method to segment the portal and hepatic veins on magnetic resonance images. In the proposed approach, segmentation of these vessels is achieved in four stages: (i) initial segmentation, (ii) refinement, (iii) reconstruction, and (iv) post-processing. In the initial segmentation stage, k-means clustering is used, the results of which are refined iteratively with a linear contrast stretching algorithm in the next stage, generating a mask image. In the reconstruction stage, vessel regions are reconstructed with the marker image from the first stage and the mask image from the second stage. The experimental data sets include slices that show fat tissues outside the margin of the liver, which have the same gray-level values as the vessels; these structures are removed in the last stage. Results show that the proposed approach is more efficient than other thresholding-based methods. Copyright © 2016 John Wiley & Sons, Ltd.
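The initial-segmentation stage can be illustrated with a minimal 1-D k-means on pixel intensities. The quantile initialisation, the toy intensity mixture, and the choice of the darkest cluster as the vessel marker are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=25):
    """Plain Lloyd's k-means on a 1-D array of pixel intensities.
    Centers are initialised at evenly spaced quantiles (deterministic)."""
    values = np.asarray(values, float)
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    labels = np.zeros(len(values), int)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Toy intensity mixture for one slice: background ~0.2, liver ~0.5,
# dark vessels ~0.05 (the vessels have low signal on these MR images).
rng = np.random.default_rng(1)
intensities = np.concatenate([rng.normal(0.2, 0.01, 300),
                              rng.normal(0.5, 0.01, 300),
                              rng.normal(0.05, 0.01, 100)])
labels, centers = kmeans_1d(intensities, k=3)
vessel_mask = labels == np.argmin(centers)   # darkest cluster ~ vessel marker
```

In the paper's pipeline this cluster map would then serve as the marker image for the later morphological reconstruction stage, which is not sketched here.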
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, OH, USA
- Zarine K Shah
- Department of Radiology, Wexner Medical Center, The Ohio State University, Columbus, OH, USA
- Metin N Gurcan
- Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, OH, USA
34
Holm S, Russell G, Nourrit V, McLoughlin N. DR HAGIS-a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients. J Med Imaging (Bellingham) 2017; 4:014503. [PMID: 28217714 DOI: 10.1117/1.jmi.4.1.014503] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2016] [Accepted: 01/16/2017] [Indexed: 11/14/2022] Open
Abstract
A database of retinal fundus images, the DR HAGIS database, is presented. This database consists of 39 high-resolution color fundus images obtained from a diabetic retinopathy screening program in the UK. The NHS screening program uses service providers that employ different fundus and digital cameras, which results in a range of image sizes and resolutions. Furthermore, patients enrolled in such programs often display other comorbidities in addition to diabetes. Therefore, in an effort to replicate the normal range of images examined by grading experts during screening, the DR HAGIS database consists of images of varying sizes and resolutions and four comorbidity subgroups, collectively defined as the diabetic retinopathy, hypertension, age-related macular degeneration, and glaucoma image set (DR HAGIS). For each image, the vasculature has been manually segmented to provide a realistic set of images on which to test automatic vessel extraction algorithms. Modified versions of two previously published vessel extraction algorithms were applied to this database to provide baseline measurements. A method based purely on the intensity of image pixels resulted in a mean segmentation accuracy of 95.83% ([Formula: see text]), whereas an algorithm based on Gabor filters generated an accuracy of 95.71% ([Formula: see text]).
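Baseline accuracies like those quoted above are typically plain pixel-wise agreement between a binary prediction and the manual vessel segmentation. A minimal sketch (the toy masks are invented, and the exact metric definition used by the DR HAGIS baselines is an assumption):

```python
import numpy as np

def pixelwise_accuracy(pred, truth):
    """Fraction of pixels where a binary prediction matches the manual labels."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    return float(np.mean(pred == truth))

# Toy 10x10 image: a two-pixel-wide horizontal "vessel".
truth = np.zeros((10, 10), bool)
truth[4:6, :] = True
pred = np.zeros((10, 10), bool)
pred[4:6, 1:] = True              # prediction misses the two leftmost pixels
acc = pixelwise_accuracy(pred, truth)   # 98 of 100 pixels agree -> 0.98
```

Because vessels occupy a small fraction of a fundus image, accuracy figures near 96% can coexist with substantial vessel-level errors, which is why sensitivity and specificity are usually reported alongside it.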
Affiliation(s)
- Sven Holm
- University of Manchester, Faculty of Biology, Medicine and Health, Division of Pharmacy and Optometry, Manchester, United Kingdom
- Greg Russell
- University of Manchester, Faculty of Biology, Medicine and Health, Division of Pharmacy and Optometry, Manchester, United Kingdom
- Vincent Nourrit
- Telecom Bretagne, Département d'Optique, Technopôle Brest-Iroise, Brest, France
- Niall McLoughlin
- University of Manchester, Faculty of Biology, Medicine and Health, Division of Pharmacy and Optometry, Manchester, United Kingdom
35
Neumann JO, Giese H, Nagel AM, Biller A, Unterberg A, Meinzer HP. MR Angiography at 7T to Visualize Cerebrovascular Territories. J Neuroimaging 2016; 26:519-24. [PMID: 27074967 DOI: 10.1111/jon.12348] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2016] [Accepted: 02/28/2016] [Indexed: 11/27/2022] Open
Abstract
BACKGROUND There is a considerable amount of interindividual variability in the size and location of the vascular territories of the major brain arteries. More data are needed to assess the extent of this variability and its possible implications for further research and patient care. Arterial spin labeling (ASL) magnetic resonance imaging has been applied in various forms to facilitate noninvasive imaging of cerebrovascular flow territories, but it requires the definition of the flow territory of interest prior to image acquisition. OBJECTIVE To assess the vascular territories of the major brain arteries using ultra-high-field time-of-flight (TOF) magnetic resonance angiography. METHODS We have developed an alternative method to ASL by simulating cerebrovascular dye injections. Following bias field normalization and segmentation of the vessels from 7 Tesla TOF imaging, a virtual model of the arterial vessel tree was generated and a simulation of dye dispersion into the brain tissue was performed. RESULTS The results provided by our method are consistent with the data obtained by the post-mortem dye injection studies in 23 human beings by van der Zwan in 1993. CONCLUSION Further technical improvements in imaging and segmentation techniques will improve the accuracy of the method and will facilitate the delineation of flow territories after image acquisition on even smaller subtrees of the cerebral vasculature.
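The simulated dye injection can be caricatured as a multi-source flood fill: each voxel belongs to whichever arterial subtree's "dye" front reaches it first. Uniform spreading speed on a 2-D grid is an assumption made purely for illustration; the paper simulates dispersion from a segmented 3-D vessel tree.

```python
import numpy as np
from collections import deque

def flow_territories(shape, sources):
    """Multi-source BFS: label each cell with the index of the source
    (arterial subtree) whose front reaches it first."""
    label = -np.ones(shape, int)
    q = deque()
    for lab, (r, c) in enumerate(sources):
        label[r, c] = lab
        q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < shape[0] and 0 <= nc < shape[1] and label[nr, nc] < 0:
                label[nr, nc] = label[r, c]   # claimed by the first-arriving dye
                q.append((nr, nc))
    return label

# Two "arteries" on a 6x6 grid: the left and right halves become territories.
terr = flow_territories((6, 6), [(3, 0), (3, 5)])
```

Interindividual variability in territory boundaries then corresponds to moving the source subtrees, which shifts the watershed between the labels.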
Affiliation(s)
- Jan-Oliver Neumann
- Department of Neurosurgery, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Henrik Giese
- Department of Neurosurgery, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Armin M Nagel
- Department of Medical Physics in Radiology, German Cancer Research Center, Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Armin Biller
- Department of Neuroradiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Andreas Unterberg
- Department of Neurosurgery, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany
- Hans-Peter Meinzer
- Division Medical and Biological Informatics, German Cancer Research Center, Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
36
Rasta SH, Partovi ME, Seyedarabi H, Javadzadeh A. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement. J Med Signals Sens 2015; 5:40-8. [PMID: 25709940 PMCID: PMC4335144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2014] [Accepted: 01/03/2015] [Indexed: 11/14/2022]
Abstract
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating the coefficients of variation. The dividing method, using the median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy, and has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate the background, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
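The two quantities this comparison rests on, the coefficient of variation and the dividing correction, are simple to sketch. The flat toy "retina" and the linear illumination gradient are invented for illustration; the paper estimates the background with a large median filter rather than being handed it.

```python
import numpy as np

def coefficient_of_variation(channel):
    """CV = std / mean of a (corrected) colour channel; a lower CV after
    illumination correction indicates a more uniform background."""
    channel = np.asarray(channel, float)
    return float(channel.std() / channel.mean())

def divide_correct(channel, background):
    """'Dividing' correction: divide out the estimated background illumination."""
    return channel / background

# Toy example: a flat retina viewed under a linear illumination gradient.
flat = np.full((4, 100), 100.0)
gradient = np.linspace(0.5, 1.5, 100)[None, :]
uneven = flat * gradient
corrected = divide_correct(uneven, gradient)
cv_before = coefficient_of_variation(uneven)     # ~0.29: strong nonuniformity
cv_after = coefficient_of_variation(corrected)   # ~0: illumination removed
```

Comparing `cv_before` and `cv_after` per channel is exactly the kind of statistic used to rank the illumination correction methods.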
Affiliation(s)
- Seyed Hossein Rasta
- Department of Medical Physics, Faculty of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Department of Medical Bioengineering, Faculty of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Mahsa Eisazadeh Partovi
- Department of Medical Physics, Faculty of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Hadi Seyedarabi
- Department of Medical Bioengineering, Faculty of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Communication Engineering Department, Faculty of Electrical and Computer Engineering, University of Tabriz, Iran
- Alireza Javadzadeh
- Department of Ophthalmology, Nikookari Eye Hospital, Tabriz University of Medical Sciences, Tabriz, Iran
37
Zhou C, Chan HP, Chughtai A, Kuriakose J, Agarwal P, Kazerooni EA, Hadjiiski LM, Patel S, Wei J. Computerized analysis of coronary artery disease: performance evaluation of segmentation and tracking of coronary arteries in CT angiograms. Med Phys 2014; 41:081912. [PMID: 25086543 PMCID: PMC4111838 DOI: 10.1118/1.4890294] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2013] [Revised: 06/08/2014] [Accepted: 07/02/2014] [Indexed: 11/07/2022] Open
Abstract
PURPOSE The authors are developing a computer-aided detection system to assist radiologists in the analysis of coronary artery disease in coronary CT angiograms (cCTA). This study evaluated the accuracy of the authors' coronary artery segmentation and tracking method, which are the essential steps in defining the search space for the detection of atherosclerotic plaques. METHODS The heart region in cCTA is segmented and the vascular structures are enhanced using the authors' multiscale coronary artery response (MSCAR) method, which performs 3D multiscale filtering and analysis of the eigenvalues of Hessian matrices. Starting from seed points at the origins of the left and right coronary arteries, a 3D rolling balloon region growing (RBG) method that adapts to the local vessel size segments and tracks each of the coronary arteries and identifies the branches along the tracked vessels. The branches are queued and subsequently tracked until the queue is exhausted. With Institutional Review Board approval, 62 cCTA were collected retrospectively from the authors' patient files. Three experienced cardiothoracic radiologists manually tracked and marked center points of the coronary arteries as the reference standard, following the 17-segment model that includes the clinically significant coronary arteries. Two radiologists visually examined the computer-segmented vessels and marked the mistakenly tracked veins and noisy structures as false positives (FPs). For the 62 cases, the radiologists marked a total of 10191 center points on 865 visible coronary artery segments. RESULTS The computer-segmented vessels overlapped with 83.6% (8520/10191) of the center points. Relative to the 865 radiologist-marked segments, the sensitivity reached 91.9% (795/865) if a true positive is defined as a computer-segmented vessel that overlapped with at least 10% of the reference center points marked on the segment. When the overlap threshold was increased to 50% and 100%, the sensitivities were 86.2% and 53.4%, respectively. For the 62 test cases, a total of 55 FPs were identified by the radiologists in 23 of the cases. CONCLUSIONS The authors' MSCAR-RBG method achieved high sensitivity for coronary artery segmentation and tracking. Studies are underway to further improve the accuracy for arterial segments affected by motion artifacts and by severe calcified and noncalcified soft plaques, and to reduce the false tracking of veins and other noisy structures. Methods are also being developed to detect coronary artery disease along the tracked vessels.
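The overlap-based sensitivity used above is easy to state precisely: a reference segment counts as detected if at least a given fraction of its marked center points fall inside the computer-segmented mask. This sketch is a plausible reading of that definition, with invented toy data:

```python
import numpy as np

def segment_sensitivity(segments, seg_mask, overlap_thresh=0.10):
    """A segment counts as detected if at least `overlap_thresh` of its
    reference center points fall inside the computer-segmented mask.
    `segments` is a list of point lists, one list of (row, col) per segment."""
    detected = 0
    for pts in segments:
        inside = sum(bool(seg_mask[r, c]) for r, c in pts)
        if inside / len(pts) >= overlap_thresh:
            detected += 1
    return detected / len(segments)

# Toy mask: the segmentation found only the vessel running along row 2.
mask = np.zeros((8, 8), bool)
mask[2, :] = True
segs = [[(2, c) for c in range(8)],   # fully covered  -> detected
        [(5, c) for c in range(8)]]   # not covered at all -> missed
sens = segment_sensitivity(segs, mask)   # 1 of 2 segments -> 0.5
```

Raising `overlap_thresh` from 0.10 toward 1.0 reproduces the abstract's pattern of falling sensitivity, since partially covered segments stop counting as true positives.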
Affiliation(s)
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Aamer Chughtai
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Jean Kuriakose
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Prachi Agarwal
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Ella A Kazerooni
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Smita Patel
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
- Jun Wei
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109
38
Vigneau-Roy N, Bernier M, Descoteaux M, Whittingstall K. Regional variations in vascular density correlate with resting-state and task-evoked blood oxygen level-dependent signal amplitude. Hum Brain Mapp 2013; 35:1906-20. [PMID: 23843266 DOI: 10.1002/hbm.22301] [Citation(s) in RCA: 52] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2012] [Revised: 02/12/2013] [Accepted: 03/18/2013] [Indexed: 12/24/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) has become one of the primary tools used for noninvasively measuring brain activity in humans. For the most part, the blood oxygen level-dependent (BOLD) contrast is used, which reflects the changes in hemodynamics associated with active brain tissue. The main advantage of the BOLD signal is that it is relatively easy to measure and thus is often used as a proxy for comparing brain function across population groups (i.e., control vs. patient). However, it is particularly weighted toward veins, whose structural architecture is known to vary considerably across the brain. This makes it difficult to interpret whether differences in BOLD between cortical areas reflect true differences in neural activity or in vascular structure. We therefore investigated how regional variations in vascular density (VAD) relate to the amplitude of resting-state and task-evoked BOLD signals. To address this issue, we first developed an automated method for segmenting veins in images acquired with susceptibility-weighted imaging, allowing us to visualize the venous vascular tree across the brain. In 19 healthy subjects, we then applied voxel-based morphometry (VBM) to T1-weighted images and computed regional measures of gray matter density (GMD). We found that, independent of spatial scale, regional variations in resting-state and task-evoked fMRI amplitudes were better correlated with VAD than with GMD. Using a general linear model (GLM), we observed that the bulk of the regional variance in resting-state activity could be modeled by VAD. Cortical areas whose resting-state activity was most suppressed by VAD correction included the cuneus, precuneus, culmen, and BA 9, 10, and 47. Taken together, our results suggest that resting-state BOLD signals are significantly related to the underlying structure of the brain vascular system. Calibrating resting BOLD activity by venous structure may result in a more accurate interpretation of differences observed between cortical areas and/or individuals.
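The GLM comparison reduces to asking how much regional variance in BOLD amplitude each structural regressor explains. A minimal sketch with simulated region-level data (the synthetic relationship between BOLD and VAD is an assumption for illustration, not the paper's data):

```python
import numpy as np

def explained_variance(y, X):
    """Fit y = X b by ordinary least squares and report R^2."""
    X = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
vad = rng.uniform(size=200)                      # vascular density per region
gmd = rng.uniform(size=200)                      # gray matter density per region
bold = 2.0 * vad + 0.1 * rng.normal(size=200)    # amplitude tracks VAD here
r2_vad = explained_variance(bold, vad)           # high: VAD models the variance
r2_gmd = explained_variance(bold, gmd)           # near zero: GMD does not
```

"VAD correction" then amounts to regressing the VAD contribution out of the amplitude map before comparing areas or individuals.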
Affiliation(s)
- Nicolas Vigneau-Roy
- Department of Nuclear Medicine and Radiobiology, Faculty of Medicine and Health Science, Sherbrooke Molecular Imaging Center, Université de Sherbrooke, Sherbrooke, Quebec, Canada
39
Dizdaroğlu B, Ataer-Cansizoglu E, Kalpathy-Cramer J, Keck K, Chiang MF, Erdogmus D. Level Sets for Retinal Vasculature Segmentation Using Seeds from Ridges and Edges from Phase Maps. IEEE Int Workshop Mach Learn Signal Process 2012:1-6. [PMID: 24975694 DOI: 10.1109/mlsp.2012.6349730] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this paper, we present a novel modification to level-set-based automatic retinal vasculature segmentation approaches. The method introduces ridge sample extraction for sampling the vasculature centerline and phase-map-based edge detection for accurate region boundary detection. Segmenting the vasculature in fundus images has generally been challenging for level set methods employing classical edge-detection methodologies. Furthermore, initialization with seed points determined by sampling vessel centerlines using ridge identification makes the method completely automated. The resulting algorithm is able to segment vasculature in fundus imagery accurately and automatically. Quantitative results supplemented with visual ones support this observation. The methodology could be applied to the broader class of vessel segmentation problems encountered in medical image analytics.
Affiliation(s)
- Bekir Dizdaroğlu
- Computer Engineering Department, Karadeniz Technical University, Turkey; Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
- Katie Keck
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Michael F Chiang
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA; Department of Medical Informatics, Oregon Health & Science University, Portland, OR, USA
- Deniz Erdogmus
- Cognitive Systems Laboratory, Northeastern University, Boston, MA, USA
40
Estépar RSJ, Ross JC, Krissian K, Schultz T, Washko GR, Kindlmann GL. Computational vascular morphometry for the assessment of pulmonary vascular disease based on scale-space particles. Proc IEEE Int Symp Biomed Imaging 2012:1479-1482. [PMID: 23743962 DOI: 10.1109/isbi.2012.6235851] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
We present a fully automatic computational vascular morphometry (CVM) approach for the clinical assessment of pulmonary vascular disease (PVD). The approach is based on the automatic extraction of the lung intraparenchymal vasculature using scale-space particles. Based on the detected features, we developed a set of image-based biomarkers for the assessment of the disease, using the vessel radii estimation provided by the particles' scale. The biomarkers are based on the interrelation between vessel cross-section area and blood volume. We validate our vascular extraction method using simulated data of varying complexity, and we present results in 2,500 CT scans with different degrees of chronic obstructive pulmonary disease (COPD) severity. Results indicate that our CVM pipeline may track the vascular remodeling present in COPD and can be used in further clinical studies to assess the involvement of PVD in patient populations.
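The cross-section-area/blood-volume biomarkers can be sketched directly from the particle radii: each particle contributes a short cylinder slab of volume. The cylinder-slab model and the BV5-style small-vessel area threshold below are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def blood_volume(radii, spacing, area_limit=None):
    """Total blood volume from scale-space particles: each particle of radius r
    contributes pi * r^2 * spacing. Optionally keep only vessels whose
    cross-section area is below `area_limit` (small-vessel volume)."""
    radii = np.asarray(radii, float)
    area = np.pi * radii ** 2
    if area_limit is not None:
        area = area[area < area_limit]          # restrict to small vessels
    return float(np.sum(area * spacing))

# Toy particle set: two small vessels (r = 0.5 mm) and one large (r = 2 mm),
# with particles spaced 1 mm apart along the centerlines.
radii = np.array([0.5, 0.5, 2.0])
tbv = blood_volume(radii, spacing=1.0)                  # total blood volume
bv_small = blood_volume(radii, spacing=1.0, area_limit=5.0)  # < 5 mm^2 only
```

Vascular remodeling in COPD then shows up as a shift of volume between the small-vessel and total figures.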
41
Dehkordi MT, Sadri S, Doosthoseini A. A review of coronary vessel segmentation algorithms. J Med Signals Sens 2011; 1:49-54. [PMID: 22606658 PMCID: PMC3317762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Coronary heart disease has been one of the main threats to human health. Coronary angiography is taken as the gold standard for the assessment of coronary artery disease. However, the images are sometimes difficult to interpret visually because of the crossing and overlapping of vessels in the angiogram. Vessel extraction from X-ray angiograms has been a challenging problem for several years. There are several difficulties in the extraction of vessels, including weak contrast between the coronary arteries and the background, the unknown and easily deformable shape of the vessel tree, and strong overlapping shadows of the bones. In this article, we review coronary vessel extraction and enhancement techniques and present the capabilities of the most important coronary vessel segmentation algorithms.
Affiliation(s)
- Maryam Taghizadeh Dehkordi
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Saeed Sadri
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
- Medical Image and Signal Processing Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
42
Zhou C, Chan HP, Sahiner B, Hadjiiski LM, Chughtai A, Patel S, Wei J, Cascade PN, Kazerooni EA. Computer-aided detection of pulmonary embolism in computed tomographic pulmonary angiography (CTPA): performance evaluation with independent data sets. Med Phys 2009; 36:3385-96. [PMID: 19746771 PMCID: PMC2719495 DOI: 10.1118/1.3157102] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2008] [Revised: 05/21/2009] [Accepted: 05/21/2009] [Indexed: 11/07/2022] Open
Abstract
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volumes of interest (VOIs) that contain suspicious PE. A linear discriminant analysis (LDA) classifier with feature selection is designed to reduce false positives (FPs). Features that characterize the contrast, gray level, and size of PE are extracted as input predictor variables to the LDA classifier. With IRB approval, 59 CTPA PE cases were collected retrospectively from the patient files (UM cases). With access permission, 69 CTPA PE cases were randomly selected from the data set of the prospective investigation of pulmonary embolism diagnosis (PIOPED) II clinical trial. Extensive lung parenchymal or pleural diseases were present in 22/59 UM and 26/69 PIOPED cases. Experienced thoracic radiologists manually marked 595 and 800 PE as the reference standards in the UM and PIOPED data sets, respectively. PE occlusion of the arteries ranged from 5% to 100%, with PE located from the main pulmonary artery down to the subsegmental artery level. Of the 595 PE identified in the UM cases, 245 and 350 PE were located in the subsegmental arteries and the more proximal arteries, respectively. The detection performance was assessed by free-response ROC (FROC) analysis. The FROC analysis indicated that the PE detection system could achieve an overall sensitivity of 80% at 18.9 FPs/case for the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases. The detection performance depended on the arterial level at which the PE was located and on the percentage of occlusion: the sensitivity was lower for PE in the subsegmental arteries than in more proximal arteries, and lower for PE with less than 20% occlusion. The results indicate that the PE detection system achieves high sensitivity on independent CTPA scans for both the PIOPED and UM data sets, demonstrating the potential of the automated PE detection approach to generalize to unknown cases.
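An FROC operating point such as "80% sensitivity at 18.9 FPs/case" is read off by sweeping a score threshold over the ranked detections and noting the sensitivity at the last threshold whose false positives per case stay within budget. A minimal sketch with invented candidate detections:

```python
def sensitivity_at_fp_rate(candidates, n_cases, n_true, max_fp_per_case):
    """Sweep the score threshold over ranked detections; report the sensitivity
    at the lowest threshold whose FP rate stays within `max_fp_per_case`.
    `candidates` is a list of (score, is_true_pe) detections over all cases."""
    ranked = sorted(candidates, key=lambda c: -c[0])
    tp = fp = 0
    best_sens = 0.0
    for score, is_true in ranked:
        if is_true:
            tp += 1
        else:
            fp += 1
        if fp / n_cases <= max_fp_per_case:
            best_sens = tp / n_true
    return best_sens

# Toy run: 4 detections pooled over 2 cases, 2 of them real PE.
cands = [(0.9, True), (0.8, False), (0.7, True), (0.2, False)]
sens = sensitivity_at_fp_rate(cands, n_cases=2, n_true=2, max_fp_per_case=0.0)
# With zero FPs allowed, only the top detection survives -> sensitivity 0.5.
```

Loosening `max_fp_per_case` trades more false positives per scan for higher sensitivity, which is exactly the trade-off the FROC curve summarizes.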
Affiliation(s)
- Chuan Zhou
- Department of Radiology, University of Michigan, Med Inn Building C479, 1500 E. Medical Center Drive, Ann Arbor, Michigan 48109, USA.
43
Zhou C, Chan HP, Sahiner B, Hadjiiski LM, Chughtai A, Patel S, Wei J, Ge J, Cascade PN, Kazerooni EA. Automatic multiscale enhancement and segmentation of pulmonary vessels in CT pulmonary angiography images for CAD applications. Med Phys 2007; 34:4567-77. [PMID: 18196782 PMCID: PMC2742232 DOI: 10.1118/1.2804558] [Citation(s) in RCA: 57] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
The authors are developing a computerized pulmonary vessel segmentation method for a computer-aided pulmonary embolism (PE) detection system on computed tomographic pulmonary angiography (CTPA) images. Because PE only occurs inside the pulmonary arteries, automatic and accurate segmentation of the pulmonary vessels in 3D CTPA images is an essential step for the PE CAD system. To segment the pulmonary vessels within the lung, the lung regions are first extracted using expectation-maximization (EM) analysis and morphological operations. The authors developed a 3D multiscale filtering technique to enhance the pulmonary vascular structures based on the analysis of the eigenvalues of the Hessian matrix at multiple scales. A new response function of the filter was designed to enhance all vascular structures, including vessel bifurcations, and to suppress nonvessel structures such as the lymphoid tissues surrounding the vessels. An EM estimation is then used to segment the vascular structures by extracting the high-response voxels at each scale. The vessel tree is finally reconstructed by integrating the segmented vessels at all scales based on a "connected component" analysis. Two CTPA cases containing PEs were used to evaluate the performance of the system; one of the two also contained pleural effusion. Two experienced thoracic radiologists provided the gold standard of pulmonary vessels, including both arteries and veins, by manually tracking the arterial tree and marking the centers of the vessels using a computer graphical user interface. The accuracy of the vessel tree segmentation was evaluated by the percentage of the gold-standard vessel center points overlapping with the segmented vessels. The results show that 96.2% (2398/2494) and 96.3% (1910/1984) of the manually marked center points in the arteries overlapped with segmented vessels for the cases without and with other lung diseases, respectively. For the manually marked center points in all vessels, including arteries and veins, the segmentation accuracies were 97.0% (4546/4689) and 93.8% (4439/4732) for the cases without and with other lung diseases, respectively. Because of the lack of ground truth for the vessels, visual inspection was conducted in addition to the quantitative evaluation of the segmentation performance. The results demonstrate that vessel segmentation using our method can extract the pulmonary vessels accurately and is not degraded by PE occlusion of the vessels in these test cases.
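The core of this kind of Hessian-eigenvalue enhancement can be shown in 2-D: at each scale, smooth with a Gaussian, form the Hessian, and keep the magnitude of the negative eigenvalue that marks a bright tube. This is a minimal Frangi-style sketch under stated assumptions; the authors' 3-D response function, which also handles bifurcations and suppresses lymphoid tissue, is more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigmas=(1.0, 2.0, 3.0)):
    """Minimal line filter: at each scale, keep |lambda_min| wherever the
    Hessian has a negative eigenvalue (a bright ridge), and take the
    maximum response over scales."""
    best = np.zeros_like(img, dtype=float)
    for s in sigmas:
        # Scale-normalised second derivatives of the smoothed image.
        dxx = gaussian_filter(img, s, order=(0, 2)) * s ** 2
        dyy = gaussian_filter(img, s, order=(2, 0)) * s ** 2
        dxy = gaussian_filter(img, s, order=(1, 1)) * s ** 2
        # Smaller eigenvalue of [[dxx, dxy], [dxy, dyy]] (negative on a ridge).
        tmp = np.sqrt(((dxx - dyy) / 2) ** 2 + dxy ** 2)
        l_min = (dxx + dyy) / 2 - tmp
        best = np.maximum(best, np.where(l_min < 0, -l_min, 0.0))
    return best

# A bright horizontal "vessel" on a dark background.
img = np.zeros((32, 32))
img[15:17, :] = 1.0
v = vesselness_2d(img)   # high on the vessel, ~0 far away from it
```

Thresholding the high-response voxels at each scale and merging them by connected components is then the shape of the segmentation step the abstract describes.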
Affiliation(s)
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, Michigan 48109, USA.