1
Li S, Zhang D, Li X, Ou C, An L, Xu Y, Yang W, Zhang Y, Cheng KT. Vessel-promoted OCT to OCTA image translation by heuristic contextual constraints. Med Image Anal 2024; 98:103311. PMID: 39217674. DOI: 10.1016/j.media.2024.103311.
Abstract
Optical Coherence Tomography Angiography (OCTA) is a crucial tool in the clinical screening of retinal diseases, allowing for accurate 3D imaging of blood vessels through non-invasive scanning. However, the hardware-based approach for acquiring OCTA images presents challenges due to the need for specialized sensors and expensive devices. In this paper, we introduce a novel method called TransPro, which can translate the readily available 3D Optical Coherence Tomography (OCT) images into 3D OCTA images without requiring any additional hardware modifications. Our TransPro method is primarily driven by two novel ideas that have been overlooked by prior work. The first idea is derived from a critical observation that the OCTA projection map is generated by averaging pixel values from its corresponding B-scans along the Z-axis. Hence, we introduce a hybrid architecture incorporating a 3D generative adversarial network and a novel Heuristic Contextual Guidance (HCG) module, which effectively maintains the consistency of the generated OCTA images between 3D volumes and projection maps. The second idea is to improve the vessel quality in the translated OCTA projection maps. To this end, we propose a novel Vessel Promoted Guidance (VPG) module to enhance the network's attention to retinal vessels. Experimental results on two datasets demonstrate that our TransPro outperforms state-of-the-art approaches, with relative improvements of around 11.4% in MAE, 2.7% in PSNR, 2% in SSIM, 40% in VDE, and 9.1% in VDC over the baseline method. The code is available at: https://github.com/ustlsh/TransPro.
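The first observation above, that an OCTA projection map is obtained by averaging B-scan pixel values along the Z (depth) axis, can be expressed in a few lines of array code. The sketch below is illustrative only: the volume shape, axis order, and depth range are assumptions, not taken from the TransPro implementation.

```python
from typing import Optional

import numpy as np

def octa_projection_map(octa_volume: np.ndarray, z_start: int = 0,
                        z_end: Optional[int] = None) -> np.ndarray:
    """Collapse a 3D OCTA volume with axes (Z, Y, X) into a 2D en-face projection map.

    The projection is the mean of the B-scan pixel values along the depth (Z) axis,
    optionally restricted to a slab [z_start, z_end).
    """
    slab = octa_volume[z_start:z_end]   # keep only the requested depth range
    return slab.mean(axis=0)            # average along Z -> (Y, X) map

# Toy usage: a random 64 x 128 x 128 volume collapses to a 128 x 128 map.
volume = np.random.rand(64, 128, 128).astype(np.float32)
print(octa_projection_map(volume).shape)  # (128, 128)
```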
Affiliation(s)
- Shuhan Li
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Dong Zhang
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen, China.
- Chubin Ou
- Weizhi Meditech (Foshan) Co., Ltd, China
- Lin An
- Guangdong Weiren Meditech Co., Ltd, China
- Yanwu Xu
- South China University of Technology, and Pazhou Lab, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, China
- Yanchun Zhang
- Department of Ophthalmology, Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), Affiliated People's Hospital of Northwest University, Xi'an, China
- Kwang-Ting Cheng
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
2
Cao L, Wang H, Kwapong WR, Xiong Z, Zhao Y, Liu G, Liu R, Liu J, Hu F, Wu B. Intracranial pressure affects retinal venular complexity in idiopathic intracranial hypertension: a retrospective observational study. BMC Neurol 2024; 24:402. PMID: 39427135. PMCID: PMC11490018. DOI: 10.1186/s12883-024-03881-z.
Abstract
BACKGROUND Increased intracranial pressure (ICP) in patients with idiopathic intracranial hypertension (IIH) affects the retinal microvasculature, which can be imaged and quantified by optical coherence tomography angiography (OCTA). We aimed to identify the factor mediating the association between ICP and OCTA parameters in IIH patients. METHODS IIH patients with active intracranial hypertension were enrolled. OCTA imaging was performed after ICP measurement. We quantified the branching complexity of the retinal arterioles and venules from the superficial vascular complex of the OCTA image. Eyes of IIH patients were stratified into eyes with papilledema (IIH-P) and eyes without papilledema (IIH-WP). All participants underwent visual acuity (VA) examination. RESULTS One hundred and thirty-eight eyes from 70 IIH patients and 146 eyes from 73 controls were included. Compared to the control group, IIH patients and IIH-P had reduced arteriole complexity and increased venule complexity (p < 0.05). For IIH patients and IIH-P, increased retinal venule complexity correlated with increased ICP and reduced VA (p < 0.05), whereas decreased arteriole complexity correlated only with Frisen scores (p = 0.026). Papilledema mediated the effect of ICP on arteriole complexity (p < 0.001), whereas ICP had a direct effect on venule complexity (p < 0.001). CONCLUSION Retinal venules imaged via OCTA may reflect ICP levels and may underpin the direct effect of increased ICP in IIH patients.
Affiliation(s)
- Le Cao
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China
- Hang Wang
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China
- William Robert Kwapong
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China
- Zhouwei Xiong
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Affiliated People's Hospital of Ningbo University, Ningbo, China
- Guina Liu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Rui Liu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Junfeng Liu
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China
- Fayun Hu
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China.
- Bo Wu
- Department of Neurology, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang, Chengdu, 610041, China.
3
Liu Q, Zhou F, Shen J, Xu J, Wan C, Xu X, Yan Z, Yao J. A fundus vessel segmentation method based on double skip connections combined with deep supervision. Front Cell Dev Biol 2024; 12:1477819. PMID: 39430046. PMCID: PMC11487527. DOI: 10.3389/fcell.2024.1477819.
Abstract
Background Fundus vessel segmentation is vital for diagnosing ophthalmic diseases such as central serous chorioretinopathy (CSC), diabetic retinopathy, and glaucoma. Accurate segmentation provides crucial vessel morphology details, aiding the early detection and intervention of ophthalmic diseases. However, current algorithms struggle with fine vessel segmentation and with maintaining sensitivity in complex regions. Challenges also stem from imaging variability and poor generalization across multimodal datasets, highlighting the need for more advanced algorithms in clinical practice. Methods This paper explores a new vessel segmentation method to alleviate the above problems. We propose a fundus vessel segmentation model based on a combination of double skip connections, deep supervision, and TransUNet, namely DS2TUNet. First, the original fundus images are enhanced through grayscale conversion, normalization, histogram equalization, gamma correction, and other preprocessing techniques. Subsequently, utilizing the U-Net architecture, the preprocessed fundus images are segmented to obtain the final vessel information. Specifically, the encoder incorporates ResNetV1 downsampling, dilated convolution downsampling, and a Transformer to capture both local and global features, which strengthens its vessel feature extraction ability. The decoder then introduces double skip connections to facilitate upsampling and refine segmentation outcomes. Finally, the deep supervision module feeds multiple upsampled vessel features from the decoder into the loss function, so that the model can learn vessel feature representations more effectively and alleviate vanishing gradients during training. Results Extensive experiments on publicly available multimodal fundus datasets such as DRIVE, CHASE_DB1, and ROSE-1 demonstrate that the DS2TUNet model attains F1-scores of 0.8195, 0.8362, and 0.8425, with Accuracy of 0.9664, 0.9741, and 0.9557, Sensitivity of 0.8071, 0.8101, and 0.8586, and Specificity of 0.9823, 0.9869, and 0.9713, respectively. Additionally, the model exhibits excellent test performance on the clinical fundus dataset CSC, with an F1-score of 0.7757, Accuracy of 0.9688, Sensitivity of 0.8141, and Specificity of 0.9801 using weights trained on the CHASE_DB1 dataset. These results validate the effectiveness and feasibility of the proposed method for fundus vessel segmentation, thereby aiding clinicians in the further diagnosis and treatment of fundus diseases.
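The preprocessing chain described above (grayscale conversion, normalization, histogram equalization, gamma correction) is a standard fundus-image pipeline. A minimal OpenCV sketch is shown below; the gamma value and the input file name are placeholders, not values taken from DS2TUNet.

```python
import cv2
import numpy as np

def preprocess_fundus(image_bgr: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Grayscale -> min-max normalization -> histogram equalization -> gamma correction."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Min-max normalization to the full 8-bit range.
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Histogram equalization to spread out intensity values.
    equalized = cv2.equalizeHist(norm)

    # Gamma correction applied through a lookup table.
    lut = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(equalized, lut)

image = cv2.imread("fundus.png")   # hypothetical input path
processed = preprocess_fundus(image)
```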
Affiliation(s)
- Qingyou Liu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Fen Zhou
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Xiangzhong Xu
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Jin Yao
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
4
Liu W, Tian T, Wang L, Xu W, Li L, Li H, Zhao W, Tian S, Pan X, Deng Y, Gao F, Yang H, Wang X, Su R. DIAS: A dataset and benchmark for intracranial artery segmentation in DSA sequences. Med Image Anal 2024; 97:103247. PMID: 38941857. DOI: 10.1016/j.media.2024.103247.
Abstract
The automated segmentation of Intracranial Arteries (IA) in Digital Subtraction Angiography (DSA) plays a crucial role in the quantification of vascular morphology, significantly contributing to computer-assisted stroke research and clinical practice. Current research primarily focuses on the segmentation of single-frame DSA using proprietary datasets. However, these methods face challenges due to the inherent limitation of single-frame DSA, which only partially displays vascular contrast, thereby hindering accurate vascular structure representation. In this work, we introduce DIAS, a dataset specifically developed for IA segmentation in DSA sequences. We establish a comprehensive benchmark for evaluating DIAS, covering full, weak, and semi-supervised segmentation methods. Specifically, we propose the vessel sequence segmentation network, in which the sequence feature extraction module effectively captures spatiotemporal representations of intravascular contrast, achieving intracranial artery segmentation in 2D+Time DSA sequences. For weakly-supervised IA segmentation, we propose a novel scribble learning-based image segmentation framework, which, under the guidance of scribble labels, employs cross pseudo-supervision and consistency regularization to improve the performance of the segmentation network. Furthermore, we introduce the random patch-based self-training framework, aimed at alleviating the performance constraints encountered in IA segmentation due to the limited availability of annotated DSA data. Our extensive experiments on the DIAS dataset demonstrate the effectiveness of these methods as potential baselines for future research and clinical applications. The dataset and code are publicly available at https://doi.org/10.5281/zenodo.11401368 and https://github.com/lseventeen/DIAS.
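Cross pseudo-supervision, mentioned above for the scribble-learning framework, lets two segmentation networks supervise each other with the hard pseudo-labels produced by their peer. The PyTorch fragment below is a generic sketch of that idea only; the network definitions, loss weights, and the supervised scribble term are omitted, so it is not the DIAS code.

```python
import torch
import torch.nn.functional as F

def cross_pseudo_supervision_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Each network is trained on the argmax pseudo-labels of the other network."""
    pseudo_a = logits_a.argmax(dim=1).detach()    # pseudo-labels from network A
    pseudo_b = logits_b.argmax(dim=1).detach()    # pseudo-labels from network B
    loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B's predictions
    loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A's predictions
    return loss_a + loss_b

# Toy usage with two-class logits of shape (batch, classes, H, W).
logits_a = torch.randn(2, 2, 64, 64, requires_grad=True)
logits_b = torch.randn(2, 2, 64, 64, requires_grad=True)
loss = cross_pseudo_supervision_loss(logits_a, logits_b)
loss.backward()
```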
Affiliation(s)
- Wentao Liu
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China.
- Tong Tian
- State Key Laboratory of Structural Analysis, Optimization and CAE Software for Industrial Equipment, School of Mechanics and Aerospace Engineering, Dalian University of Technology, Dalian, China
- Lemeng Wang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Weijin Xu
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Lei Li
- Department of Interventional Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Haoyuan Li
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Wenyi Zhao
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Siyu Tian
- Ultrasonic Department, The Fourth Hospital of Hebei Medical University and Hebei Tumor Hospital, Shijiazhuang, China
- Xipeng Pan
- School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, China
- Yiming Deng
- Department of Interventional Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Feng Gao
- Department of Interventional Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China.
- Huihua Yang
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China; School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, China.
- Xin Wang
- Department of Radiology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Ruisheng Su
- Department of Radiology & Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, The Netherlands; Medical Image Analysis group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
5
Tillmann A, Turgut F, Munk MR. Optical coherence tomography angiography in neovascular age-related macular degeneration: comprehensive review of advancements and future perspective. Eye (Lond) 2024. PMID: 39147864. DOI: 10.1038/s41433-024-03295-8.
Abstract
Optical coherence tomography angiography (OCTA) holds promise in enhancing the care of various retinal vascular diseases, including neovascular age-related macular degeneration (nAMD). Given nAMD's vascular nature and the distinct vasculature of macular neovascularization (MNV), detailed analysis is expected to gain significance. Research in artificial intelligence (AI) indicates that en-face OCTA views may offer superior predictive capabilities than spectral domain optical coherence tomography (SD-OCT) images, highlighting the necessity to identify key vascular parameters. Analyzing vasculature could facilitate distinguishing MNV subtypes and refining diagnosis. Future studies correlating OCTA parameters with clinical data might prompt a revised classification system. However, the combined utilization of qualitative and quantitative OCTA biomarkers to enhance the accuracy of diagnosing disease activity remains underdeveloped. Discrepancies persist regarding the optimal biomarker for indicating an active lesion, warranting comprehensive prospective studies for validation. AI holds potential in extracting valuable insights from the vast datasets within OCTA, enabling researchers and clinicians to fully exploit its OCTA imaging capabilities. Nevertheless, challenges pertaining to data quantity and quality pose significant obstacles to AI advancement in this field. As OCTA gains traction in clinical practice and data volume increases, AI-driven analysis is expected to further augment diagnostic capabilities.
Affiliation(s)
- Anne Tillmann
- Augenarzt Praxisgemeinschaft Gutblick, Pfäffikon, Switzerland
- Ferhat Turgut
- Augenarzt Praxisgemeinschaft Gutblick, Pfäffikon, Switzerland
- Department of Ophthalmology, Stadtspital Zürich, Zürich, Switzerland
- Department of Ophthalmology, Semmelweis University, Budapest, Hungary
- Marion R Munk
- Augenarzt Praxisgemeinschaft Gutblick, Pfäffikon, Switzerland.
- Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, 3010, Bern, Switzerland.
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, IL, 60208, USA.
6
Xue J, Feng Z, Zeng L, Wang S, Zhou X, Xia J, Deng A. Soul: An OCTA dataset based on Human Machine Collaborative Annotation Framework. Sci Data 2024; 11:838. PMID: 39095383. PMCID: PMC11297209. DOI: 10.1038/s41597-024-03665-7.
Abstract
Branch retinal vein occlusion (BRVO) is the most prevalent retinal vascular disease that poses a threat to vision: increased venous pressure caused by venous effluent in the retinal space leads to impaired visual function. Optical Coherence Tomography Angiography (OCTA) is an innovative non-invasive technique that offers high-resolution three-dimensional structures of retinal blood vessels. Most publicly available datasets are collected from single visits with different patients, encompassing various eye diseases for distinct tasks and areas. Moreover, due to the intricate nature of the eye's structure, professional labeling not only relies on the expertise of doctors but also demands considerable time and effort. Therefore, we have developed a BRVO-focused dataset named Soul (Source of ocular vascular) and propose a human-machine collaborative annotation framework (HMCAF) using scrambled retinal blood vessel data. Soul is categorized into 6 subsets based on injection frequency and follow-up duration. The dataset comprises original images, corresponding blood vessel labels, and clinical text information sheets, which can be effectively utilized in combination with machine learning.
Affiliation(s)
- Jingyan Xue
- School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
- Zhenhua Feng
- Department of Ophthalmology, the Affiliated hospital of Shandong Second Medical University, Weifang, 261000, China
- Lili Zeng
- School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China
- Shuna Wang
- Department of Ophthalmology, the Affiliated hospital of Shandong Second Medical University, Weifang, 261000, China
- Xuezhong Zhou
- School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China.
- Jianan Xia
- School of Computer Science and Technology, Beijing Jiaotong University, Beijing, 100044, China.
- Aijun Deng
- Department of Ophthalmology, the Affiliated hospital of Shandong Second Medical University, Weifang, 261000, China.
7
Ashayeri H, Jafarizadeh A, Yousefi M, Farhadi F, Javadzadeh A. Retinal imaging and Alzheimer's disease: a future powered by Artificial Intelligence. Graefes Arch Clin Exp Ophthalmol 2024; 262:2389-2401. PMID: 38358524. DOI: 10.1007/s00417-024-06394-0.
Abstract
Alzheimer's disease (AD) is a neurodegenerative condition that primarily affects brain tissue. Because the retina and brain share the same embryonic origin, visual deficits have been reported in AD patients. Artificial Intelligence (AI) has recently received a lot of attention due to its immense power to process and detect image hallmarks and make clinical decisions (like diagnosis) based on images. Since retinal changes have been reported in AD patients, AI is being proposed to process retinal images to predict, diagnose, and assess the prognosis of AD. The purpose of this review is therefore to discuss the use of AI trained on retinal images of AD patients. According to previous research, AD patients experience changes in retinal thickness and retinal vessel density, which can occasionally occur before the onset of the disease's clinical symptoms. AI and machine vision can detect and use these changes in the domains of disease prediction, diagnosis, and prognosis. As a result, not only have unique algorithms been developed for this condition, but databases such as the Retinal OCTA Segmentation dataset (ROSE) have also been constructed for this purpose. The achievement of high accuracy, sensitivity, and specificity in classifying retinal images between AD and healthy groups is one of the major breakthroughs in using retinal-image-based AI for AD. It is fascinating that researchers could pinpoint individuals with a positive family history of AD based on the properties of their eyes. In conclusion, the growing application of AI in medicine promises its future role in processing different aspects of patients with AD, but cohort studies are needed to determine whether it can help follow up healthy persons at risk of AD for a quicker diagnosis or assess the prognosis of patients with AD.
Affiliation(s)
- Hamidreza Ashayeri
- Neuroscience Research Center (NSRC), Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jafarizadeh
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Milad Yousefi
- Faculty of Mathematics, Statistics and Computer Sciences, University of Tabriz, Tabriz, Iran
- Fereshteh Farhadi
- Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Alireza Javadzadeh
- Department of Ophthalmology, Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz, Iran.
8
Jebril H, Esengönül M, Bogunović H. Anomaly Detection in Optical Coherence Tomography Angiography (OCTA) with a Vector-Quantized Variational Auto-Encoder (VQ-VAE). Bioengineering (Basel) 2024; 11:682. PMID: 39061764. PMCID: PMC11273395. DOI: 10.3390/bioengineering11070682.
Abstract
Optical coherence tomography angiography (OCTA) provides detailed information on retinal blood flow and perfusion. Abnormal retinal perfusion indicates possible ocular or systemic disease. We propose a deep learning-based anomaly detection model to identify such anomalies in OCTA. It combines two deep learning approaches: first, representation learning with a Vector-Quantized Variational Auto-Encoder (VQ-VAE) followed by Auto-Regressive (AR) modeling; second, epistemic uncertainty estimates from a Bayesian U-Net employed to segment the vasculature in OCTA en face images. Evaluation on two large public datasets, DRAC and OCTA-500, demonstrates effective anomaly detection (AUROC of 0.92 on DRAC and 0.75 on OCTA-500) and localization (mean Dice score of 0.61 on DRAC) on this challenging task. To our knowledge, this is the first work that addresses anomaly detection in OCTA.
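The second branch of the method relies on epistemic uncertainty from a Bayesian U-Net. A common, practical way to approximate such uncertainty is Monte-Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and use the per-pixel variance of the predicted vessel probabilities as an uncertainty map. The sketch below illustrates only that generic recipe; `model` stands for any segmentation network containing dropout layers and is not the authors' Bayesian U-Net.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, image: torch.Tensor, n_passes: int = 20):
    """Per-pixel mean prediction and epistemic uncertainty via Monte-Carlo dropout."""
    model.eval()
    # Re-enable only the dropout layers so they stay stochastic at test time.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_passes)], dim=0)
    return probs.mean(dim=0), probs.var(dim=0)   # mean map, epistemic uncertainty map
```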
Affiliation(s)
- Hana Jebril
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria; (H.J.); (M.E.)
- Meltem Esengönül
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria; (H.J.); (M.E.)
- Hrvoje Bogunović
- Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria; (H.J.); (M.E.)
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, 1090 Vienna, Austria
9
Wang Y, Li H. A Novel Single-Sample Retinal Vessel Segmentation Method Based on Grey Relational Analysis. Sensors (Basel) 2024; 24:4326. PMID: 39001106. PMCID: PMC11244310. DOI: 10.3390/s24134326.
Abstract
Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Due to the limited number of retinal vessel samples and the scarcity of labeled samples, and since grey theory excels in handling problems of "few data, poor information", this paper proposes a novel grey relational-based method for retinal vessel segmentation. Firstly, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Secondly, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated using multiple different measurement metrics on publicly available digital retinal DRIVE, STARE and HRF datasets. Experimental analysis showed that the average accuracy and specificity on the DRIVE dataset were 96.03% and 98.51%. The mean accuracy and specificity on the STARE dataset were 95.46% and 97.85%. Precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high-performance levels. The method proposed in this paper is superior to the current mainstream methods.
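Grey relational analysis, on which both the NADF-GRA filter and the TS-GRA threshold model are built, scores how closely each comparison sequence tracks a reference sequence. Below is a minimal sketch of the classical Deng grey relational coefficient and grade; it is a textbook formulation rather than the authors' specific filtering or thresholding design, and the distinguishing coefficient rho = 0.5 is only the conventional default.

```python
import numpy as np

def grey_relational_grade(reference: np.ndarray, comparisons: np.ndarray,
                          rho: float = 0.5) -> np.ndarray:
    """Deng's grey relational grade of each comparison sequence w.r.t. a reference.

    reference:   shape (n,)   -- reference sequence
    comparisons: shape (m, n) -- m comparison sequences
    rho:         distinguishing coefficient in (0, 1], conventionally 0.5
    """
    diff = np.abs(comparisons - reference)                 # absolute differences, (m, n)
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)   # grey relational coefficients
    return coeff.mean(axis=1)                              # grade = mean coefficient per sequence

# Toy usage: which candidate intensity profile best matches the reference profile?
ref = np.array([0.9, 0.8, 0.85, 0.9])
cand = np.array([[0.88, 0.79, 0.86, 0.91],
                 [0.10, 0.20, 0.15, 0.30]])
print(grey_relational_grade(ref, cand))   # the first candidate scores much higher
```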
Affiliation(s)
- Yating Wang
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Hongjun Li
- School of Information Science and Technology, Nantong University, Nantong 226019, China
10
Hu D, Li H, Liu H, Oguz I. Domain generalization for retinal vessel segmentation via Hessian-based vector field. Med Image Anal 2024; 95:103164. PMID: 38615431. DOI: 10.1016/j.media.2024.103164.
Abstract
Blessed by vast amounts of data, learning-based methods have achieved remarkable performance in countless tasks in computer vision and medical image analysis. Although these deep models can simulate highly nonlinear mapping functions, they are not robust with regard to the domain shift of input data. This is a significant concern that impedes the large-scale deployment of deep models in medical images since they have inherent variation in data distribution due to the lack of imaging standardization. Therefore, researchers have explored many domain generalization (DG) methods to alleviate this problem. In this work, we introduce a Hessian-based vector field that can effectively model the tubular shape of vessels, which is an invariant feature for data across various distributions. The vector field serves as a good embedding feature to take advantage of the self-attention mechanism in a vision transformer. We design paralleled transformer blocks that stress the local features with different scales. Furthermore, we present a novel data augmentation method that introduces perturbations in image style while the vessel structure remains unchanged. In experiments conducted on public datasets of different modalities, we show that our model achieves superior generalizability compared with the existing algorithms. Our code and trained model are publicly available at https://github.com/MedICL-VU/Vector-Field-Transformer.
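The Hessian-based vector field above builds on a standard observation: at a vessel pixel, the eigenvector of the image Hessian whose eigenvalue has the largest magnitude points across the vessel, while the orthogonal direction runs along it. The sketch below shows that eigen-analysis for a 2D image; the Gaussian scale, sign conventions, and the way eigenvectors are turned into an embedding are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigen_field(image: np.ndarray, sigma: float = 2.0):
    """Per-pixel Hessian eigenvalues/eigenvectors of a Gaussian-smoothed 2D image.

    Returns eigenvalues ordered by absolute value (|l1| <= |l2|) and the unit
    eigenvector of l2, which points across tube-like (vessel) structures.
    """
    # Second-order Gaussian derivatives approximate the Hessian entries.
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))

    # Closed-form eigen-decomposition of the symmetric 2x2 Hessian.
    trace_half = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = trace_half - root, trace_half + root

    # Order by magnitude so that l2 carries the dominant (cross-vessel) response.
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)

    # Eigenvector associated with l2 (direction of strongest curvature).
    vx, vy = hxy, l2 - hxx
    norm = np.sqrt(vx ** 2 + vy ** 2) + 1e-12
    return l1, l2, np.stack([vx / norm, vy / norm], axis=0)
```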
Affiliation(s)
- Dewei Hu
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Hao Li
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Han Liu
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Ipek Oguz
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA; Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA.
11
Kreitner L, Paetzold JC, Rauch N, Chen C, Hagag AM, Fayed AE, Sivaprasad S, Rausch S, Weichsel J, Menze BH, Harders M, Knier B, Rueckert D, Menten MJ. Synthetic Optical Coherence Tomography Angiographs for Detailed Retinal Vessel Segmentation Without Human Annotations. IEEE Trans Med Imaging 2024; 43:2061-2073. PMID: 38224512. DOI: 10.1109/tmi.2024.3354408.
Abstract
Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.
12
Nan Y, Ser JD, Tang Z, Tang P, Xing X, Fang Y, Herrera F, Pedrycz W, Walsh S, Yang G. Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation. IEEE Trans Neural Netw Learn Syst 2024; 35:7391-7404. PMID: 37204954. DOI: 10.1109/tnnls.2023.3269223.
Abstract
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computerized tomography (CT) images. However, some small-sized airway branches (e.g., bronchus and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make the computational module prone to discontinuous and false-negative predictions, especially for cohorts with different lung diseases. The attention mechanism has shown the capacity to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by the fuzzy attention layer, should be an escalated solution for better generalization and robustness. This article presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network (FANN) and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from the existing attention mechanism, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features in different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency, generalization, and robustness of the proposed method have been proved by training on normal lung disease while testing on datasets of lung cancer, COVID-19, and pulmonary fibrosis.
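The core of the fuzzy attention layer described above is a learnable Gaussian membership function applied per channel to the voxel values of a feature map, giving each channel its own soft, uncertainty-aware gate. The PyTorch module below is a minimal sketch of that idea; the initialisation, the number of membership functions, and how the gate is combined with the features are simplifications, not the FANN implementation.

```python
import torch
import torch.nn as nn

class ChannelFuzzyAttention(nn.Module):
    """Channel-specific fuzzy gate built from a learnable Gaussian membership function."""

    def __init__(self, channels: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))         # membership centres
        self.log_sigma = nn.Parameter(torch.zeros(1, channels, 1, 1, 1))  # log widths (kept positive)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, D, H, W) feature map from a 3D CT encoder.
        sigma = self.log_sigma.exp()
        membership = torch.exp(-((x - self.mu) ** 2) / (2 * sigma ** 2))  # Gaussian membership in [0, 1]
        return x * membership                                             # gate features by membership

# Toy usage on a small 3D feature map.
attn = ChannelFuzzyAttention(channels=8)
features = torch.randn(1, 8, 4, 16, 16)
print(attn(features).shape)  # torch.Size([1, 8, 4, 16, 16])
```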
13
Li M, Huang K, Xu Q, Yang J, Zhang Y, Ji Z, Xie K, Yuan S, Liu Q, Chen Q. OCTA-500: A retinal dataset for optical coherence tomography angiography study. Med Image Anal 2024; 93:103092. PMID: 38325155. DOI: 10.1016/j.media.2024.103092.
Abstract
Optical coherence tomography angiography (OCTA) is a novel imaging modality that has been widely utilized in ophthalmology and neuroscience studies to observe retinal vessels and microvascular systems. However, publicly available OCTA datasets remain scarce. In this paper, we introduce the largest and most comprehensive OCTA dataset dubbed OCTA-500, which contains OCTA imaging under two fields of view (FOVs) from 500 subjects. The dataset provides rich images and annotations including two modalities (OCT/OCTA volumes), six types of projections, four types of text labels (age/gender/eye/disease) and seven types of segmentation labels (large vessel/capillary/artery/vein/2D FAZ/3D FAZ/retinal layers). Then, we propose a multi-object segmentation task called CAVF, which integrates capillary segmentation, artery segmentation, vein segmentation, and FAZ segmentation under a unified framework. In addition, we optimize the 3D-to-2D image projection network (IPN) to IPN-V2 to serve as one of the segmentation baselines. Experimental results demonstrate that IPN-V2 achieves an about 10% mIoU improvement over IPN on CAVF task. Finally, we further study the impact of several dataset characteristics: the training set size, the model input (OCT/OCTA, 3D volume/2D projection), the baseline networks, and the diseases. The dataset and code are publicly available at: https://ieee-dataport.org/open-access/octa-500.
Affiliation(s)
- Mingchao Li
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Kun Huang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Qiuzhuo Xu
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Jiadong Yang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Yuhan Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Zexuan Ji
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
- Keren Xie
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, NanJing 210029, China.
- Songtao Yuan
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, NanJing 210029, China.
- Qinghuai Liu
- Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, NanJing 210029, China.
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, NanJing 210094, China.
14
Untracht GR, Durkee MS, Zhao M, Kwok-Cheung Lam A, Sikorski BL, Sarunic MV, Andersen PE, Sampson DD, Chen FK, Sampson DM. Towards standardising retinal OCT angiography image analysis with open-source toolbox OCTAVA. Sci Rep 2024; 14:5979. PMID: 38472220. PMCID: PMC10933365. DOI: 10.1038/s41598-024-53501-6.
Abstract
Quantitative assessment of retinal microvasculature in optical coherence tomography angiography (OCTA) images is important for studying, diagnosing, monitoring, and guiding the treatment of ocular and systemic diseases. However, the OCTA user community lacks universal and transparent image analysis tools that can be applied to images from a range of OCTA instruments and provide reliable and consistent microvascular metrics from diverse datasets. We present a retinal extension to the OCTA Vascular Analyser (OCTAVA) that addresses the challenges of providing robust, easy-to-use, and transparent analysis of retinal OCTA images. OCTAVA is a user-friendly, open-source toolbox that can analyse retinal OCTA images from various instruments. The toolbox delivers seven microvascular metrics for the whole image or subregions and six metrics characterising the foveal avascular zone. We validate OCTAVA using images collected by four commercial OCTA instruments demonstrating robust performance across datasets from different instruments acquired at different sites from different study cohorts. We show that OCTAVA delivers values for retinal microvascular metrics comparable to the literature and reduces their variation between studies compared to their commercial equivalents. By making OCTAVA publicly available, we aim to expand standardised research and thereby improve the reproducibility of quantitative analysis of retinal microvascular imaging. Such improvements will help to better identify more reliable and sensitive biomarkers of ocular and systemic diseases.
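Two of the simplest microvascular metrics of the kind such toolboxes report are vessel area density (the fraction of the analysed region covered by the binarised vasculature) and FAZ circularity (4·pi·area divided by the squared perimeter). The sketch below shows generic versions of both; it is illustrative only and is not OCTAVA's implementation, whose exact metric definitions are documented with the toolbox.

```python
import numpy as np
from skimage import measure

def vessel_area_density(vessel_mask: np.ndarray) -> float:
    """Fraction of pixels labelled as vessel in a binary en-face segmentation."""
    return float(vessel_mask.astype(bool).mean())

def faz_circularity(faz_mask: np.ndarray) -> float:
    """Circularity of the foveal avascular zone: 4*pi*area / perimeter^2 (1.0 = perfect circle)."""
    props = measure.regionprops(faz_mask.astype(np.uint8))[0]
    return float(4.0 * np.pi * props.area / (props.perimeter ** 2))

# Toy usage with a square "FAZ" mask, purely for illustration.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:44, 20:44] = 1
print(vessel_area_density(mask), faz_circularity(mask))
```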
Affiliation(s)
- Gavrielle R Untracht
- Department of Health Technology, Technical University of Denmark, 2800, Kongens Lyngby, Denmark
- School of Biosciences, The University of Surrey, Guildford, GU27XH, UK
- Mei Zhao
- Centre for Myopia Research, School of Optometry, Faculty of Health and Social Science, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
- Andrew Kwok-Cheung Lam
- Centre for Myopia Research, School of Optometry, Faculty of Health and Social Science, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
- Bartosz L Sikorski
- Department of Ophthalmology, Nicolaus Copernicus University, 85-090, Bydgoszcz, Poland
- International Center for Translational Eye Research (ICTER), Institute of Physical Chemistry, Polish Academy of Sciences, Kasprzaka 44/52, 01-224, Warsaw, Poland
- Marinko V Sarunic
- Department of Medical Physics and Biomedical Engineering, University College London, London, WC1E6BT, UK
- Institute of Ophthalmology, University College London, London, EC1V2PD, UK
- Peter E Andersen
- Department of Health Technology, Technical University of Denmark, 2800, Kongens Lyngby, Denmark
- David D Sampson
- School of Computer Science and Electronic Engineering, The University of Surrey, Guildford, GU27XH, UK
- Fred K Chen
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Perth, WA, 6009, Australia
- Department of Ophthalmology, Royal Perth Hospital, Perth, WA, 6000, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, VIC, 3002, Australia
- Danuta M Sampson
- School of Biosciences, The University of Surrey, Guildford, GU27XH, UK.
- Institute of Ophthalmology, University College London, London, EC1V2PD, UK.
- Centre for Ophthalmology and Visual Science (Incorporating Lions Eye Institute), The University of Western Australia, Perth, WA, 6009, Australia.
- Department of Optometry, School of Allied Health, The University of Western Australia, Perth, WA, 6009, Australia.
15
Shao Y, Zhou K, Zhang L. CSSNet: Cascaded spatial shift network for multi-organ segmentation. Comput Biol Med 2024; 170:107955. PMID: 38215618. DOI: 10.1016/j.compbiomed.2024.107955.
Abstract
Multi-organ segmentation is vital for clinical diagnosis and treatment. Although CNNs and their extensions are popular in organ segmentation, they suffer from limited local receptive fields. In contrast, MultiLayer-Perceptron-based models (e.g., MLP-Mixer) have a global receptive field. However, these MLP-based models employ fully connected layers with many parameters and tend to overfit on sample-deficient medical image datasets. Therefore, we propose a Cascaded Spatial Shift Network, CSSNet, for multi-organ segmentation. Specifically, we design a novel cascaded spatial shift block that reduces the number of model parameters and aggregates feature segments in a cascaded way for efficient and effective feature extraction. We then propose a feature refinement network to aggregate multi-scale features with location information and to enhance the multi-scale features along the channel and spatial axes, yielding a high-quality feature map. Finally, we employ a self-attention-based fusion strategy to focus on the discriminative feature information for better multi-organ segmentation performance. Experimental results on the Synapse (multiple organs) and LiTS (liver & tumor) datasets demonstrate that our CSSNet achieves promising segmentation performance compared with CNN, MLP, and Transformer models. The source code will be available at https://github.com/zkyseu/CSSNet.
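The spatial shift operation that the cascaded block builds on splits the channels of a feature map into groups and displaces each group by one pixel in a different direction, letting a plain MLP mix information from neighbouring locations. The sketch below is the widely used four-direction variant, written for clarity; the cascaded grouping and parameter-reduction details specific to CSSNet are not reproduced here.

```python
import torch

def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    """Shift four channel groups of an NCHW feature map left/right/up/down by one pixel."""
    out = x.clone()
    c = x.size(1) // 4
    out[:, 0 * c:1 * c, :, 1:] = x[:, 0 * c:1 * c, :, :-1]   # shift right
    out[:, 1 * c:2 * c, :, :-1] = x[:, 1 * c:2 * c, :, 1:]   # shift left
    out[:, 2 * c:3 * c, 1:, :] = x[:, 2 * c:3 * c, :-1, :]   # shift down
    out[:, 3 * c:4 * c, :-1, :] = x[:, 3 * c:4 * c, 1:, :]   # shift up
    return out

# Toy usage: shift an 8-channel feature map; shape is preserved.
features = torch.randn(1, 8, 16, 16)
print(spatial_shift(features).shape)  # torch.Size([1, 8, 16, 16])
```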
Affiliation(s)
- Yeqin Shao
- School of Transportation, Nantong University, Jiangsu, 226019, China.
- Kunyang Zhou
- School of Zhangjian, Nantong University, Jiangsu, 226019, China
- Lichi Zhang
- School of Biomedical Engineering, Shanghai Jiaotong University, Shanghai, 200240, China
16
Zhang Y, Yu M, Tong C, Zhao Y, Han J. CA-UNet Segmentation Makes a Good Ischemic Stroke Risk Prediction. Interdiscip Sci 2024; 16:58-72. PMID: 37626263. DOI: 10.1007/s12539-023-00583-x.
Abstract
Stroke remains the world's second leading cause of death and the third leading cause of death and disability combined. Ischemic stroke is a type of stroke for which early detection and treatment are the keys to prevention. However, owing to privacy constraints and labeling difficulties, there are only a few studies on the intelligent automatic diagnosis of stroke or ischemic stroke, and the results are unsatisfactory. Therefore, we collected data and propose a 3D carotid Computed Tomography Angiography (CTA) image segmentation model called CA-UNet for fully automated extraction of carotid arteries. We explore the number of down-sampling steps applicable to carotid segmentation and design a multi-scale loss function to resolve the loss of detailed features during down-sampling. Moreover, based on CA-UNet, we propose an ischemic stroke risk prediction model that predicts patient risk using 3D CTA images, electronic medical records, and medical history. We have validated the efficacy of our segmentation and prediction models through comparison tests. Our method can provide reliable diagnoses and results that benefit patients and medical professionals.
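A multi-scale loss of the kind described above is usually realised by attaching auxiliary heads to the decoder, downsampling the ground truth to each head's resolution, and summing a weighted per-scale term. The fragment below is a generic sketch of that pattern using a soft Dice term; the loss type, weights, and number of scales are assumptions, not the exact loss used for CA-UNet.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for probability maps and binary targets of the same shape."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def multiscale_dice_loss(preds, target: torch.Tensor,
                         weights=(1.0, 0.5, 0.25)) -> torch.Tensor:
    """Weighted sum of Dice losses over decoder outputs at several resolutions.

    preds:  list of probability maps, e.g. shapes (N, 1, D, H, W) at full, 1/2, 1/4 scale
    target: full-resolution binary mask of shape (N, 1, D, H, W)
    """
    total = torch.zeros((), device=target.device)
    for weight, pred in zip(weights, preds):
        # Downsample the ground truth to the resolution of this decoder output.
        scaled_target = F.interpolate(target.float(), size=pred.shape[2:], mode="nearest")
        total = total + weight * dice_loss(pred, scaled_target)
    return total
```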
Affiliation(s)
- Yuqi Zhang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Mengbo Yu
- School of Computer Science and Engineering, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Chao Tong
- School of Computer Science and Engineering, Beihang University, Beijing, China.
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China.
- Yanqing Zhao
- Department of Interventional Radiology and Vascular Surgery, Peking University Third Hospital, Beijing, China
- Jintao Han
- Department of Interventional Radiology and Vascular Surgery, Peking University Third Hospital, Beijing, China
17
Xie J, Yi Q, Wu Y, Zheng Y, Liu Y, Macerollo A, Fu H, Xu Y, Zhang J, Behera A, Fan C, Frangi AF, Liu J, Lu Q, Qi H, Zhao Y. Deep segmentation of OCTA for evaluation and association of changes of retinal microvasculature with Alzheimer's disease and mild cognitive impairment. Br J Ophthalmol 2024; 108:432-439. PMID: 36596660. PMCID: PMC10894818. DOI: 10.1136/bjo-2022-321399.
Abstract
BACKGROUND Optical coherence tomography angiography (OCTA) enables fast and non-invasive high-resolution imaging of retinal microvasculature and is suggested as a potential tool for the early detection of retinal microvascular changes in Alzheimer's Disease (AD). We developed a standardised OCTA analysis framework and compared the extracted parameters among controls and AD/mild cognitive impairment (MCI) in a cross-sectional study. METHODS We defined and extracted geometrical parameters of the retinal microvasculature at different retinal layers and in the foveal avascular zone (FAZ) from segmented OCTA images obtained using well-validated state-of-the-art deep learning models. We studied these parameters in 158 subjects (62 healthy controls, 55 AD and 41 MCI) using logistic regression to determine their potential in predicting the status of our subjects. RESULTS In the AD group, there was a significant decrease in vessel area and length densities in the inner vascular complexes (IVC) compared with controls. The number of vascular bifurcations in AD was also significantly lower than in healthy controls. The MCI group demonstrated a decrease in vascular area, length densities, vascular fractal dimension and the number of bifurcations in both the superficial vascular complexes (SVC) and the IVC compared with controls. A larger vascular tortuosity in the IVC, and a larger roundness of the FAZ in the SVC, could also be observed in MCI compared with controls. CONCLUSION Our study demonstrates the applicability of OCTA for the diagnosis of AD and MCI, and provides a standard tool for future clinical service and research. Biomarkers from retinal OCTA images can provide useful information for clinical decision-making and diagnosis of AD and MCI.
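Among the geometrical parameters listed above, vessel tortuosity has a particularly simple generic definition: the arc length of a vessel segment divided by the straight-line distance between its endpoints. A small sketch over an ordered list of centreline points is given below; it is a textbook arc-to-chord formulation, not necessarily the exact tortuosity measure used in this study.

```python
import numpy as np

def vessel_tortuosity(centerline: np.ndarray) -> float:
    """Arc-to-chord tortuosity of one vessel segment.

    centerline: (n, 2) array of ordered (y, x) centreline coordinates.
    Returns 1.0 for a perfectly straight segment, larger values for curvier vessels.
    """
    steps = np.diff(centerline, axis=0)
    arc_length = np.linalg.norm(steps, axis=1).sum()               # summed point-to-point distances
    chord_length = np.linalg.norm(centerline[-1] - centerline[0])  # endpoint-to-endpoint distance
    return float(arc_length / (chord_length + 1e-12))

# Toy usage: a gently curved segment is slightly more tortuous than 1.0.
points = np.array([[0, 0], [1, 1], [2, 1], [3, 2], [4, 2]], dtype=float)
print(round(vessel_tortuosity(points), 3))
```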
Affiliation(s)
- Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Quanyong Yi
- Ningbo Eye Hospital, Ningbo, Zhejiang, China
- Yufei Wu
- Department of Ophthalmology, The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, UK
- Antonella Macerollo
- Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, UK
- Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Yanwu Xu
- Intelligent Healthcare Unit, Baidu Inc, Beijing, Haidian, China
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Ardhendu Behera
- Department of Computer Science, Edge Hill University, Ormskirk, UK
- Chenlei Fan
- Department of Neurology, The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Qinkang Lu
- Department of Ophthalmology, Wenzhou Medical University, Wenzhou, Zhejiang, China
- Hong Qi
- Ophthalmology, Peking University Third Hospital, Haidian, Beijing, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
18
Yang C, Li B, Xiao Q, Bai Y, Li Y, Li Z, Li H, Li H. LA-Net: layer attention network for 3D-to-2D retinal vessel segmentation in OCTA images. Phys Med Biol 2024; 69:045019. PMID: 38237179. DOI: 10.1088/1361-6560/ad2011.
Abstract
Objective. Retinal vessel segmentation from optical coherence tomography angiography (OCTA) volumes is significant for analyzing blood supply structures and diagnosing ophthalmic diseases. However, accurate retinal vessel segmentation in 3D OCTA remains challenging due to the interference of choroidal blood flow signals and the variations in retinal vessel structure. Approach. This paper proposes a layer attention network (LA-Net) for 3D-to-2D retinal vessel segmentation. The network comprises a 3D projection path and a 2D segmentation path. The key component in the 3D path is the proposed multi-scale layer attention module, which effectively learns the layer features of OCT and OCTA to attend to the retinal vessel layer while suppressing the choroidal vessel layer. This module also efficiently captures 3D multi-scale information for improved semantic understanding during projection. In the 2D path, a reverse boundary attention module is introduced to explore and preserve boundary and shape features of retinal vessels by focusing on non-salient regions in deep features. Main results. Experimental results on two subsets of the OCTA-500 dataset showed that our method achieves advanced segmentation performance with Dice similarity coefficients of 93.04% and 89.74%, respectively. Significance. The proposed network provides reliable 3D-to-2D segmentation of retinal vessels, with potential for application in various segmentation tasks that involve projecting the input image. Implementation code: https://github.com/y8421036/LA-Net.
Affiliation(s)
- Chaozhi Yang
- College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Bei Li
- Beijing Hospital, Institute of Geriatric Medicine, Chinese Academy of Medical Science, Beijing 100730, People's Republic of China
- Qian Xiao
- College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Yun Bai
- College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Yachuan Li
- College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Zongmin Li
- College of Computer Science and Technology, China University of Petroleum (East China), Qingdao 266580, People's Republic of China
- Hongyi Li
- Beijing Hospital, Institute of Geriatric Medicine, Chinese Academy of Medical Science, Beijing 100730, People's Republic of China
- Hua Li
- Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, People's Republic of China
19
Gouzou D, Taimori A, Haloubi T, Finlayson N, Wang Q, Hopgood JR, Vallejo M. Applications of machine learning in time-domain fluorescence lifetime imaging: a review. Methods Appl Fluoresc 2024; 12:022001. PMID: 38055998. PMCID: PMC10851337. DOI: 10.1088/2050-6120/ad12f7.
Abstract
Many medical imaging modalities have benefited from recent advances in Machine Learning (ML), specifically in deep learning, such as neural networks. Computers can be trained to investigate and enhance medical imaging methods without using valuable human resources. In recent years, Fluorescence Lifetime Imaging (FLIm) has received increasing attention from the ML community. FLIm goes beyond conventional spectral imaging, providing additional lifetime information, and could lead to optical histopathology supporting real-time diagnostics. However, most current studies do not use the full potential of machine/deep learning models. As a developing imaging modality, FLIm data are not easily obtainable, which, coupled with an absence of standardisation, is holding back the research needed to develop models that could advance automated diagnosis and help promote FLIm. In this paper, we describe recent developments that improve FLIm image quality, specifically time-domain systems, and we summarise sensing, signal-to-noise analysis and the advances in registration and low-level tracking. We review the two main applications of ML for FLIm: lifetime estimation and image analysis through classification and segmentation. We suggest a course of action to improve the quality of ML studies applied to FLIm. Our final goal is to promote FLIm and attract more ML practitioners to explore the potential of lifetime imaging.
Collapse
Affiliation(s)
- Dorian Gouzou
- Dorian Gouzou and Marta Vallejo are with Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot Watt University, Edinburgh, EH14 4AS, United Kingdom
| | - Ali Taimori
- Tarek Haloubi, Ali Taimori, and James R. Hopgood are with Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
| | - Tarek Haloubi
- Tarek Haloubi, Ali Taimori, and James R. Hopgood are with Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
| | - Neil Finlayson
- Neil Finlayson is with Institute for Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh EH9 3FF, United Kingdom
| | - Qiang Wang
- Qiang Wang is with Centre for Inflammation Research, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom
| | - James R Hopgood
- Tarek Haloubi, Ali Taimori, and James R. Hopgood are with Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
| | - Marta Vallejo
- Dorian Gouzou and Marta Vallejo are with Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot Watt University, Edinburgh, EH14 4AS, United Kingdom
| |
Collapse
|
20
|
Chen S, Fan J, Ding Y, Geng H, Ai D, Xiao D, Song H, Wang Y, Yang J. PEA-Net: A progressive edge information aggregation network for vessel segmentation. Comput Biol Med 2024; 169:107766. [PMID: 38150885 DOI: 10.1016/j.compbiomed.2023.107766] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 10/18/2023] [Accepted: 11/21/2023] [Indexed: 12/29/2023]
Abstract
Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains a challenge due to issues such as uneven contrast and background noise. Existing methods primarily focus on segmenting individual pixels and often fail to consider vessel features and morphology. As a result, these methods often produce fragmented results and misidentify vessel-like background noise, leading to missing and outlier points in the overall segmentation. To address these issues, this paper proposes a novel approach called the progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise. This is achieved by combining vessel morphological features obtained from different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity. This module complements the decoding path by combining features from previous iterations and the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with nonsalient information provided by the PCF. This integration enhances both edge and segmentation information. The features in the skip connection and decoding path are iteratively updated to progressively aggregate fine structure information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that the proposed PEA-Net model and strategy achieve optimal performance in both pixel-level and topology-level metrics.
Collapse
Affiliation(s)
- Sigeng Chen
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
| | - Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.
| | - Yang Ding
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
| | - Haixiao Geng
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
| | - Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
| | - Deqiang Xiao
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
| | - Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
| | - Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China.
| | - Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China.
| |
Collapse
|
21
|
Ma F, Li S, Wang S, Guo Y, Wu F, Meng J, Dai C. Deep-learning segmentation method for optical coherence tomography angiography in ophthalmology. JOURNAL OF BIOPHOTONICS 2024; 17:e202300321. [PMID: 37801660 DOI: 10.1002/jbio.202300321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Revised: 09/28/2023] [Accepted: 10/04/2023] [Indexed: 10/08/2023]
Abstract
PURPOSE The optic disc and the macula are two major anatomical structures in the human eye. The optic disc is associated with the optic nerve, whereas macular disease mainly involves degeneration and impaired function of the macular region. Reliable optic disc and macula segmentation is necessary for the automated screening of retinal diseases. METHODS A swept-source OCTA system was designed to capture OCTA images of human eyes. To address these segmentation tasks, we first constructed a new Optic Disc and Macula in fundus Image with optical coherence tomography angiography (OCTA) dataset (ODMI). Second, we proposed a Coarse and Fine Attention-Based Network (CFANet). RESULTS The five metrics of our method on ODMI are 98.91%, 98.47%, 89.77%, 98.49%, and 89.77%, respectively. CONCLUSIONS Experimental results show that our CFANet achieves good performance on segmentation of the optic disc and macula in OCTA.
Collapse
Affiliation(s)
- Fei Ma
- School of Computer Science, Qufu Normal University, Shandong, China
| | - Sien Li
- School of Computer Science, Qufu Normal University, Shandong, China
| | - Shengbo Wang
- School of Computer Science, Qufu Normal University, Shandong, China
| | - Yanfei Guo
- School of Computer Science, Qufu Normal University, Shandong, China
| | - Fei Wu
- School of Automation, Nanjing University of Posts and Telecommunications, Jiangsu, China
| | - Jing Meng
- School of Computer Science, Qufu Normal University, Shandong, China
| | - Cuixia Dai
- College Science, Shanghai Institute of Technology, Shanghai, China
| |
Collapse
|
22
|
Ge C, Yu X, Yuan M, Fan Z, Chen J, Shum PP, Liu L. Self-supervised Self2Self denoising strategy for OCT speckle reduction with a single noisy image. BIOMEDICAL OPTICS EXPRESS 2024; 15:1233-1252. [PMID: 38404302 PMCID: PMC10890874 DOI: 10.1364/boe.515520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 01/13/2024] [Accepted: 01/16/2024] [Indexed: 02/27/2024]
Abstract
Optical coherence tomography (OCT) inevitably suffers from the influence of speckles originating from multiply scattered photons owing to its low-coherence interferometry property. Although various deep learning schemes have been proposed for OCT despeckling, they typically require ground-truth images, which are difficult to collect in clinical practice. To alleviate the influence of speckles without requiring ground-truth images, this paper presents a self-supervised deep learning scheme, namely the Self2Self strategy (S2Snet), for OCT despeckling using a single noisy image. The main deep learning architecture is the Self2Self network, with its partial convolution replaced by a gated convolution layer. Both the input images and their Bernoulli sampling instances are first adopted as network input, and then a devised loss function is integrated into the network to remove the background noise. Finally, the denoised output is estimated as the average of multiple predicted outputs. Experiments with various OCT datasets are conducted to verify the effectiveness of the proposed S2Snet scheme. Comparisons with existing methods demonstrate that S2Snet not only outperforms existing self-supervised deep learning methods but also achieves better performance than non-deep-learning ones in different cases. Specifically, S2Snet achieves improvements of 3.41% and 2.37% in PSNR and SSIM, respectively, compared to the original Self2Self network, while the improvements reach 19.9% and 22.7% compared with the well-known non-deep-learning NWSR method.
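To make the Bernoulli-sampling idea concrete, the following NumPy sketch shows how input pixels can be masked at random and how several predictions are averaged into the final estimate. The `model` function is a hypothetical placeholder, not the S2Snet network, and the keep probability and sample count are assumptions.

```python
# Sketch of the Bernoulli-sampling / prediction-averaging idea used by Self2Self-style
# denoisers. `model` is a placeholder stand-in for a trained network that would predict
# the masked pixels from the visible ones.
import numpy as np

rng = np.random.default_rng(0)

def model(masked_image, mask):
    # Placeholder: a trained network would go here; this simply returns its input.
    return masked_image

def self2self_predict(noisy, n_samples=50, keep_prob=0.7):
    preds = []
    for _ in range(n_samples):
        mask = rng.random(noisy.shape) < keep_prob   # Bernoulli keep-mask
        preds.append(model(noisy * mask, mask))
    return np.mean(preds, axis=0)                    # average of multiple predictions

noisy_bscan = rng.normal(0.5, 0.1, size=(256, 256))  # synthetic noisy B-scan
print(self2self_predict(noisy_bscan).shape)
```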
Collapse
Affiliation(s)
- Chenkun Ge
- School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710072, China
| | - Xiaojun Yu
- School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710072, China
- Research & Development Institute of Northwestern Polytechnical University in Shenzhen, Shenzhen, Guangzhou, 51800, China
| | - Miao Yuan
- School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710072, China
| | - Zeming Fan
- School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710072, China
| | - Jinna Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
| | - Perry Ping Shum
- School of Automation, Northwestern Polytechnical University, Xi’an, Shaanxi, 710072, China
| | - Linbo Liu
- School of Electrical and Electronic Engineering, Nanyang Technological University, 639798, Singapore
| |
Collapse
|
23
|
Li Z, Huang G, Zou B, Chen W, Zhang T, Xu Z, Cai K, Wang T, Sun Y, Wang Y, Jin K, Huang X. Segmentation of Low-Light Optical Coherence Tomography Angiography Images under the Constraints of Vascular Network Topology. SENSORS (BASEL, SWITZERLAND) 2024; 24:774. [PMID: 38339491 PMCID: PMC10856982 DOI: 10.3390/s24030774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Revised: 12/27/2023] [Accepted: 01/03/2024] [Indexed: 02/12/2024]
Abstract
Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis.
Collapse
Affiliation(s)
- Zhi Li
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Gaopeng Huang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Binfeng Zou
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Wenhao Chen
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Tianyun Zhang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Zhaoyang Xu
- Department of Paediatrics, University of Cambridge, Cambridge CB2 1TN, UK;
| | - Kunyan Cai
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China;
| | - Tingyu Wang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
| | - Yaoqi Sun
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
- Lishui Institute, Hangzhou Dianzi University, Lishui 323000, China
| | - Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou 310018, China;
| | - Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310027, China;
| | - Xingru Huang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China; (Z.L.); (G.H.); (B.Z.); (W.C.); (T.Z.); (T.W.); (Y.S.)
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E3 4BL, UK
| |
Collapse
|
24
|
Pradeep K, Jeyakumar V, Bhende M, Shakeel A, Mahadevan S. Artificial intelligence and hemodynamic studies in optical coherence tomography angiography for diabetic retinopathy evaluation: A review. Proc Inst Mech Eng H 2024; 238:3-21. [PMID: 38044619 DOI: 10.1177/09544119231213443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2023]
Abstract
Diabetic retinopathy (DR) is a rapidly emerging retinal abnormality worldwide, which can cause significant vision loss by disrupting the vascular structure in the retina. Recently, optical coherence tomography angiography (OCTA) has emerged as an effective imaging tool for diagnosing and monitoring DR. OCTA produces high-quality 3-dimensional images and provides deeper visualization of retinal vessel capillaries and plexuses. The clinical relevance of OCTA in detecting, classifying, and planning therapeutic procedures for DR patients has been highlighted in various studies. Quantitative indicators obtained from OCTA, such as blood vessel segmentation of the retina, foveal avascular zone (FAZ) extraction, retinal blood vessel density, blood velocity, flow rate, capillary vessel pressure, and retinal oxygen extraction, have been identified as crucial hemodynamic features for screening DR using computer-aided systems in artificial intelligence (AI). AI has the potential to assist physicians and ophthalmologists in developing new treatment options. In this review, we explore how OCTA has impacted the future of DR screening and early diagnosis. It also focuses on how analysis methods have evolved over time in clinical trials. The future of OCTA imaging and its continued use in AI-assisted analysis is promising and will undoubtedly enhance the clinical management of DR.
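Among the OCTA indicators listed above, vessel density is the simplest to compute. The sketch below assumes a binary vessel mask and an optional region-of-interest mask; it is a generic illustration, not a method taken from the review.

```python
# Generic vessel-density computation from a binary OCTA vessel mask: the fraction of
# pixels inside the region of interest that are classified as vessel.
import numpy as np

def vessel_density(vessel_mask, roi_mask=None):
    vessel = vessel_mask.astype(bool)
    roi = np.ones_like(vessel) if roi_mask is None else roi_mask.astype(bool)
    return float(vessel[roi].mean())

# toy example: 200x200 en face map with a simulated vessel grid
mask = np.zeros((200, 200), dtype=bool)
mask[::10, :] = True
mask[:, ::10] = True
print(f"vessel density: {vessel_density(mask):.3f}")
```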
Collapse
Affiliation(s)
- K Pradeep
- Department of Biomedical Engineering, Chennai Institute of Technology, Chennai, Tamil Nadu, India
| | - Vijay Jeyakumar
- Department of Biomedical Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, Tamil Nadu, India
| | - Muna Bhende
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
| | - Areeba Shakeel
- Vitreoretina Department, Sankara Nethralaya Medical Research Foundation, Chennai, Tamil Nadu, India
| | - Shriraam Mahadevan
- Department of Endocrinology, Sri Ramachandra Institute of Higher Education and Research, Chennai, Tamil Nadu, India
| |
Collapse
|
25
|
Li M, Huang K, Zeng C, Chen Q, Zhang W. Visualization and quantization of 3D retinal vessels in OCTA images. OPTICS EXPRESS 2024; 32:471-481. [PMID: 38175076 DOI: 10.1364/oe.504877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 12/06/2023] [Indexed: 01/05/2024]
Abstract
Optical coherence tomography angiography (OCTA) has been increasingly used in the analysis of ophthalmic diseases in recent years. Automatic vessel segmentation in 2D OCTA projection images is commonly used in clinical practice. However, OCTA provides a 3D volume of the retinal blood vessels with rich spatial distribution information, and it is incomplete to segment retinal vessels only in 2D projection images. Here, considering that it is difficult to manually label 3D vessels, we introduce a 3D vessel segmentation and reconstruction method for OCTA images with only 2D vessel labels. We implemented 3D vessel segmentation in the OCTA volume using a specially trained 2D vessel segmentation model. The 3D vessel segmentation results are further used to calculate 3D vessel parameters and perform 3D reconstruction. The experimental results on the public dataset OCTA-500 demonstrate that 3D vessel parameters have higher sensitivity to vascular alteration than 2D vessel parameters, which makes it meaningful for clinical analysis. The 3D vessel reconstruction provides vascular visualization in different retinal layers that can be used to monitor the development of retinal diseases. Finally, we also illustrate the use of 3D reconstruction results to determine the relationship between the location of arteries and veins.
Collapse
|
26
|
Wang T, Dai Q. SURVS: A Swin-Unet and game theory-based unsupervised segmentation method for retinal vessel. Comput Biol Med 2023; 166:107542. [PMID: 37826953 DOI: 10.1016/j.compbiomed.2023.107542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 09/02/2023] [Accepted: 09/28/2023] [Indexed: 10/14/2023]
Abstract
Medical images, especially intricate vascular structures, are costly and time-consuming to annotate manually. It is therefore beneficial to investigate an unsupervised method for vessel segmentation, one that circumvents manual annotation yet remains valuable for disease detection. In this study, we design an unsupervised retinal vessel segmentation model based on the Swin-Unet framework and game theory. First, we construct two extreme pseudo-mapping functions by changing the contrast of the images and obtain their corresponding pseudo-masks based on a binary segmentation method and mathematical morphology; we then prove that there exists a mapping function between the pseudo-mappings such that its corresponding mask is closest to the ground truth mask. Second, to acquire the best-predicted mask, we develop a model based on the Swin-Unet framework to solve the optimal mapping function, and introduce an image colorization proxy task to assist the learning of pixel-level feature representations. Third, since the two pseudo-masks are unstable, the predicted mask will inevitably contain errors; inspired by the two-player, non-zero-sum, non-cooperative Neighbor's Collision game in game theory, a game filter is proposed in this paper to reduce the errors in the final predicted mask. Finally, we verify the effectiveness of the presented unsupervised retinal vessel segmentation model on the DRIVE, STARE and CHASE_DB1 datasets, and extensive experiments show that it has clear advantages over image segmentation and conventional unsupervised models.
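The pseudo-mask construction described above (contrast change, binary segmentation, mathematical morphology) can be approximated with standard scikit-image operations. The gamma values, structuring element and object-size threshold below are arbitrary placeholders, not the SURVS settings.

```python
# Rough illustration of building a pseudo-mask: change contrast with a gamma adjustment,
# binarise with Otsu's threshold, then clean the result with morphological operations.
import numpy as np
from skimage import exposure, filters, morphology

def pseudo_mask(image, gamma):
    adjusted = exposure.adjust_gamma(image, gamma)              # contrast change
    binary = adjusted > filters.threshold_otsu(adjusted)        # binary segmentation
    binary = morphology.binary_closing(binary, morphology.disk(1))
    return morphology.remove_small_objects(binary, min_size=30) # morphology clean-up

rng = np.random.default_rng(0)
fundus = rng.random((128, 128))                 # synthetic grayscale image in [0, 1]
low_contrast_mask = pseudo_mask(fundus, gamma=0.5)   # one extreme pseudo-mapping
high_contrast_mask = pseudo_mask(fundus, gamma=2.0)  # the other extreme
print(low_contrast_mask.sum(), high_contrast_mask.sum())
```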
Collapse
Affiliation(s)
- Tianxiang Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
| | - Qun Dai
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China.
| |
Collapse
|
27
|
Ibrahim Y, Xie J, Macerollo A, Sardone R, Shen Y, Romano V, Zheng Y. A Systematic Review on Retinal Biomarkers to Diagnose Dementia from OCT/OCTA Images. J Alzheimers Dis Rep 2023; 7:1201-1235. [PMID: 38025800 PMCID: PMC10657718 DOI: 10.3233/adr-230042] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Accepted: 09/22/2023] [Indexed: 12/01/2023] Open
Abstract
Background Traditional methods for diagnosing dementia are costly, time-consuming, and somewhat invasive. Since the retina shares significant anatomical similarities with the brain, retinal abnormalities detected via optical coherence tomography (OCT) and OCT angiography (OCTA) have been studied as a potential non-invasive diagnostic tool for neurodegenerative disorders; however, which retinal changes are most effective remains to be established and is examined in this review. Objective This study aims to explore the relationship between retinal abnormalities in OCT/OCTA images and cognitive decline, as well as to evaluate the effectiveness of biomarkers in detecting neurodegenerative diseases. Methods A systematic search was conducted on PubMed, Web of Science, and Scopus until December 2022, resulting in 64 papers based on agreed search keywords and inclusion/exclusion criteria. Results The superior peripapillary retinal nerve fiber layer (pRNFL) is a trustworthy biomarker for identifying most Alzheimer's disease (AD) cases; however, it is inefficient when dealing with mild AD and mild cognitive impairment (MCI). The global pRNFL (pRNFL-G) is another reliable biomarker to discriminate frontotemporal dementia from mild AD and healthy controls (HCs) and moderate AD and MCI from HCs, as well as to identify pathological Aβ42/tau in cognitively healthy individuals. Conversely, pRNFL-G fails to detect mild AD and the progression of AD. The average pRNFL thickness variation is considered a viable biomarker to monitor the progression of AD. Finally, the superior and average pRNFL thicknesses are considered consistent for advanced AD but not for early/mild AD. Conclusions Retinal changes may indicate dementia, but further research is needed to confirm the most effective biomarkers for early and mild AD.
Collapse
Affiliation(s)
- Yehia Ibrahim
- Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
| | - Jianyang Xie
- Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
| | - Antonella Macerollo
- Department of Pharmacology and Therapeutics, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, Liverpool, UK
- Department of Neurology, The Walton Centre NHS Foundation Trust, Liverpool, UK
| | - Rodolfo Sardone
- Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
- Statistics and Epidemiology Unit, Local Healthcare Authority of Taranto, Taranto, Italy
| | - Yaochun Shen
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK
| | - Vito Romano
- Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
| | - Yalin Zheng
- Department of Eye and Vision Sciences, University of Liverpool, Liverpool, UK
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
| |
Collapse
|
28
|
Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023; 13:15325. [PMID: 37714881 PMCID: PMC10504307 DOI: 10.1038/s41598-023-42062-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 09/05/2023] [Indexed: 09/17/2023] Open
Abstract
Vessel segmentation in fundus images permits understanding retinal diseases and computing image-based biomarkers. However, manual vessel segmentation is a time-consuming process. Optical coherence tomography angiography (OCT-A) allows direct, non-invasive estimation of retinal vessels. Unfortunately, compared to fundus images, OCT-A cameras are more expensive, less portable, and have a reduced field of view. We present an automated strategy relying on generative adversarial networks to create vascular maps from fundus images without training using manual vessel segmentation maps. Further post-processing used for standard en face OCT-A allows obtaining a vessel segmentation map. We compare our approach to state-of-the-art vessel segmentation algorithms trained on manual vessel segmentation maps and vessel segmentations derived from OCT-A. We evaluate them from an automatic vascular segmentation perspective and as vessel density estimators, i.e., the most common imaging biomarker for OCT-A used in studies. Using OCT-A as a training target over manual vessel delineations yields improved vascular maps for the optic disc area and compares to the best-performing vessel segmentation algorithm in the macular region. This technique could reduce the cost and effort incurred when training vessel segmentation algorithms. To incentivize research in this field, we will make the dataset publicly available to the scientific community.
Collapse
Affiliation(s)
- Ivan Coronado
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Samiksha Pachade
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Emanuele Trucco
- VAMPIRE project, School of Science and Engineering (Computing), University of Dundee, Dundee, Scotland, UK
| | - Rania Abdelkhaleq
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Juntao Yan
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Sergio Salazar-Marioni
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Amanda Jagolino-Cole
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Mozhdeh Bahrainian
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, WI, USA
| | - Roomasa Channa
- Department of Ophthalmology and Visual Sciences, University of Wisconsin-Madison, Madison, WI, USA
| | - Sunil A Sheth
- McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX, USA
| | - Luca Giancardo
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA.
| |
Collapse
|
29
|
Hormel TT, Jia Y. OCT angiography and its retinal biomarkers [Invited]. BIOMEDICAL OPTICS EXPRESS 2023; 14:4542-4566. [PMID: 37791289 PMCID: PMC10545210 DOI: 10.1364/boe.495627] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 07/13/2023] [Accepted: 07/13/2023] [Indexed: 10/05/2023]
Abstract
Optical coherence tomography angiography (OCTA) is a high-resolution, depth-resolved imaging modality with important applications in ophthalmic practice. An extension of structural OCT, OCTA enables non-invasive, high-contrast imaging of retinal and choroidal vasculature that are amenable to quantification. As such, OCTA offers the capability to identify and characterize biomarkers important for clinical practice and therapeutic research. Here, we review new methods for analyzing biomarkers and discuss new insights provided by OCTA.
Collapse
Affiliation(s)
- Tristan T. Hormel
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
| | - Yali Jia
- Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA
| |
Collapse
|
30
|
Jiang M, Chiu B. A Dual-Stream Centerline-Guided Network for Segmentation of the Common and Internal Carotid Arteries From 3D Ultrasound Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2690-2705. [PMID: 37015114 DOI: 10.1109/tmi.2023.3263537] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Segmentation of the carotid section encompassing the common carotid artery (CCA), the bifurcation and the internal carotid artery (ICA) from three-dimensional ultrasound (3DUS) is required to measure the vessel wall volume (VWV) and localized vessel-wall-plus-plaque thickness (VWT), shown to be sensitive to treatment effect. We proposed an approach to combine a centerline extraction network (CHG-Net) and a dual-stream centerline-guided network (DSCG-Net) to segment the lumen-intima (LIB) and media-adventitia boundaries (MAB) from 3DUS images. Correct arterial location is essential for successful segmentation of the carotid section encompassing the bifurcation. We addressed this challenge by using the arterial centerline to enhance the localization accuracy of the segmentation network. The CHG-Net was developed to generate a heatmap indicating high probability regions for the centerline location, which was then integrated with the 3DUS image by the DSCG-Net to generate the MAB and LIB. The DSCG-Net includes a scale-based and a spatial attention mechanism to fuse multi-level features extracted by the encoder, and a centerline heatmap reconstruction side-branch connected to the end of the encoder to increase the generalization ability of the network. Experiments involving 224 3DUS volumes produce a Dice similarity coefficient (DSC) of 95.8±1.9% and 92.3±5.4% for CCA MAB and LIB, respectively, and 93.2±4.4% and 89.0±10.0% for ICA MAB and LIB, respectively. Our approach outperformed four state-of-the-art 3D CNN models, even after their performances were boosted by centerline guidance. The efficiency afforded by the framework would allow it to be incorporated into the clinical workflow for improved quantification of plaque change.
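As a companion to the centerline-heatmap idea above, the following sketch generates a Gaussian heatmap around a set of centerline voxels using a Euclidean distance transform. It is a generic illustration, not the CHG-Net output; the volume shape and sigma are assumptions.

```python
# Generic centerline heatmap: each voxel gets exp(-d^2 / (2*sigma^2)), where d is its
# Euclidean distance to the nearest centerline voxel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def centerline_heatmap(centerline_mask, sigma=3.0):
    # distance_transform_edt measures distance to the nearest zero voxel,
    # so invert the mask (background=1, centerline=0) first.
    distances = distance_transform_edt(~centerline_mask.astype(bool))
    return np.exp(-(distances ** 2) / (2.0 * sigma ** 2))

volume = np.zeros((32, 64, 64), dtype=bool)
volume[:, 32, 32] = True                 # a straight synthetic centerline
heatmap = centerline_heatmap(volume)
print(heatmap.shape, heatmap.max())      # peak value of 1.0 on the centerline
```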
Collapse
|
31
|
Shi T, Ding X, Zhou W, Pan F, Yan Z, Bai X, Yang X. Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation. IEEE J Biomed Health Inform 2023; 27:4006-4017. [PMID: 37163397 DOI: 10.1109/jbhi.2023.3274789] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms. However, achieving high pixel-wise accuracy, complete topology structure and robustness to various contrast variations are critical and challenging, and most existing methods focus only on achieving one or two of these aspects. In this paper, we present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach. Specifically, we compute a multiscale affinity field for each pixel, capturing its semantic relationships with neighboring pixels in the predicted mask image. This field represents the local geometry of vessel segments of different sizes, allowing us to learn spatial- and scale-aware adaptive weights to strengthen vessel features. We evaluate our AFN on four different types of vascular datasets: X-ray angiography coronary vessel dataset (XCAD), portal vein dataset (PV), digital subtraction angiography cerebrovascular vessel dataset (DSA) and retinal vessel dataset (DRIVE). Extensive experimental results demonstrate that our AFN outperforms the state-of-the-art methods in terms of both higher accuracy and topological metrics, while also being more robust to various contrast changes.
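To illustrate the affinity concept, the sketch below derives a simple multi-offset affinity field from a binary mask by checking, for several displacements, whether each pixel and its shifted neighbour share the same label. It is only a schematic view of the idea, not the AFN formulation; the offsets are arbitrary.

```python
# Schematic multi-offset affinity field: for each displacement, affinity is 1 where the
# pixel and its displaced neighbour have the same label, 0 otherwise.
import numpy as np

def affinity_field(mask, offsets=((0, 1), (1, 0), (0, 4), (4, 0))):
    mask = mask.astype(np.int32)
    affinities = []
    for dy, dx in offsets:
        shifted = np.roll(mask, shift=(dy, dx), axis=(0, 1))
        affinities.append((mask == shifted).astype(np.float32))
    return np.stack(affinities, axis=0)      # (num_offsets, H, W)

toy_mask = np.zeros((64, 64), dtype=np.uint8)
toy_mask[20:44, 30:34] = 1                   # a short synthetic vessel segment
print(affinity_field(toy_mask).shape)
```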
Collapse
|
32
|
Zhang H, Yang J, Zheng C, Zhao S, Zhang A. Annotation-efficient learning for OCT segmentation. BIOMEDICAL OPTICS EXPRESS 2023; 14:3294-3307. [PMID: 37497504 PMCID: PMC10368022 DOI: 10.1364/boe.486276] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 04/29/2023] [Accepted: 05/26/2023] [Indexed: 07/28/2023]
Abstract
Deep learning has been successfully applied to OCT segmentation. However, for data from different manufacturers and imaging protocols, and for different regions of interest (ROIs), it requires laborious and time-consuming data annotation and training, which is undesirable in many scenarios, such as surgical navigation and multi-center clinical trials. Here we propose an annotation-efficient learning method for OCT segmentation that could significantly reduce annotation costs. Leveraging self-supervised generative learning, we train a Transformer-based model to learn the OCT imagery. Then we connect the trained Transformer-based encoder to a CNN-based decoder, to learn the dense pixel-wise prediction in OCT segmentation. These training phases use open-access data and thus incur no annotation costs, and the pre-trained model can be adapted to different data and ROIs without re-training. Based on the greedy approximation for the k-center problem, we also introduce an algorithm for the selective annotation of the target data. We verified our method on publicly-available and private OCT datasets. Compared to the widely-used U-Net model with 100% training data, our method only requires ∼10% of the data for achieving the same segmentation accuracy, and it speeds the training up to ∼3.5 times. Furthermore, our proposed method outperforms other potential strategies that could improve annotation efficiency. We think this emphasis on learning efficiency may help improve the intelligence and application penetration of OCT-based technologies.
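The greedy approximation for the k-center problem mentioned above has a standard form: repeatedly pick the sample farthest from the current selection. The sketch below follows that textbook greedy 2-approximation on a feature matrix; it is not necessarily the authors' exact selection code, and the feature dimensions are assumptions.

```python
# Greedy k-center selection over feature vectors: start from an arbitrary sample and
# iteratively add the sample with the largest distance to its nearest selected sample.
import numpy as np

def greedy_k_center(features, k, seed=0):
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]                       # arbitrary first centre
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        next_idx = int(np.argmax(dist))                     # farthest point so far
        selected.append(next_idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[next_idx], axis=1))
    return selected

embeddings = np.random.default_rng(1).normal(size=(500, 128))  # e.g. encoder features
print(greedy_k_center(embeddings, k=10))
```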
Collapse
Affiliation(s)
- Haoran Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shiqing Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Aili Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
33
|
Huang KW, Yang YR, Huang ZH, Liu YY, Lee SH. Retinal Vascular Image Segmentation Using Improved UNet Based on Residual Module. Bioengineering (Basel) 2023; 10:722. [PMID: 37370653 DOI: 10.3390/bioengineering10060722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 06/01/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
In recent years, deep learning technology for clinical diagnosis has progressed considerably, and the value of medical imaging continues to increase. In the past, clinicians evaluated medical images according to their individual expertise. In contrast, the application of artificial intelligence technology for automatic analysis and diagnostic assistance to support clinicians in evaluating medical information more efficiently has become an important trend. In this study, we propose a machine learning architecture designed to segment images of retinal blood vessels based on an improved U-Net neural network model. The proposed model incorporates a residual module to extract features more effectively, and includes a full-scale skip connection to combine low level details with high-level features at different scales. The results of an experimental evaluation show that the model was able to segment images of retinal vessels accurately. The proposed method also outperformed several existing models on the benchmark datasets DRIVE and ROSE, including U-Net, ResUNet, U-Net3+, ResUNet++, and CaraNet.
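A residual module of the kind used in such U-Net variants typically looks like the sketch below. It is a generic PyTorch illustration of the identity-shortcut idea with assumed channel counts, not a reproduction of the paper's exact block.

```python
# Generic residual convolution block with an identity (or 1x1-projected) shortcut,
# commonly used to replace plain double-conv blocks in U-Net style encoders.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # project the shortcut when the channel count changes
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.shortcut(x))

print(ResidualBlock(3, 32)(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```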
Collapse
Affiliation(s)
- Ko-Wei Huang
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
| | - Yao-Ren Yang
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
| | - Zih-Hao Huang
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
| | - Yi-Yang Liu
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 80778, Taiwan
- Department of Urology, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 83301, Taiwan
| | - Shih-Hsiung Lee
- Department of Intelligent Commerce, National Kaohsiung University of Science and Technology, Kaohsiung 82444, Taiwan
| |
Collapse
|
34
|
Kapsala Z, Pallikaris A, Tsilimbaris MK. Assessment of a Novel Semi-Automated Algorithm for the Quantification of the Parafoveal Capillary Network. Clin Ophthalmol 2023; 17:1661-1674. [PMID: 37313218 PMCID: PMC10259575 DOI: 10.2147/opth.s407695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Accepted: 05/01/2023] [Indexed: 06/15/2023] Open
Abstract
Introduction We present a novel semi-automated computerized method for the detection and quantification of the parafoveal capillary network (PCN) in fluorescein angiography (FA) images. Material and Methods An algorithm detecting the superficial parafoveal capillary bed in high-resolution grayscale FA images and creating a one-pixel-wide PCN skeleton was developed using MATLAB software. In addition to PCN detection, capillary density and branch point density in two circular areas of 500 μm and 750 μm radius centered on the center of the foveal avascular zone were calculated by the algorithm. Three consecutive FA images with distinguishable PCN from 56 eyes of 56 subjects were used for analysis. Both manual and semi-automated detection of the PCN and branch points was performed and compared. To optimize the method, three different intensity thresholds were used for PCN detection, defined as mean(I)+0.05*SD(I), mean(I) and mean(I)-0.05*SD(I), where I is the grayscale intensity of each image and SD the standard deviation. Limits of agreement (LoA), intraclass correlation coefficient (ICC) and Pearson's correlation coefficient (r) were calculated. Results Using mean(I)-0.05*SD(I) as the threshold, the average difference in PCN density between the semi-automated and manual methods was 0.197 (0.316) deg-1 at 500 μm radius and 0.409 (0.562) deg-1 at 750 μm radius. The LoA were -0.421 to 0.817 and -0.693 to 1.510 deg-1, respectively. The average difference in branch point density between the semi-automated and manual methods was zero for both areas; LoA were -0.001 to 0.002 and -0.001 to 0.001 branch points/degrees2, respectively. The other two intensity thresholds provided wider LoA for both metrics. The semi-automated algorithm showed great repeatability (ICC>0.91 in the 500 μm radius and ICC>0.84 in the 750 μm radius) for both metrics. Conclusion This semi-automated algorithm seems to provide readings in agreement with those of manual capillary tracing in FA. Larger prospective studies are needed to confirm the utility of the algorithm in clinical practice.
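The intensity-threshold and branch-point steps described above can be prototyped with scikit-image. The sketch below applies the mean(I)-0.05*SD(I) threshold from the abstract, skeletonises the binary vessels, and counts skeleton pixels with three or more neighbours as branch points; it is only a simplified approximation of the authors' MATLAB pipeline.

```python
# Simplified PCN-style pipeline: threshold a grayscale angiography image at
# mean(I) - 0.05*std(I), skeletonise the binary vessels, and count skeleton pixels
# with three or more 8-connected neighbours as branch points.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def branch_points(image):
    threshold = image.mean() - 0.05 * image.std()
    vessels = image > threshold
    skeleton = skeletonize(vessels)
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0                                           # exclude the centre pixel
    neighbours = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    return int(np.sum(skeleton & (neighbours >= 3)))

rng = np.random.default_rng(0)
fa_image = rng.random((256, 256))        # stand-in for a grayscale FA frame
print(branch_points(fa_image))
```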
Collapse
Affiliation(s)
- Zoi Kapsala
- Department of Neurology and Sensory Organs, Medical School, University of Crete, Heraklion, Greece
| | - Aristofanis Pallikaris
- Department of Neurology and Sensory Organs, Medical School, University of Crete, Heraklion, Greece
- Vardinoyiannion Eye Institute of Crete, Medical School, University of Crete, Heraklion, Greece
| | - Miltiadis K Tsilimbaris
- Department of Neurology and Sensory Organs, Medical School, University of Crete, Heraklion, Greece
- Vardinoyiannion Eye Institute of Crete, Medical School, University of Crete, Heraklion, Greece
| |
Collapse
|
35
|
Shi Z, Li Y, Zou H, Zhang X. TCU-Net: Transformer Embedded in Convolutional U-Shaped Network for Retinal Vessel Segmentation. SENSORS (BASEL, SWITZERLAND) 2023; 23:4897. [PMID: 37430810 PMCID: PMC10223195 DOI: 10.3390/s23104897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 05/01/2023] [Accepted: 05/09/2023] [Indexed: 07/12/2023]
Abstract
Optical coherence tomography angiography (OCTA) provides a detailed visualization of the vascular system to aid in the detection and diagnosis of ophthalmic disease. However, accurately extracting microvascular details from OCTA images remains a challenging task due to the limitations of pure convolutional networks. We propose a novel end-to-end transformer-based network architecture called TCU-Net for OCTA retinal vessel segmentation tasks. To address the loss of vascular features of convolutional operations, an efficient cross-fusion transformer module is introduced to replace the original skip connection of U-Net. The transformer module interacts with the encoder's multiscale vascular features to enrich vascular information and achieve linear computational complexity. Additionally, we design an efficient channel-wise cross attention module to fuse the multiscale features and fine-grained details from the decoding stages, resolving the semantic bias between them and enhancing effective vascular information. This model has been evaluated on the dedicated Retinal OCTA Segmentation (ROSE) dataset. The accuracy values of TCU-Net tested on the ROSE-1 dataset with SVC, DVC, and SVC+DVC are 0.9230, 0.9912, and 0.9042, respectively, and the corresponding AUC values are 0.9512, 0.9823, and 0.9170. For the ROSE-2 dataset, the accuracy and AUC are 0.9454 and 0.8623, respectively. The experiments demonstrate that TCU-Net outperforms state-of-the-art approaches regarding vessel segmentation performance and robustness.
Collapse
Affiliation(s)
- Zidi Shi
- School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430077, China
| | - Yu Li
- School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430077, China
| | - Hua Zou
- School of Computer Science, Wuhan University, Wuhan 430072, China
| | - Xuedong Zhang
- School of Information Engineering, Tarim University, Alaer 843300, China
| |
Collapse
|
36
|
Tan X, Chen X, Meng Q, Shi F, Xiang D, Chen Z, Pan L, Zhu W. OCT2Former: A retinal OCT-angiography vessel segmentation transformer. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 233:107454. [PMID: 36921468 DOI: 10.1016/j.cmpb.2023.107454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 01/25/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Retinal vessel segmentation plays an important role in automatic retinal disease screening and diagnosis. Segmenting thin vessels and maintaining vessel connectivity are the key challenges of the retinal vessel segmentation task. Optical coherence tomography angiography (OCTA) is a noninvasive imaging technique that can reveal high-resolution retinal vessels. To make full use of this high resolution, a new end-to-end transformer-based network named OCT2Former (OCT-a Transformer) is proposed to segment retinal vessels accurately in OCTA images. METHODS The proposed OCT2Former is based on an encoder-decoder structure, which mainly includes a dynamic transformer encoder and a lightweight decoder. The dynamic transformer encoder consists of a dynamic token aggregation transformer and an auxiliary convolution branch: the dynamic token aggregation transformer, built on multi-head dynamic token aggregation attention, is designed to capture global retinal vessel context information from the first layer throughout the network, while the auxiliary convolution branch compensates for the transformer's lack of inductive bias and assists in efficient feature extraction. A convolution-based lightweight decoder is proposed to decode features efficiently and reduce the complexity of OCT2Former. RESULTS The proposed OCT2Former is validated on three publicly available datasets, i.e., OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M). The Jaccard indexes of OCT2Former on these datasets are 0.8344, 0.7855, 0.8099 and 0.8513, respectively, outperforming the best convolution-based network by 1.43%, 1.32%, 0.75% and 1.46%, respectively. CONCLUSION The experimental results demonstrate that the proposed OCT2Former achieves competitive performance on retinal OCTA vessel segmentation tasks.
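The Jaccard index used above for evaluation is straightforward to compute from binary masks. The sketch below is a generic implementation, not the authors' evaluation code.

```python
# Generic Jaccard index (intersection over union) between a predicted and a reference
# binary vessel mask; a small epsilon avoids division by zero on empty masks.
import numpy as np

def jaccard_index(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))

a = np.zeros((64, 64), dtype=bool); a[10:50, 30:34] = True   # toy predicted vessel
b = np.zeros((64, 64), dtype=bool); b[12:52, 30:34] = True   # toy reference vessel
print(f"Jaccard: {jaccard_index(a, b):.3f}")
```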
Collapse
Affiliation(s)
- Xiao Tan
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China
| | - Xinjian Chen
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China; The State Key Laboratory of Radiation Medicine and Protection, Soochow University, Jiangsu, China
| | - Qingquan Meng
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China
| | - Fei Shi
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China
| | - Dehui Xiang
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China
| | - Zhongyue Chen
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China
| | - Lingjiao Pan
- School of Electrical and Information Engineering, Jiangsu University of Technology, Jiangsu, China
| | - Weifang Zhu
- MIPAV Lab, the School of Electronic and Information Engineering, Soochow University, Jiangsu, China.
| |
Collapse
|
37
|
Transformer and convolutional based dual branch network for retinal vessel segmentation in OCTA images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
|
38
|
Jin S, Yu S, Peng J, Wang H, Zhao Y. A novel medical image segmentation approach by using multi-branch segmentation network based on local and global information synchronous learning. Sci Rep 2023; 13:6762. [PMID: 37185374 PMCID: PMC10127969 DOI: 10.1038/s41598-023-33357-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 04/12/2023] [Indexed: 05/17/2023] Open
Abstract
In recent years, several solutions to medical image segmentation have emerged, such as U-shaped structures, transformer-based networks, and multi-scale feature learning methods. However, these methods often neglect network size and real-time performance and cannot segment boundary regions well. The main reason is that such networks have deep encoders, a large number of channels, and excessive attention to local rather than global information, which is crucial to the accuracy of image segmentation. Therefore, we propose a novel multi-branch medical image segmentation network, MBSNet. We first design two branches using a parallel residual mixer (PRM) module and a dilated convolution block to capture the local and global information of the image. At the same time, an SE-Block and a new spatial attention module enhance the output features. Considering the different output features of the two branches, we adopt a cross-fusion method to effectively combine and complement the features between different layers. MBSNet was tested on five datasets: ISIC2018, Kvasir, BUSI, COVID-19, and LGG. The combined results show that MBSNet is lighter, faster, and more accurate. Specifically, for a [Formula: see text] input, MBSNet's FLOPs are 10.68G, with an F1-Score of [Formula: see text] on the Kvasir test dataset, well above [Formula: see text] for UNet++ with FLOPs of 216.55G. We also use the multi-criteria decision making method TOPSIS based on F1-Score, IOU and Geometric-Mean (G-mean) for overall analysis. The proposed MBSNet model performs better than other competitive methods. Code is available at https://github.com/YuLionel/MBSNet .
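The TOPSIS ranking mentioned above follows a standard recipe: normalise the criteria matrix, locate the ideal and anti-ideal points, and rank alternatives by closeness to the ideal. The sketch below uses equal weights, treats all three criteria (F1, IoU, G-mean) as benefit criteria, and uses made-up scores; it is a generic implementation, not the authors' analysis script.

```python
# Generic TOPSIS ranking for benefit criteria (higher is better) with equal weights.
# Rows are candidate models, columns are criteria such as F1-score, IoU and G-mean.
import numpy as np

def topsis(scores):
    norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalise each criterion
    weighted = norm / scores.shape[1]                 # equal weights of 1/num_criteria
    ideal, anti_ideal = weighted.max(axis=0), weighted.min(axis=0)
    d_best = np.linalg.norm(weighted - ideal, axis=1)
    d_worst = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_worst / (d_best + d_worst)               # closeness coefficient per model

models = np.array([[0.90, 0.82, 0.88],    # model A: F1, IoU, G-mean (placeholder values)
                   [0.87, 0.85, 0.90],    # model B
                   [0.92, 0.80, 0.86]])   # model C
closeness = topsis(models)
print("ranking (best first):", np.argsort(-closeness))
```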
Collapse
Affiliation(s)
- Shangzhu Jin
- Information Office, Chongqing University of Science and Technology, Chongqing, 401331, China
| | - Sheng Yu
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China.
| | - Jun Peng
- College of Mathematics, Physics and Data Science, Chongqing University of Science and Technology, Chongqing, 401331, China
| | - Hongyi Wang
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
| | - Yan Zhao
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
| |
Collapse
|
39
|
Arnould L, Meriaudeau F, Guenancia C, Germanese C, Delcourt C, Kawasaki R, Cheung CY, Creuzot-Garcher C, Grzybowski A. Using Artificial Intelligence to Analyse the Retinal Vascular Network: The Future of Cardiovascular Risk Assessment Based on Oculomics? A Narrative Review. Ophthalmol Ther 2023; 12:657-674. [PMID: 36562928 PMCID: PMC10011267 DOI: 10.1007/s40123-022-00641-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Accepted: 12/09/2022] [Indexed: 12/24/2022] Open
Abstract
The healthcare burden of cardiovascular diseases remains a major issue worldwide. Understanding the underlying mechanisms and improving identification of people with a higher risk profile of systemic vascular disease through noninvasive examinations is crucial. In ophthalmology, retinal vascular network imaging is simple and noninvasive and can provide in vivo information of the microstructure and vascular health. For more than 10 years, different research teams have been working on developing software to enable automatic analysis of the retinal vascular network from different imaging techniques (retinal fundus photographs, OCT angiography, adaptive optics, etc.) and to provide a description of the geometric characteristics of its arterial and venous components. Thus, the structure of retinal vessels could be considered a witness of the systemic vascular status. A new approach called "oculomics" using retinal image datasets and artificial intelligence algorithms recently increased the interest in retinal microvascular biomarkers. Despite the large volume of associated research, the role of retinal biomarkers in the screening, monitoring, or prediction of systemic vascular disease remains uncertain. A PubMed search was conducted until August 2022 and yielded relevant peer-reviewed articles based on a set of inclusion criteria. This literature review is intended to summarize the state of the art in oculomics and cardiovascular disease research.
Collapse
Affiliation(s)
- Louis Arnould
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France. .,University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France.
| | - Fabrice Meriaudeau
- Laboratory ImViA, IFTIM, Université Bourgogne Franche-Comté, 21078, Dijon, France
| | - Charles Guenancia
- Pathophysiology and Epidemiology of Cerebro-Cardiovascular Diseases, (EA 7460), Faculty of Health Sciences, Université de Bourgogne Franche-Comté, Dijon, France.,Cardiology Department, Dijon University Hospital, Dijon, France
| | - Clément Germanese
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France
| | - Cécile Delcourt
- University of Bordeaux, Inserm, Bordeaux Population Health Research Center, UMR U1219, 33000, Bordeaux, France
| | - Ryo Kawasaki
- Artificial Intelligence Center for Medical Research and Application, Osaka University Hospital, Osaka, Japan
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
| | - Catherine Creuzot-Garcher
- Ophthalmology Department, Dijon University Hospital, 14 Rue Paul Gaffarel, 21079, Dijon CEDEX, France.,Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, Dijon, France
| | - Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland.,Institute for Research in Ophthalmology, Poznan, Poland
| |
Collapse
|
40
|
A novel multi-attention, multi-scale 3D deep network for coronary artery segmentation. Med Image Anal 2023; 85:102745. [PMID: 36630869 DOI: 10.1016/j.media.2023.102745] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 12/13/2022] [Accepted: 01/05/2023] [Indexed: 01/11/2023]
Abstract
Automatic segmentation of coronary arteries provides vital assistance to enable accurate and efficient diagnosis and evaluation of coronary artery disease (CAD). However, the task of coronary artery segmentation (CAS) remains highly challenging due to the large-scale variations exhibited by coronary arteries, their complicated anatomical structures and morphologies, as well as the low contrast between vessels and their background. To comprehensively tackle these challenges, we propose a novel multi-attention, multi-scale 3D deep network for CAS, which we call CAS-Net. Specifically, we first propose an attention-guided feature fusion (AGFF) module to efficiently fuse adjacent hierarchical features in the encoding and decoding stages to capture latent semantic information more effectively. Then, we propose a scale-aware feature enhancement (SAFE) module, aiming to dynamically adjust the receptive fields to extract more expressive features effectively, thereby enhancing the feature representation capability of the network. Furthermore, we employ the multi-scale feature aggregation (MSFA) module to learn a more distinctive semantic representation for refining the vessel maps. In addition, considering that the limited availability of training data annotated with a high-quality gold standard is also a significant factor restricting the development of CAS, we construct a new dataset containing 119 cases consisting of coronary computed tomographic angiography (CCTA) volumes and annotated coronary arteries. Extensive experiments on our self-collected dataset and three publicly available datasets demonstrate that the proposed method has good segmentation performance and generalization ability, outperforming multiple state-of-the-art algorithms on various metrics. Compared with U-Net3D, the proposed method significantly improves the Dice similarity coefficient (DSC) by at least 4% on each dataset, due to the synergistic effect among the three core modules, AGFF, SAFE, and MSFA. Our implementation is released at https://github.com/Cassie-CV/CAS-Net.
Collapse
|
41
|
Optical Coherence Tomography Angiography of the Intestine: How to Prevent Motion Artifacts in Open and Laparoscopic Surgery? Life (Basel) 2023; 13:life13030705. [PMID: 36983861 PMCID: PMC10055682 DOI: 10.3390/life13030705] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Revised: 02/25/2023] [Accepted: 02/28/2023] [Indexed: 03/08/2023] Open
Abstract
(1) Introduction. The main factor limiting the intraoperative use of OCTA for diagnostics of the intestinal circulation is the low informative value of OCTA images that contain too many motion artifacts. The aim of this study was to evaluate, in an experimental setting, the efficiency and safety of a unit developed to prevent motion artifacts in OCTA images of the intestine in both open and laparoscopic surgery. (2) Methods. A high-speed spectral-domain multimodal optical coherence tomograph (IAP RAS, Russia) operating at a wavelength of 1310 nm with a spectral width of 100 nm and a power of 2 mW was used. The developed unit was tested in two groups of experimental animals: minipigs (group I, n = 10, open abdomen) and rabbits (group II, n = 10, laparoscopy). Acute mesenteric ischemia was modeled, and 1 h later the small intestine underwent OCTA evaluation. A total of 400 OCTA images of the intact and ischemic small intestine were obtained and analyzed. The quality of the obtained OCTA images was evaluated with the score proposed in 2020 by the group of Magnin M. (3) Results. Without stabilization, OCTA images of the intestinal tissues were informative in only 32–44% of cases in open surgery and 14–22% of cases in laparoscopic surgery. A vacuum bowel stabilizer with a pressure deficit of 22–25 mm Hg significantly reduced the number of motion artifacts. As a result, the proportion of informative OCTA images increased to 86.5% in open surgery (χ2 = 200.2, p = 0.001) and to 60% in laparoscopy (χ2 = 148.3, p = 0.001). (4) Conclusions. The vacuum tissue stabilizer significantly increased the proportion of informative OCTA images by reducing motion artifacts.
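The reported χ2 statistics compare proportions of informative images with and without stabilization; a standard chi-square test on a 2 × 2 contingency table reproduces this kind of comparison. The counts below are hypothetical placeholders chosen only to match the reported percentages, since the abstract gives proportions and test statistics rather than raw counts.

```python
# Sketch of a chi-square test comparing proportions of informative OCTA
# images with vs. without stabilization. The counts are HYPOTHETICAL
# placeholders; the paper reports only percentages and test statistics.
from scipy.stats import chi2_contingency

# rows: [informative, non-informative]; columns: [stabilized, unstabilized]
table = [[173, 64],    # e.g. 86.5% of 200 images vs. 32% of 200 images
         [27, 136]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, dof = {dof}")
```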
Collapse
|
42
|
Hu D, Pan L, Chen X, Xiao S, Wu Q. A novel vessel segmentation algorithm for pathological en-face images based on matched filter. Phys Med Biol 2023; 68. [PMID: 36745931 DOI: 10.1088/1361-6560/acb98a] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 02/06/2023] [Indexed: 02/08/2023]
Abstract
The vascular information in fundus images can provide an important basis for the detection and prediction of retina-related diseases. However, the presence of lesions such as choroidal neovascularization can seriously interfere with normal vascular areas in optical coherence tomography (OCT) fundus images. In this paper, a novel method is proposed for detecting blood vessels in pathological OCT fundus images. First, an automatic localization and filling method is used in the preprocessing step to reduce pathological interference. Then, for vessel extraction, a pore ablation method based on a capillary bundle model is applied. The ablation method processes the image after matched-filter feature extraction, which largely eliminates the interference caused by diseased blood vessels. Finally, morphological operations are used to obtain the main vascular features. Experimental results on the dataset show that the proposed method achieves DICE, PRECISION, and TPR of 0.88 ± 0.03, 0.79 ± 0.05, and 0.66 ± 0.04, respectively. Effective extraction of vascular information from OCT fundus images is of great significance for the diagnosis and treatment of retina-related diseases.
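As a rough illustration of the matched-filter step such methods build on, the sketch below convolves an en-face image with a bank of oriented, zero-mean Gaussian-profile kernels and keeps the maximum response per pixel. It is a classical stand-in, not the authors' full pipeline (lesion localization and filling, pore ablation, and morphological post-processing are omitted), and the kernel parameters are assumptions.

```python
# Minimal sketch of matched-filter vessel enhancement (Chaudhuri-style):
# maximum response over a bank of oriented Gaussian-profile kernels.
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_kernel(sigma=1.5, length=9, angle_deg=0.0):
    """Oriented zero-mean Gaussian-profile kernel."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    y = np.arange(-(length // 2), length // 2 + 1)
    X, _ = np.meshgrid(x, y)
    k = -np.exp(-X**2 / (2 * sigma**2))   # dark vessels on a bright background
    k -= k.mean()                          # zero mean -> suppress flat regions
    return rotate(k, angle_deg, reshape=True, order=1)

def matched_filter_response(image, n_angles=12, sigma=1.5, length=9):
    """Maximum response over a bank of oriented matched filters."""
    responses = [convolve(image.astype(float),
                          matched_filter_kernel(sigma, length, a))
                 for a in np.linspace(0, 180, n_angles, endpoint=False)]
    return np.max(responses, axis=0)

if __name__ == "__main__":
    img = np.random.rand(64, 64)                   # stand-in for an en-face image
    resp = matched_filter_response(img)
    vessels = resp > resp.mean() + 2 * resp.std()  # crude threshold for illustration
    print(vessels.sum(), "candidate vessel pixels")
```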
Collapse
Affiliation(s)
- Derong Hu
- School of Mechanical Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Lingjiao Pan
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, Suzhou, People's Republic of China
| | - Shuyan Xiao
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| | - Quanyu Wu
- School of Electrical and Information Engineering, Jiangsu University of Technology, Changzhou, People's Republic of China
| |
Collapse
|
43
|
Cao J, Xu Z, Xu M, Ma Y, Zhao Y. A two-stage framework for optical coherence tomography angiography image quality improvement. Front Med (Lausanne) 2023; 10:1061357. [PMID: 36756179 PMCID: PMC9899819 DOI: 10.3389/fmed.2023.1061357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 01/02/2023] [Indexed: 01/24/2023] Open
Abstract
Introduction: Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging modality that has gained increasing popularity for observing the microvasculature of the retina and the conjunctiva, assisting clinical diagnosis and treatment planning. However, poor imaging quality, such as stripe artifacts and low contrast, is common in acquired OCTA and in particular Anterior Segment OCTA (AS-OCTA) images due to eye microtremor and poor illumination conditions. These issues lead to incomplete vasculature maps that in turn make accurate interpretation and subsequent diagnosis difficult. Methods: In this work, we propose a two-stage framework comprising a de-striping stage and a re-enhancing stage, with the aims of removing stripe noise and enhancing blood vessel structure against the background. We introduce a new de-striping objective function in a Stripe Removal Net (SR-Net) to suppress the stripe noise in the original image. Because the vasculature in acquired AS-OCTA images usually exhibits poor contrast, we use a Perceptual Structure Generative Adversarial Network (PS-GAN) to enhance the de-striped AS-OCTA image in the re-enhancing stage, combining a cyclic perceptual loss with a structure loss to achieve further image quality improvement. Results and discussion: To evaluate the effectiveness of the proposed method, we apply the framework to two synthetic OCTA datasets and a real AS-OCTA dataset. Our results show that the proposed framework yields a promising enhancement performance, enabling both conventional and deep learning-based vessel segmentation methods to produce improved results after enhancement of both retina and AS-OCTA modalities.
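A crude classical baseline for the de-striping stage is simply to remove the per-row intensity offset that horizontal stripe noise introduces; the sketch below shows only that idea. It is not SR-Net or PS-GAN, which learn the de-striping and enhancement from data, and the synthetic image is a placeholder.

```python
# Illustrative baseline for horizontal stripe suppression in an en-face
# OCTA image: remove each row's deviation from the global mean intensity.
# A classical stand-in, NOT the SR-Net / PS-GAN pipeline from the paper.
import numpy as np

def destripe_rows(image: np.ndarray) -> np.ndarray:
    """Subtract each row's offset relative to the global mean intensity."""
    row_means = image.mean(axis=1, keepdims=True)
    return image - (row_means - image.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((128, 128))
    stripes = rng.normal(0, 0.3, size=(128, 1))   # one additive offset per row
    noisy = clean + stripes
    restored = destripe_rows(noisy)
    print(f"mean row offset before: {np.abs(noisy.mean(1) - noisy.mean()).mean():.3f}, "
          f"after: {np.abs(restored.mean(1) - restored.mean()).mean():.3f}")
```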
Collapse
Affiliation(s)
- Juan Cao
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
| | - Zihao Xu
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Mengjia Xu
- Affiliated Cixi Hospital, Wenzhou Medical University, Ningbo, China
| | - Yuhui Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| |
Collapse
|
44
|
Deep Learning in Optical Coherence Tomography Angiography: Current Progress, Challenges, and Future Directions. Diagnostics (Basel) 2023; 13:diagnostics13020326. [PMID: 36673135 PMCID: PMC9857993 DOI: 10.3390/diagnostics13020326] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 01/11/2023] [Accepted: 01/12/2023] [Indexed: 01/18/2023] Open
Abstract
Optical coherence tomography angiography (OCT-A) provides depth-resolved visualization of the retinal microvasculature without intravenous dye injection. It facilitates investigation of various retinal vascular diseases and glaucoma through non-invasive, layer-by-layer, and efficient assessment of qualitative and quantitative microvascular changes in the different retinal layers and the radial peripapillary layer. Deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, has been applied to OCT-A image analysis in recent years and has achieved good performance on different tasks, such as image quality control, segmentation, and classification. DL technologies have further facilitated the potential implementation of OCT-A in eye clinics in an automated and efficient manner and enhanced its clinical value for detecting and evaluating various vascular retinopathies. Nevertheless, the deployment of this combination in real-world clinics is still at the "proof-of-concept" stage due to several limitations, such as small training sample sizes, lack of standardized data preprocessing, insufficient testing on external datasets, and the absence of standardized interpretation of results. In this review, we introduce the existing applications of DL in OCT-A, summarize the potential challenges of clinical deployment, and discuss future research directions.
Collapse
|
45
|
Lu K, Kwapong WR, Jiang S, Zhang X, Xie J, Ye C, Yan Y, Cao L, Zhao Y, Wu B. Differences in retinal microvasculature between large artery atherosclerosis and small artery disease: an optical coherence tomography angiography study. Front Aging Neurosci 2022; 14:1053638. [PMID: 36620764 PMCID: PMC9816383 DOI: 10.3389/fnagi.2022.1053638] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 11/14/2022] [Indexed: 12/24/2022] Open
Abstract
Purpose: Recent reports suggest the retinal microvasculature mirrors the cerebral microcirculation. Using optical coherence tomography angiography (OCTA), we investigated retinal microvasculature differences between ischemic stroke patients with large artery atherosclerosis (LAA) and small artery disease (SAD). Methods: All patients underwent MR imaging and were classified as SAD or LAA; LAA was subdivided into anterior LAA and posterior LAA depending on the location. Swept-source OCTA (SS-OCTA) was used to image and segment the retina into the superficial vascular complex (SVC) and deep vascular complex (DVC) in a 6 × 6 mm area around the fovea. A deep learning algorithm was used to assess the vessel area density (VAD, %) of the retinal microvasculature. Results: Fifty-eight patients (mean age = 60.26 ± 10.88 years; 81.03% males) had LAA while 64 (mean age = 55.58 ± 10.34 years; 85.94% males) had SAD. LAA patients had significantly reduced VAD in the DVC (P = 0.022) compared with SAD patients; the VAD in the SVC did not show any significant difference between the two groups (P = 0.580). Anterior LAA ischemic stroke patients showed significantly lower VAD (P = 0.002) in the SVC compared with posterior LAA patients. There was no significant difference in the DVC between the two groups (P = 0.376). Conclusions: We found LAA patients had significantly reduced DVC density compared with SAD patients; we also showed anterior LAA patients had significantly reduced SVC density compared with posterior LAA patients. These findings suggest retinal imaging has the potential to detect microvasculature changes in subtypes of ischemic stroke.
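The vessel area density metric itself is simple to state: the percentage of pixels labelled as vessel within the analysed region. The sketch below assumes a binary vessel mask is already available (produced by a deep learning segmentation model in the study) and merely computes the ratio; the random mask is a placeholder.

```python
# Minimal sketch of the vessel area density (VAD, %) metric: the fraction
# of vessel-labelled pixels within the analysed region, expressed as %.
import numpy as np

def vessel_area_density(vessel_mask, roi_mask=None):
    """VAD (%) = vessel pixels / region pixels * 100."""
    if roi_mask is None:
        roi_mask = np.ones(vessel_mask.shape, dtype=bool)
    region = roi_mask.astype(bool)
    return 100.0 * vessel_mask[region].astype(bool).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask = rng.random((320, 320)) > 0.7   # stand-in binary vessel map
    print(f"VAD = {vessel_area_density(mask):.1f}%")
```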
Collapse
Affiliation(s)
- Kun Lu
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | | | - Shuai Jiang
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | - Xuening Zhang
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | - Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Chen Ye
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | - Yuying Yan
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | - Le Cao
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| | - Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Bo Wu
- Department of Neurology, West China Hospital, Sichuan University, Chengdu, China
| |
Collapse
|
46
|
Ma Z, Feng D, Wang J, Ma H. Retinal OCTA Image Segmentation Based on Global Contrastive Learning. SENSORS (BASEL, SWITZERLAND) 2022; 22:9847. [PMID: 36560216 PMCID: PMC9781437 DOI: 10.3390/s22249847] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 12/11/2022] [Accepted: 12/12/2022] [Indexed: 06/17/2023]
Abstract
The automatic segmentation of retinal vessels is of great significance for the analysis and diagnosis of retina-related diseases. However, imbalanced data in retinal vascular images remain a great challenge. Current deep learning-based image segmentation methods almost always focus on local information in a single image while ignoring the global information of the entire dataset. To address the data-imbalance problem in optical coherence tomography angiography (OCTA) datasets, this paper proposes a medical image segmentation method (contrastive OCTA segmentation net, COSNet) based on global contrastive learning. First, the feature extraction module extracts features from the OCTA image input and maps them to a segmentation head and a multilayer perceptron (MLP) head, respectively. Second, a contrastive learning module saves the pixel queue and pixel embedding of each category in the feature map into a memory bank, generates sample pairs through a mixed sampling strategy to construct a new contrastive loss function, and forces the network to learn local and global information simultaneously. Finally, the segmented image is fine-tuned to restore the positional information of deep vessels. The experimental results show that the proposed method improves the accuracy (ACC), area under the curve (AUC), and other evaluation indexes of image segmentation compared with existing methods. The method can accomplish segmentation tasks on imbalanced data and can extend to other segmentation tasks.
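The global contrastive objective can be illustrated with a pixel-level InfoNCE loss whose negatives are drawn from a memory bank; the sketch below shows that general form. The shapes, sampling, and temperature are simplifying assumptions, not the exact COSNet formulation.

```python
# Minimal sketch of a pixel-level InfoNCE contrastive loss with negatives
# drawn from a memory bank; an illustration of the general idea, not the
# authors' exact loss or sampling strategy.
import torch
import torch.nn.functional as F

def pixel_info_nce(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (N, D); negatives: (M, D) pulled from a memory bank."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos_logits = (anchor * positive).sum(dim=1, keepdim=True) / temperature  # (N, 1)
    neg_logits = anchor @ negatives.t() / temperature                        # (N, M)
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    a, p = torch.randn(32, 128), torch.randn(32, 128)
    bank = torch.randn(1024, 128)                 # memory bank of pixel embeddings
    print(pixel_info_nce(a, p, bank).item())
```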
Collapse
Affiliation(s)
- Ziping Ma
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
| | - Dongxiu Feng
- College of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
| | - Jingyu Wang
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
| | - Hu Ma
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
| |
Collapse
|
47
|
Pachade S, Coronado I, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino A, Green C, Bahrainian M, Channa R, Sheth SA, Giancardo L. Detection of Stroke with Retinal Microvascular Density and Self-Supervised Learning Using OCT-A and Fundus Imaging. J Clin Med 2022; 11:jcm11247408. [PMID: 36556024 PMCID: PMC9788382 DOI: 10.3390/jcm11247408] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 12/07/2022] [Accepted: 12/08/2022] [Indexed: 12/15/2022] Open
Abstract
Acute cerebral stroke is a leading cause of disability and death, which could be reduced by prompt diagnosis during patient transportation to the hospital. A portable retina imaging system could enable this by measuring vascular information and blood perfusion in the retina and, owing to the homology between retinal and cerebral vessels, inferring whether a cerebral stroke is underway. However, the feasibility of this strategy and the imaging features and retina imaging modalities needed to implement it are not clear. In this work, we show initial evidence of the feasibility of this approach by training machine learning models, using both engineered and self-supervised-learning retinal features extracted from OCT-A and fundus images, to classify controls and acute stroke patients. Models based on macular microvasculature density features achieved an area under the receiver operating characteristic curve (AUC) of 0.87-0.88. Self-supervised deep learning models generated features resulting in AUCs ranging from 0.66 to 0.81. While further work is needed before a diagnostic system can be definitively validated, these results indicate that microvasculature density features from OCT-A images have the potential to be used to diagnose acute cerebral stroke from the retina.
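The feature-engineering arm of such a study reduces to fitting a classifier on density features and reporting a cross-validated AUC. The sketch below uses synthetic placeholder data purely to show the evaluation recipe; it does not reproduce the study's features, cohort, or results.

```python
# Sketch of the feature-engineering pipeline: fit a simple classifier on
# microvascular density features and report a cross-validated AUC.
# The data here are SYNTHETIC placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 120
labels = rng.integers(0, 2, size=n)                        # 0 = control, 1 = stroke
# Hypothetical density features, slightly shifted for the stroke group.
features = rng.normal(0, 1, size=(n, 5)) - 0.6 * labels[:, None]

clf = LogisticRegression(max_iter=1000)
scores = cross_val_predict(clf, features, labels, cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC = {roc_auc_score(labels, scores):.2f}")
```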
Collapse
Affiliation(s)
- Samiksha Pachade
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
| | - Ivan Coronado
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
| | - Rania Abdelkhaleq
- Department of Neurology, UTHealth McGovern Medical School, UTHealth, Houston, TX 77030, USA
| | - Juntao Yan
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
| | - Sergio Salazar-Marioni
- Department of Neurology, UTHealth McGovern Medical School, UTHealth, Houston, TX 77030, USA
| | - Amanda Jagolino
- Department of Neurology, UTHealth McGovern Medical School, UTHealth, Houston, TX 77030, USA
| | - Charles Green
- Institute for Stroke and Cerebrovascular Diseases, UTHealth, Houston, TX 77030, USA
- Center for Clinical Research and Evidence-Based Medicine, UTHealth McGovern Medical School, UTHealth, Houston, TX 77030, USA
| | - Mozhdeh Bahrainian
- Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
| | - Roomasa Channa
- Department of Ophthalmology and Visual Sciences, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
| | - Sunil A. Sheth
- Department of Neurology, UTHealth McGovern Medical School, UTHealth, Houston, TX 77030, USA
| | - Luca Giancardo
- Center for Precision Health, School of Biomedical Informatics, University of Texas Health Science Center at Houston (UTHealth), Houston, TX 77030, USA
- Institute for Stroke and Cerebrovascular Diseases, UTHealth, Houston, TX 77030, USA
| |
Collapse
|
48
|
Deng X, Wang S, Yang Y, Chen A, Lu J, Hao J, Wu Y, Lu Q. Reduced macula microvascular densities may be an early indicator for diabetic peripheral neuropathy. Front Cell Dev Biol 2022; 10:1081285. [PMID: 36568975 PMCID: PMC9788121 DOI: 10.3389/fcell.2022.1081285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Accepted: 11/22/2022] [Indexed: 12/13/2022] Open
Abstract
Purpose: To assess alterations in the macular microvasculature in type 2 diabetic patients with peripheral neuropathy (DPN) and without peripheral neuropathy (NDPN) using optical coherence tomography angiography (OCTA), and to explore the correlation between retinal microvascular abnormalities and DPN. Methods: Twenty-seven healthy controls (42 eyes), 36 NDPN patients (62 eyes), and 27 DPN patients (40 eyes) were included. OCTA was used to image the macula in the superficial vascular complex (SVC) and deep vascular complex (DVC). In addition, a state-of-the-art deep learning method was employed to quantify the microvasculature of the two capillary plexuses in all participants using vascular length density (VLD). Results: Compared with the healthy control group, the average VLD values of patients with DPN in the SVC (p = 0.010) and DVC (p = 0.011) were significantly lower. Compared with NDPN, DPN patients showed significantly reduced VLD values in the SVC (p = 0.006) and DVC (p = 0.001). DPN patients also showed lower VLD values (p < 0.05) in the nasal, superior, temporal, and inferior sectors of the inner ring of the SVC when compared with controls; VLD values in NDPN patients were lower in the nasal sector of the inner ring of the SVC (p < 0.05) compared with healthy controls. VLD values in the DVC (AUC = 0.736, p < 0.001) of the DPN group showed a higher ability to discriminate microvascular damage when compared with NDPN. Conclusion: OCTA based on deep learning could potentially be used in clinical practice as a new indicator in the early diagnosis of DM with and without DPN.
Collapse
Affiliation(s)
- Xiaoyu Deng
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Shiqi Wang
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Yan Yang
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Aizhen Chen
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Jinger Lu
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Jinkui Hao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| | - Yufei Wu
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| | - Qinkang Lu
- The Affiliated People’s Hospital of Ningbo University, Ningbo, China
| |
Collapse
|
49
|
Xu GX, Ren CX. SPNet: A novel deep neural network for retinal vessel segmentation based on shared decoder and pyramid-like loss. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
50
|
Lyu J, Zhang Y, Huang Y, Lin L, Cheng P, Tang X. AADG: Automatic Augmentation for Domain Generalization on Retinal Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3699-3711. [PMID: 35862336 DOI: 10.1109/tmi.2022.3193146] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Convolutional neural networks have been widely applied to medical image segmentation and have achieved considerable performance. However, the performance may be significantly affected by the domain gap between training data (source domain) and testing data (target domain). To address this issue, we propose a data manipulation based domain generalization method, called Automated Augmentation for Domain Generalization (AADG). Our AADG framework can effectively sample data augmentation policies that generate novel domains and diversify the training set from an appropriate search space. Specifically, we introduce a novel proxy task maximizing the diversity among multiple augmented novel domains as measured by the Sinkhorn distance in a unit sphere space, making automated augmentation tractable. Adversarial training and deep reinforcement learning are employed to efficiently search the objectives. Quantitative and qualitative experiments on 11 publicly-accessible fundus image datasets (four for retinal vessel segmentation, four for optic disc and cup (OD/OC) segmentation and three for retinal lesion segmentation) are comprehensively performed. Two OCTA datasets for retinal vasculature segmentation are further involved to validate cross-modality generalization. Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches by considerable margins on retinal vessel, OD/OC and lesion segmentation tasks. The learned policies are empirically validated to be model-agnostic and can transfer well to other models. The source code is available at https://github.com/CRazorback/AADG.
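The diversity proxy in AADG is an entropic-regularised optimal-transport (Sinkhorn) distance between embeddings of augmented domains on the unit sphere; the sketch below computes such a distance for two feature sets. The regularisation strength and iteration count are illustrative choices, not the paper's settings.

```python
# Minimal sketch of an entropic-regularised Sinkhorn distance between two
# sets of unit-normalised feature embeddings, the kind of diversity measure
# maximised between augmented "novel domains". Parameters are illustrative.
import torch
import torch.nn.functional as F

def sinkhorn_distance(x, y, eps=0.1, n_iters=100):
    """x: (n, d), y: (m, d); returns the regularised optimal-transport cost."""
    x = F.normalize(x, dim=1)                      # project onto the unit sphere
    y = F.normalize(y, dim=1)
    cost = torch.cdist(x, y, p=2) ** 2             # squared Euclidean cost matrix
    a = torch.full((x.size(0),), 1.0 / x.size(0))  # uniform source marginal
    b = torch.full((y.size(0),), 1.0 / y.size(0))  # uniform target marginal
    K = torch.exp(-cost / eps)
    u = torch.ones_like(a)
    for _ in range(n_iters):                       # Sinkhorn-Knopp iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]             # transport plan
    return (plan * cost).sum()

if __name__ == "__main__":
    x, y = torch.randn(64, 32), torch.randn(64, 32) + 0.5
    print(f"Sinkhorn distance: {sinkhorn_distance(x, y).item():.4f}")
```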
Collapse
|