1. Alksas A, Sharafeldeen A, Balaha HM, Haq MZ, Mahmoud A, Ghazal M, Alghamdi NS, Alhalabi M, Yousaf J, Sandhu H, El-Baz A. Advanced OCTA imaging segmentation: Unsupervised, non-linear retinal vessel detection using modified self-organizing maps and joint MGRF modeling. Comput Methods Programs Biomed 2024; 254:108309. [PMID: 39002431] [DOI: 10.1016/j.cmpb.2024.108309]
Abstract
BACKGROUND AND OBJECTIVE: This paper proposes a fully automated and unsupervised stochastic segmentation approach using a two-level joint Markov-Gibbs Random Field (MGRF) to detect the vascular system in retinal Optical Coherence Tomography Angiography (OCTA) images, a critical step in developing Computer-Aided Diagnosis (CAD) systems for detecting retinal diseases. METHODS: The first level models the appearance of OCTA images and their spatially smoothed versions using a new probabilistic model based on a Linear Combination of Discrete Gaussians (LCDG), whose parameters are estimated with a modified Expectation Maximization (EM) algorithm. The second level models the label maps of OCTA images, comprising the vascular system and other retinal tissues, using an MGRF with parameters estimated analytically from the input images. The proposed approach employs modified self-organizing maps as a MAP-based optimizer that maximizes the joint likelihood and handles the joint MGRF model in a new, unsupervised way; it deviates from traditional stochastic optimization approaches and leverages non-linear optimization to achieve more accurate segmentation results. RESULTS: Evaluated quantitatively on a dataset of 204 subjects, the framework achieves a Dice similarity coefficient of 0.92 ± 0.03, a 95th-percentile bidirectional Hausdorff distance of 0.69 ± 0.25, and an accuracy of 0.93 ± 0.03, confirming its superior performance. CONCLUSIONS: The proposed unsupervised, fully automated approach deviates from traditional methods and achieves more accurate segmentation of the vascular system from OCTA images, demonstrating its potential to aid the development of CAD systems for detecting retinal diseases.
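The evaluation metrics quoted above follow standard definitions. Below is a minimal sketch, assuming binary NumPy masks for prediction and ground truth (the paper's own evaluation code is not reproduced), of the Dice similarity coefficient and the 95th-percentile bidirectional Hausdorff distance in pixels.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hd95(pred, gt):
    """95th-percentile bidirectional Hausdorff distance (in pixels).

    Assumes both masks contain at least one foreground pixel.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    # distance of every pixel to the nearest foreground pixel of the other mask
    dist_to_gt = distance_transform_edt(~gt)
    dist_to_pred = distance_transform_edt(~pred)
    forward = dist_to_gt[pred]      # predicted pixels -> ground truth
    backward = dist_to_pred[gt]     # ground-truth pixels -> prediction
    return np.percentile(np.concatenate([forward, backward]), 95)
```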
Affiliation(s)
- Ahmed Alksas
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Hossam Magdy Balaha
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammad Z Haq
- School of Medicine, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Harpal Sandhu
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA.
2. Brown EE, Guy AA, Holroyd NA, Sweeney PW, Gourmet L, Coleman H, Walsh C, Markaki AE, Shipley R, Rajendram R, Walker-Samuel S. Physics-informed deep generative learning for quantitative assessment of the retina. Nat Commun 2024; 15:6859. [PMID: 39127778] [PMCID: PMC11316734] [DOI: 10.1038/s41467-024-50911-y]
Abstract
Disruption of retinal vasculature is linked to various diseases, including diabetic retinopathy and macular degeneration, leading to vision loss. We present a novel algorithmic approach that generates highly realistic digital models of human retinal blood vessels based on established biophysical principles, including fully connected arterial and venous trees with a single inlet and outlet. Using physics-informed generative adversarial networks (PI-GAN), this approach enables segmentation and reconstruction of blood vessel networks with no human input and outperforms human labelling. Segmentation of the DRIVE and STARE retinal photograph datasets provided near state-of-the-art vessel segmentation after training on only a small (n = 100) simulated dataset. Our findings highlight the potential of PI-GAN for accurate retinal vasculature characterization, with implications for improving early disease detection, monitoring disease progression, and improving patient care.
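A minimal sketch of the unpaired adversarial idea described above: a segmentation generator maps fundus photographs to vessel maps, and a critic compares those maps against physics-based simulated vessel masks, so no manual labels are needed. The SegNet and Critic modules, the losses, and all hyperparameters below are placeholder assumptions for illustration, not the published PI-GAN architecture.

```python
import torch
import torch.nn as nn

class SegNet(nn.Module):
    """Stand-in generator: fundus photo (3 channels) -> vessel probability map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Stand-in discriminator over single-channel vessel maps."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))   # one score per image

gen, critic = SegNet(), Critic()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(critic.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(photos, simulated_masks):
    """photos: real fundus images; simulated_masks: physics-based vessel maps.

    Supervision comes only from the simulator, never from manual labels.
    Batch sizes of the two tensors are assumed equal.
    """
    n = photos.size(0)
    fake_masks = gen(photos)
    # critic update: simulated masks are "real", predicted maps are "fake"
    d_loss = bce(critic(simulated_masks), torch.ones(n)) + \
             bce(critic(fake_masks.detach()), torch.zeros(n))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: make predicted vessel maps look like simulations
    g_loss = bce(critic(gen(photos)), torch.ones(n))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```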
Affiliation(s)
- Emmeline E Brown
- Centre for Computational Medicine, University College London, London, UK
- Moorfields Eye Hospital, London, UK
- Andrew A Guy
- Centre for Computational Medicine, University College London, London, UK
- Department of Engineering, University of Cambridge, Cambridge, UK
- Natalie A Holroyd
- Centre for Computational Medicine, University College London, London, UK
- Paul W Sweeney
- Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK
- Lucie Gourmet
- Centre for Computational Medicine, University College London, London, UK
- Hannah Coleman
- Centre for Computational Medicine, University College London, London, UK
- Claire Walsh
- Centre for Computational Medicine, University College London, London, UK
- Department of Mechanical Engineering, University College London, London, UK
- Athina E Markaki
- Department of Engineering, University of Cambridge, Cambridge, UK
- Rebecca Shipley
- Centre for Computational Medicine, University College London, London, UK
- Department of Mechanical Engineering, University College London, London, UK
- Ranjan Rajendram
- Moorfields Eye Hospital, London, UK
- Institute of Ophthalmology, University College London, London, UK
3. Messica S, Presil D, Hoch Y, Lev T, Hadad A, Katz O, Owens DR. Enhancing stroke risk and prognostic timeframe assessment with deep learning and a broad range of retinal biomarkers. Artif Intell Med 2024; 154:102927. [PMID: 38991398] [DOI: 10.1016/j.artmed.2024.102927]
Abstract
Stroke stands as a major global health issue, causing high death and disability rates and significant social and economic burdens. The effectiveness of existing stroke risk assessment methods is questionable due to their use of inconsistent and varying biomarkers, which may lead to unpredictable risk evaluations. This study introduces an automatic deep learning-based system for predicting stroke risk (both ischemic and hemorrhagic) and estimating the time frame of its occurrence, utilizing a comprehensive set of known retinal biomarkers from fundus images. Our system, tested on the UK Biobank and DRSSW datasets, achieved AUROC scores of 0.83 (95% CI: 0.79-0.85) and 0.93 (95% CI: 0.90-0.95), respectively. These results not only highlight our system's advantage over established benchmarks but also underscore the predictive power of retinal biomarkers in assessing stroke risk and the unique effectiveness of each biomarker. Additionally, the correlation between retinal biomarkers and cardiovascular diseases broadens the potential application of our system, making it a versatile tool for predicting a wide range of cardiovascular conditions.
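The headline AUROC figures with confidence intervals can be illustrated with a generic evaluation routine. The sketch below assumes binary stroke labels and continuous risk scores; the percentile bootstrap used for the interval is an assumption for illustration, since the abstract does not state how its confidence intervals were computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    """Point AUROC plus a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():   # resample must keep both classes
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```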
Affiliation(s)
- Dan Presil
- NEC Israeli Research Center, Herzliya, Israel
- Yaacov Hoch
- NEC Israeli Research Center, Herzliya, Israel
- Tsvi Lev
- NEC Israeli Research Center, Herzliya, Israel
- Aviel Hadad
- Ophthalmology Department, Soroka University Medical Center, Be'er Sheva, South District, Israel
- Or Katz
- NEC Israeli Research Center, Herzliya, Israel
- David R Owens
- Swansea University Medical School, Swansea, Wales, UK
4. Huang Y, Deng T. Multi-level spatial-temporal and attentional information deep fusion network for retinal vessel segmentation. Phys Med Biol 2023; 68:195026. [PMID: 37567227] [DOI: 10.1088/1361-6560/acefa0]
Abstract
Objective. Automatic segmentation of fundus vessels has the potential to enhance the judgment ability of intelligent disease diagnosis systems. Even though various methods have been proposed, accurately segmenting the fundus vessels remains a demanding task. The purpose of our study is to develop a robust and effective method to segment the vessels in human color retinal fundus images. Approach. We present a novel multi-level spatial-temporal and attentional information deep fusion network for the segmentation of retinal vessels, called MSAFNet, which enhances segmentation performance and robustness. Our method utilizes a multi-level spatial-temporal encoding module to obtain spatial-temporal information and a Self-Attention module to capture feature correlations at different levels of the network. Based on an encoder-decoder structure, we combine these features to obtain the final segmentation results. Main results. Through extensive experiments on four public datasets, our method achieves preferable performance compared with other state-of-the-art retinal vessel segmentation methods. Accuracy reaches the highest scores of 96.96%, 96.57%, and 96.48%, and Area Under Curve reaches 98.78%, 98.54%, and 98.27%, on the DRIVE, CHASE_DB1, and HRF datasets, respectively. Specificity achieves the highest scores of 98.58% and 99.08% on the DRIVE and STARE datasets. Significance. The experimental results demonstrate that our method has strong learning and representation capabilities and can accurately detect retinal blood vessels, thereby serving as a potential tool for assisting in diagnosis.
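The Self-Attention module mentioned above can be illustrated with a generic non-local attention block over a 2-D feature map. This is a standard formulation for reference only; the exact MSAFNet module, its channel reduction factor, and its placement in the network are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Generic self-attention over a 2-D feature map (not the exact MSAFNet block)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // reduction, 1)
        self.k = nn.Conv2d(channels, channels // reduction, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)            # (b, hw, c')
        k = self.k(x).flatten(2)                             # (b, c', hw)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)   # (b, hw, hw)
        v = self.v(x).flatten(2).transpose(1, 2)             # (b, hw, c)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                          # residual connection
```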
Affiliation(s)
- Yi Huang
- School of Information Science and Technology, Southwest Jiaotong University, 611756, Chengdu, People's Republic of China
- Tao Deng
- School of Information Science and Technology, Southwest Jiaotong University, 611756, Chengdu, People's Republic of China
5. Zhang H, Ni W, Luo Y, Feng Y, Song R, Wang X. TUnet-LBF: Retinal fundus image fine segmentation model based on transformer Unet network and LBF. Comput Biol Med 2023; 159:106937. [PMID: 37084640] [DOI: 10.1016/j.compbiomed.2023.106937]
Abstract
Segmentation of retinal fundus images is a crucial part of medical diagnosis. Automatic extraction of blood vessels in low-quality retinal images remains a challenging problem. In this paper, we propose a novel two-stage model, TUnet-LBF, combining Transformer Unet (TUnet) and a local binary energy function (LBF) model for coarse-to-fine segmentation of retinal vessels. In the coarse segmentation stage, the global topological information of blood vessels is obtained by TUnet. The neural network outputs the initial contour and probability maps, which are passed to the fine segmentation stage as prior information. In the fine segmentation stage, an energy-modulated LBF model is proposed to obtain the local detail information of blood vessels. The proposed model reaches accuracies (Acc) of 0.9650, 0.9681, and 0.9708 on the public datasets DRIVE, STARE, and CHASE_DB1, respectively. The experimental results demonstrate the effectiveness of each component of the proposed model.
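A rough sketch of the coarse-to-fine idea: the coarse network's probability map initializes a level set, which is then refined by a simplified local region-fitting iteration in the spirit of an LBF-style energy. The update rule, the parameters, and the use of the probability map as both the image to fit and the initialization are illustrative assumptions, not the paper's energy-modulated LBF model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lbf_refine(prob_map, iters=200, sigma=3.0, dt=0.1, lam=1.0):
    """Simplified local-region-fitting refinement of a coarse vessel probability map."""
    img = prob_map.astype(np.float64)
    phi = img - 0.5                               # initial level set from coarse map
    for _ in range(iters):
        inside = (phi > 0).astype(np.float64)
        # Gaussian-localized mean intensities inside/outside the current contour
        w_in = gaussian_filter(inside, sigma) + 1e-8
        w_out = gaussian_filter(1 - inside, sigma) + 1e-8
        m1 = gaussian_filter(img * inside, sigma) / w_in
        m2 = gaussian_filter(img * (1 - inside), sigma) / w_out
        # evolve the level set to reduce the local fitting error
        force = -lam * ((img - m1) ** 2 - (img - m2) ** 2)
        phi = phi + dt * force
    return (phi > 0).astype(np.uint8)             # refined binary vessel mask
```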
Affiliation(s)
- Hanyu Zhang
- School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China; College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China.
- Weihan Ni
- School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China.
- Yi Luo
- College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China.
- Yining Feng
- School of Geography, Liaoning Normal University, Dalian City, 116029, China.
- Ruoxi Song
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China.
- Xianghai Wang
- School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China.
6. Liu M, Wang Z, Li H, Wu P, Alsaadi FE, Zeng N. AA-WGAN: Attention augmented Wasserstein generative adversarial network with application to fundus retinal vessel segmentation. Comput Biol Med 2023; 158:106874. [PMID: 37019013] [DOI: 10.1016/j.compbiomed.2023.106874]
Abstract
In this paper, a novel attention augmented Wasserstein generative adversarial network (AA-WGAN) is proposed for fundus retinal vessel segmentation, where a U-shaped network with attention augmented convolution and a squeeze-excitation module serves as the generator. Complex vascular structures make some tiny vessels hard to segment; the proposed AA-WGAN handles this imperfect data property effectively, as the attention augmented convolution captures dependencies among pixels across the whole image to highlight regions of interest. By applying the squeeze-excitation module, the generator attends to the important channels of the feature maps while suppressing useless information. In addition, the gradient penalty method is adopted in the WGAN backbone to alleviate the generation of large numbers of repeated images caused by excessive concentration on accuracy. The proposed model is comprehensively evaluated on the DRIVE, STARE, and CHASE_DB1 datasets, and the results show that AA-WGAN is a competitive vessel segmentation model compared with several other advanced models, obtaining accuracies of 96.51%, 97.19%, and 96.94% on the three datasets, respectively. The effectiveness of the key components is validated by an ablation study, which also demonstrates the considerable generalization ability of AA-WGAN.
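The gradient penalty mentioned above follows the standard WGAN-GP formulation, sketched below; the critic interface, the penalty weight of 10, and the surrounding training loop are assumptions, and the AA-WGAN generator and critic themselves are not reproduced.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Standard WGAN-GP penalty on interpolates between real and generated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    # gradient of critic scores w.r.t. the interpolated inputs
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()

# typical use inside the critic update (penalty weight 10 is an assumption):
# d_loss = critic(fake).mean() - critic(real).mean() \
#          + 10.0 * gradient_penalty(critic, real, fake)
```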
7. Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Inf Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
8. Xu GX, Ren CX. SPNet: A novel deep neural network for retinal vessel segmentation based on shared decoder and pyramid-like loss. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.039]
9. Li H, Tang Z, Nan Y, Yang G. Human treelike tubular structure segmentation: A comprehensive review and future perspectives. Comput Biol Med 2022; 151:106241. [PMID: 36379190] [DOI: 10.1016/j.compbiomed.2022.106241]
Abstract
Various structures in human physiology follow a treelike morphology, which often expresses complexity at very fine scales. Examples of such structures are intrathoracic airways, retinal blood vessels, and hepatic blood vessels. Large collections of 2D and 3D images have been made available by medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), optical coherence tomography (OCT), and ultrasound, in which the spatial arrangement can be observed. Segmentation of these structures in medical imaging is of great importance, since analysis of the structure provides insights into disease diagnosis, treatment planning, and prognosis. Manual labelling of extensive data by radiologists is often time-consuming and error-prone. As a result, automated or semi-automated computational models have become a popular research field of medical imaging in the past two decades, and many have been developed to date. In this survey, we aim to provide a comprehensive review of currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Affiliation(s)
- Hao Li
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Department of Bioengineering, Faculty of Engineering, Imperial College London, London, United Kingdom
- Zeyu Tang
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Department of Bioengineering, Faculty of Engineering, Imperial College London, London, United Kingdom
- Yang Nan
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom
- Guang Yang
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Royal Brompton Hospital, London, United Kingdom.
10. A Hybrid Fusion Method Combining Spatial Image Filtering with Parallel Channel Network for Retinal Vessel Segmentation. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-022-07311-5]
11. Guo S. CSGNet: Cascade semantic guided net for retinal vessel segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103930]
12. Zhao J, Hou X, Pan M, Zhang H. Attention-based generative adversarial network in medical imaging: A narrative review. Comput Biol Med 2022; 149:105948. [PMID: 35994931] [DOI: 10.1016/j.compbiomed.2022.105948]
Abstract
As a popular probabilistic generative model, the generative adversarial network (GAN) has been successfully used not only in natural image processing but also in medical image analysis and computer-aided diagnosis. Despite their various advantages, the applications of GANs in medical image analysis face new challenges. The introduction of attention mechanisms, which resemble the human visual system's focus on task-related local image areas for information extraction, has drawn increasing interest. Recently proposed transformer-based architectures that leverage the self-attention mechanism encode long-range dependencies and learn highly expressive representations. This motivates us to summarize the applications of attention- and transformer-based GANs for medical image analysis. We reviewed recent advances in techniques combining various attention modules with different adversarial training schemes, and their applications in medical segmentation, synthesis, and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model to detect lesion areas and extract diagnosis-related feature information precisely, providing a useful tool for medical image processing and diagnosis. This review indicates that, despite the great potential, research on GANs with attention mechanisms for medical image analysis is still at an early stage. We highlight that the attention-based generative adversarial network is an efficient and promising computational model for advancing future research and applications in medical image analysis.
Affiliation(s)
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Meiqing Pan
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China.
13. Zhou Y, Wagner SK, Chia MA, Zhao A, Woodward-Court P, Xu M, Struyven R, Alexander DC, Keane PA. AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline. Transl Vis Sci Technol 2022; 11:12. [PMID: 35833885] [PMCID: PMC9290317] [DOI: 10.1167/tvst.11.7.12]
Abstract
Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases. Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify falsely gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets. Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. Artery/vein segmentation scores 0.66 on IOSTAR-AV, and optic disc segmentation achieves 0.94 on IDRiD. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement. Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs. Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.
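The model-ensemble strategy and the pixel-level F1 scores reported above can be sketched generically: probability maps from several trained models are averaged before thresholding, and F1 is computed against the reference mask. The callable-model interface and the 0.5 threshold below are assumptions, not AutoMorph's released implementation.

```python
import numpy as np
from sklearn.metrics import f1_score

def ensemble_segment(models, image, threshold=0.5):
    """Average vessel probability maps from an ensemble of models, then binarize.

    `models` is assumed to be a list of callables returning probability maps
    with the same spatial shape as `image`.
    """
    prob = np.mean([m(image) for m in models], axis=0)
    return (prob >= threshold).astype(np.uint8)

def pixel_f1(pred_mask, gt_mask):
    """Pixel-wise F1 score between a binary prediction and a reference mask."""
    return f1_score(gt_mask.ravel(), pred_mask.ravel())
```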
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Siegfried K. Wagner
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Mark A. Chia
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- An Zhao
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
- Peter Woodward-Court
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Institute of Health Informatics, University College London, London, UK
- Moucheng Xu
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Robbert Struyven
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Daniel C. Alexander
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
- Pearse A. Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
14. Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. Int J Multimed Inf Retr 2022; 11:333-368. [PMID: 35821891] [PMCID: PMC9264294] [DOI: 10.1007/s13735-022-00240-x]
Abstract
Recent advancements in deep generative models have shown significant potential for image synthesis, detection, segmentation, and classification. Segmenting medical images is considered a primary challenge in the biomedical imaging field, and various GAN-based models have been proposed in the literature to address medical segmentation challenges. Our search identified 151 papers; after twofold screening, 138 papers were selected for the final survey. We conduct a comprehensive survey of GAN applications to medical image segmentation, focusing on GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source codes. We then provide a detailed overview of GAN applications in the segmentation of different human diseases. We conclude with a critical discussion, the limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab
- Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
15. Generative adversarial network based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.075]
16. An innovative medical image synthesis based on dual GAN deep neural networks for improved segmentation quality. Appl Intell 2022. [DOI: 10.1007/s10489-022-03682-2]
17. You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis (Lond) 2022; 9:6. [PMID: 35109930] [PMCID: PMC8808986] [DOI: 10.1186/s40662-022-00277-3]
Abstract
BACKGROUND: Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GAN for medical imaging is increasing for image generation and translation, but it is not yet familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GAN in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS: We surveyed studies using GAN published before June 2021 and introduce various applications of GAN in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of GAN. RESULTS: In ophthalmology image domains, GAN can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GAN has several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS: The use of GAN has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GAN in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modelling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to access the appropriate GAN technique and maximize the potential of ophthalmology datasets for deep learning research.
Affiliation(s)
- Aram You
- School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea.
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea.
18. Li Z, Jia M, Yang X, Xu M. Blood Vessel Segmentation of Retinal Image Based on Dense-U-Net Network. Micromachines (Basel) 2021; 12:1478. [PMID: 34945328] [PMCID: PMC8705734] [DOI: 10.3390/mi12121478]
Abstract
The accurate segmentation of retinal blood vessels in fundus images is of great practical significance in helping doctors diagnose fundus diseases. Aiming to solve the problems of serious segmentation errors and low accuracy in traditional retinal segmentation, a scheme based on the combination of U-Net and Dense-Net was proposed. First, vascular feature information was enhanced by fusing contrast-limited histogram equalization, median filtering, data normalization, and multi-scale morphological transformation, and artifacts were corrected by adaptive gamma correction. Second, randomly extracted image blocks were used as training data to augment the data and improve generalization ability. Third, stochastic gradient descent was used to optimize the Dice loss function to improve segmentation accuracy. Finally, the Dense-U-net model was used for segmentation. The specificity, accuracy, sensitivity and AUC of this algorithm are 0.9896, 0.9698, 0.7931, 0.8946 and 0.9738, respectively. The proposed method improves the accuracy of vessel segmentation, particularly for small vessels.
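The combination of a Dice loss with stochastic gradient descent described above can be illustrated with a common soft Dice formulation; the exact loss used in the paper, and the DenseUNet model referenced in the comments, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    """Differentiable soft Dice loss over batched probability maps."""
    def __init__(self, eps=1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, probs, target):
        probs = probs.flatten(1)                  # (batch, pixels), already sigmoid-ed
        target = target.flatten(1).float()
        inter = (probs * target).sum(dim=1)
        dice = (2 * inter + self.eps) / (probs.sum(1) + target.sum(1) + self.eps)
        return 1 - dice.mean()

# optimization with SGD, as described in the abstract (model is hypothetical):
# model = DenseUNet()
# opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# loss = SoftDiceLoss()(model(patches), masks); loss.backward(); opt.step()
```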
19. Guo S. Fundus image segmentation via hierarchical feature learning. Comput Biol Med 2021; 138:104928. [PMID: 34662814] [DOI: 10.1016/j.compbiomed.2021.104928]
Abstract
Fundus Image Segmentation (FIS) is an essential procedure for the automated diagnosis of ophthalmic diseases. Recently, deep fully convolutional networks have been widely used for FIS with state-of-the-art performance. The representative deep model is the U-Net, which follows an encoder-decoder architecture. I believe it is suboptimal for FIS because consecutive pooling operations in the encoder lead to low-resolution representation and loss of detailed spatial information, which is particularly important for the segmentation of tiny vessels and lesions. Motivated by this, a high-resolution hierarchical network (HHNet) is proposed to learn semantic-rich high-resolution representations and preserve spatial details simultaneously. Specifically, a High-resolution Feature Learning (HFL) module with increasing dilation rates was first designed to learn high-level, high-resolution representations. Then, the HHNet was constructed by incorporating three HFL modules and two feature aggregation modules. The HHNet runs in a coarse-to-fine manner, and fine segmentation maps are output at the last level. Extensive experiments were conducted on fundus lesion segmentation, vessel segmentation, and optic cup segmentation. The experimental results reveal that the proposed method shows highly competitive or even superior segmentation performance at modest computation cost, indicating its potential advantages in clinical application.
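The idea behind the HFL module, learning high-resolution representations with increasing dilation rates, can be sketched as a stack of dilated 3x3 convolutions that keeps the feature map at full resolution. The channel counts, dilation rates, and residual connection below are assumptions, not the published HHNet design.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Stack of 3x3 convolutions with increasing dilation rates at full resolution."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:
            # dilation enlarges the receptive field without any pooling/downsampling
            layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.block(x)   # residual connection preserves spatial detail
```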
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, 710055, China.
20.