1
Jonnalagedda P, Weinberg B, Min TL, Bhanu S, Bhanu B. Computational modeling of tumor invasion from limited and diverse data in Glioblastoma. Comput Med Imaging Graph 2024; 117:102436. PMID: 39342741. DOI: 10.1016/j.compmedimag.2024.102436. Received 02/18/2024; accepted 09/17/2024.
Abstract
For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with patients' median survival rate and response to therapy. Studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures a patient undergoes and to optimize resources across the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations, one that has not been studied as extensively. The pattern of tumor growth shapes the surrounding tissue accordingly, and this impact likewise reflects tumor properties. Modeling how tumor growth impacts the surrounding tissue can reveal important information about patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of Tumor Invasion (TI) on surrounding tissue based on change in mutation status, subsequently assessing its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests demonstrate that TI-GAN can realistically model tumor invasion under practical challenges of medical datasets such as limited data and high intra-class heterogeneity.
Affiliation(s)
- Padmaja Jonnalagedda
- Department of Electrical and Computer Engineering, University of California, Riverside, United States of America.
- Brent Weinberg
- Department of Radiology and Imaging Sciences, Emory University, Atlanta GA, United States of America
- Taejin L Min
- Department of Radiology and Imaging Sciences, Emory University, Atlanta GA, United States of America
- Shiv Bhanu
- Department of Radiology, Riverside Community Hospital, Riverside CA, United States of America
- Bir Bhanu
- Department of Electrical and Computer Engineering, University of California, Riverside, United States of America
2
Rao F, Lyu T, Feng Z, Wu Y, Ni Y, Zhu W. A landmark-supervised registration framework for multi-phase CT images with cross-distillation. Phys Med Biol 2024; 69:115059. PMID: 38768601. DOI: 10.1088/1361-6560/ad4e01. Received 01/04/2024; accepted 05/20/2024.
Abstract
Objective. Multi-phase computed tomography (CT) has become a leading modality for identifying hepatic tumors. Nevertheless, misalignment between the images of different phases poses a challenge in accurately identifying and analyzing the patient's anatomy. Conventional registration methods typically concentrate on either intensity-based features or landmark-based features in isolation, thus imposing limitations on registration accuracy. Method. We establish a nonrigid cycle-registration network that leverages semi-supervised learning, in which a point-distance term based on the Euclidean distance between registered landmark points is introduced into the loss function. Additionally, a cross-distillation strategy incorporating response-based knowledge about the distances between feature points is proposed in network training to further improve registration performance. Results. We conducted experiments on multi-center liver CT datasets to evaluate the proposed method. The results demonstrate that our method outperforms baseline methods in terms of target registration error. Additionally, Dice scores of the warped tumor masks were calculated; our method consistently achieved the highest scores among all compared methods, specifically 82.9% and 82.5% on the hepatocellular carcinoma and intrahepatic cholangiocarcinoma datasets, respectively. Significance. The superior registration performance indicates the method's potential to serve as an important tool in hepatic tumor identification and analysis.
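The landmark-supervised term described above can be illustrated with a minimal sketch: a point-distance loss that averages the Euclidean distance between registered (warped) landmarks and their fixed counterparts. The function name and data layout are illustrative assumptions, not the authors' code.

```python
import math

def landmark_distance_loss(warped_pts, fixed_pts):
    """Point-distance term: mean Euclidean distance between corresponding
    landmark pairs (each point an (x, y, z) tuple). Illustrative sketch,
    not the paper's implementation."""
    assert len(warped_pts) == len(fixed_pts)
    total = 0.0
    for p, q in zip(warped_pts, fixed_pts):
        total += math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return total / len(warped_pts)
```

In practice such a term would be added, with a weight, to the intensity-based similarity and smoothness losses rather than used alone.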
Affiliation(s)
- Fan Rao
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Tianling Lyu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Zhan Feng
- Department of Radiology, College of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou 311100, People's Republic of China
- Yuanfeng Wu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Yangfan Ni
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Wentao Zhu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
3
Deng L, Zou Y, Yang X, Wang J, Huang S. L2NLF: a novel linear-to-nonlinear framework for multi-modal medical image registration. Biomed Eng Lett 2024; 14:497-509. PMID: 38645595. PMCID: PMC11026354. DOI: 10.1007/s13534-023-00344-1. Received 07/22/2023; accepted 12/11/2023.
Abstract
In recent years, deep learning has driven significant progress in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal medical image registration, multimodal registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion, which reduces the difference between two images without changing the authenticity of the medical images, thus transforming multimodal registration into monomodal registration. The second, nonlinear stage is essentially unsupervised deformable registration based on a deep neural network. A new registration network, CrossMorph, is designed: a deep neural network with a U-net-like structure. As the backbone of the encoder, the volume CrossFormer block better extracts local and global information, and the booster module promotes the reduction of deep and shallow features. Qualitative and quantitative experiments on T1 and T2 brain data from 240 patients show that L2NLF achieves an excellent effect in the image conversion stage at very low computational cost, without altering the authenticity of the converted images. Compared with current state-of-the-art registration methods, CrossMorph effectively reduces the average surface distance, improves the Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.
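The Dice score used in this and several later evaluations has a simple closed form, 2|A∩B| / (|A| + |B|); a minimal sketch over flat binary masks (names illustrative):

```python
def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 values."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

For 3D volumes the masks would simply be flattened before the call; the arithmetic is unchanged.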
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 China
- Yanchao Zou
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 China
- Xin Yang
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060 Guangdong China
- Jing Wang
- Institute for Brain Research and Rehabilitation, South China Normal University, Zhongshan Avenue, Guangzhou, 510631 China
- Sijuan Huang
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, 510060 Guangdong China
4
Han Z, Huang W. Discrete residual diffusion model for high-resolution prostate MRI synthesis. Phys Med Biol 2024; 69:055024. PMID: 38271725. DOI: 10.1088/1361-6560/ad229e. Received 07/10/2023; accepted 01/25/2024.
Abstract
Objective. High-resolution magnetic resonance imaging (HR MRI) is an effective tool for diagnosing prostate cancer (PCa), but it requires patients to remain immobile for extended periods, increasing the chance of image distortion due to motion. One solution is to apply super-resolution (SR) techniques to low-resolution (LR) images to create a higher-resolution version. However, existing medical SR models suffer from issues such as excessive smoothness and mode collapse. In this paper, we propose a novel generative model that avoids the problems of existing models, called the discrete residual diffusion model (DR-DM). Approach. First, the forward process of DR-DM gradually disrupts the input via a fixed Markov chain, producing a sequence of latent variables with increasing noise, while the backward process learns the conditional transition distribution and gradually matches the target data distribution. By optimizing a variant of the variational lower bound, training the diffusion model effectively addresses the issue of mode collapse. Second, to focus DR-DM on recovering high-frequency details, we synthesize residual images instead of synthesizing HR MRI directly. The residual image represents the difference between the HR image and the up-sampled LR MR image; we convert it into discrete image tokens with a shorter sequence length using a vector quantized variational autoencoder (VQ-VAE), which reduces computational complexity. Third, a transformer architecture is integrated to model the relationship between the LR MRI and the residual image, capturing their long-range dependencies and improving the fidelity of reconstructed images. Main results. Extensive experimental validation was performed on two popular yet challenging magnetic resonance image super-resolution tasks, with comparison to five state-of-the-art methods. Significance. Our experiments on the Prostate-Diagnosis and PROSTATEx datasets demonstrate that DR-DM significantly improves the signal-to-noise ratio of prostate MRI, resulting in greater clarity and improved diagnostic accuracy for patients.
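The residual formulation in the Approach, the residual being the difference between the HR image and the up-sampled LR image, with the synthesized residual added back to recover the HR estimate, can be sketched as follows. Flat lists stand in for image tensors, and the function names are illustrative:

```python
def residual_image(hr, lr_up):
    """Element-wise residual between the HR image and the up-sampled LR
    image; this difference is the quantity the model is trained to synthesize."""
    return [h - l for h, l in zip(hr, lr_up)]

def reconstruct_hr(lr_up, predicted_residual):
    """Recover an HR estimate by adding a synthesized residual back onto
    the up-sampled LR image."""
    return [l + r for l, r in zip(lr_up, predicted_residual)]
```

The round trip is exact by construction: reconstructing from the true residual returns the original HR image.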
Affiliation(s)
- Zhitao Han
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, People's Republic of China
- Wenhui Huang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, People's Republic of China
5
Ma X, He J, Liu X, Liu Q, Chen G, Yuan B, Li C, Xia Y. Hierarchical cumulative network for unsupervised medical image registration. Comput Biol Med 2023; 167:107598. PMID: 37913614. DOI: 10.1016/j.compbiomed.2023.107598. Received 07/18/2023; accepted 10/17/2023.
Abstract
Unsupervised deep learning techniques have gained increasing popularity in deformable medical image registration. However, existing methods usually overlook the optimal similarity position between moving and fixed images. To tackle this issue, we propose a novel hierarchical cumulative network (HCN), which explicitly considers the optimal similarity position with an effective Bidirectional Asymmetric Registration Module (BARM). The BARM simultaneously learns two asymmetric displacement vector fields (DVFs) to optimally warp both the moving and fixed images to their optimal similar shape along the geodesic path. Furthermore, we incorporate the BARM into a Laplacian pyramid network with hierarchical recursion, in which the moving image at the lowest level of the pyramid is warped successively toward the fixed image at that level to capture multiple DVFs. We then accumulate these DVFs and up-sample them to warp the moving images at higher levels of the pyramid toward the fixed image at the top level. The entire system is end-to-end and jointly trained in an unsupervised manner. Extensive experiments on two public 3D brain MRI datasets demonstrate that our HCN outperforms both traditional and state-of-the-art registration methods. To further evaluate its performance, we tested HCN on the validation set of the MICCAI Learn2Reg 2021 challenge. Additionally, a cross-dataset evaluation was conducted to assess generalization. Experimental results show that HCN is an effective deformable registration method with excellent generalization performance.
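The pyramid accumulation step, accumulating coarse-level DVFs and up-sampling them toward finer levels, can be sketched in one dimension. Additive accumulation is used here as a common simplification of true field composition (which would warp one field by the other), and both the names and the nearest-neighbour up-sampling are illustrative assumptions:

```python
def upsample_dvf(dvf, factor=2):
    """Nearest-neighbour up-sampling of a 1-D displacement field.
    Displacements are multiplied by the factor so they remain valid
    in the finer grid's coordinates."""
    out = []
    for d in dvf:
        out.extend([d * factor] * factor)
    return out

def accumulate_dvfs(coarse_dvf, fine_dvf, factor=2):
    """Up-sample the coarse field and add the finer one: a simplified
    stand-in for composing the two warps level by level."""
    up = upsample_dvf(coarse_dvf, factor)
    return [c + f for c, f in zip(up, fine_dvf)]
```

Repeating `accumulate_dvfs` up the pyramid yields the cumulative field applied at the top level.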
Affiliation(s)
- Xinke Ma
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China.
- Jiang He
- Huiying Medical Technology Co., Ltd., Room A206, B2, Dongsheng Science and Technology Park, Haidian District, Beijing 100192, China.
- Xing Liu
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China.
- Qin Liu
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China.
- Geng Chen
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China.
- Bo Yuan
- Sichuan Provincial Health Information Center (Sichuan Provincial Health and Medical Big Data Center), Chengdu 610041, China.
- Changyang Li
- Sydney Polytechnic Institute, NSW 2000, Australia.
- Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China.
6
Ma X, Cui H, Li S, Yang Y, Xia Y. Deformable medical image registration with global-local transformation network and region similarity constraint. Comput Med Imaging Graph 2023; 108:102263. PMID: 37487363. DOI: 10.1016/j.compmedimag.2023.102263. Received 12/03/2022; accepted 06/07/2023.
Abstract
Deformable medical image registration achieves fast and accurate alignment between two images, enabling medical professionals to analyze images of different subjects in a unified anatomical space; as such, it plays an important role in many medical image studies. Current deep learning (DL)-based approaches for image registration directly learn the spatial transformation from one image to another, relying on a convolutional neural network and ground truth or similarity metrics. However, these methods use only a global similarity energy function to evaluate the similarity of an image pair, ignoring the similarity of regions of interest (ROIs) within the images, which can limit registration accuracy and affect the analysis of specific ROIs. Additionally, DL-based methods often estimate global spatial transformations directly, without considering the local spatial transformations of ROIs. To address these issues, we propose a novel global-local transformation network with a region similarity constraint that maximizes the similarity of ROIs within the images and estimates global and local spatial transformations simultaneously. Experiments on four public 3D MRI datasets demonstrate that the proposed method achieves the highest registration accuracy and generalization among state-of-the-art methods.
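A region similarity constraint of the kind described, scoring similarity within ROIs rather than only globally, can be sketched as a masked error term. MSE is used here purely for simplicity; the paper's actual similarity metric may differ, and the names are illustrative:

```python
def region_mse(img_a, img_b, roi_mask):
    """Mean squared error restricted to voxels where roi_mask is 1.
    A purely global similarity term would average over all voxels,
    diluting misalignment inside small ROIs."""
    diffs = [(a - b) ** 2 for a, b, m in zip(img_a, img_b, roi_mask) if m]
    return sum(diffs) / len(diffs) if diffs else 0.0
```

In a training loss, one such term per ROI would be weighted and added to the global similarity energy.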
Affiliation(s)
- Xinke Ma
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Hengfei Cui
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Shuoyan Li
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Yibo Yang
- King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
- Yong Xia
- National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China
7
Che T, Wang X, Zhao K, Zhao Y, Zeng D, Li Q, Zheng Y, Yang N, Wang J, Li S. AMNet: Adaptive multi-level network for deformable registration of 3D brain MR images. Med Image Anal 2023; 85:102740. PMID: 36682155. DOI: 10.1016/j.media.2023.102740. Received 10/13/2021; accepted 01/03/2023.
Abstract
Three-dimensional (3D) deformable image registration is a fundamental technique in medical image analysis. Although it has been extensively investigated, current deep-learning-based registration models may struggle with deformations of varying complexity. This paper proposes an adaptive multi-level registration network (AMNet) to retain the continuity of the deformation field and to achieve high-performance registration for 3D brain MR images. First, we design a lightweight registration network with an adaptive growth strategy to learn the deformation field from multi-level wavelet sub-bands, which facilitates both global and local optimization and achieves high-performance registration. Second, AMNet is designed for image-wise registration: it adapts the local importance of a region according to the complexity of its deformation, thereby improving registration efficiency and maintaining the continuity of the deformation field. Experimental results on five publicly available brain MR datasets and a synthetic brain MR dataset show that our method outperforms state-of-the-art medical image registration approaches.
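The multi-level wavelet sub-bands that AMNet learns from can be illustrated with one level of a 1-D Haar decomposition, which splits a signal into an approximation (low-frequency) and a detail (high-frequency) sub-band. This normalized variant is a sketch of the idea, not the paper's exact filter bank:

```python
def haar_step(signal):
    """One level of a 1-D Haar decomposition: pairwise averages form the
    approximation sub-band, pairwise half-differences the detail sub-band.
    Recursing on the approximation yields the multi-level sub-bands.
    Assumes an even-length input."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail
```

In 3D the same separable filtering runs along each axis, producing eight sub-band volumes per level.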
Affiliation(s)
- Tongtong Che
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, Australia.
- Kun Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Yan Zhao
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Debin Zeng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Qiongling Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, China
- Ning Yang
- Department of Neurosurgery, Qilu Hospital of Shandong University and Brain Science Research Institute, Shandong University, Jinan, 250012, China
- Jian Wang
- Department of Neurosurgery, Qilu Hospital of Shandong University and Brain Science Research Institute, Shandong University, Jinan, 250012, China; Department of Biomedicine, University of Bergen, Jonas Lies Vei 91, 5009 Bergen, Norway
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
8
Li M, Hu S, Li G, Zhang F, Li J, Yang Y, Zhang L, Liu M, Xu Y, Fu D, Zhang W, Wang X. The Successive Next Network as Augmented Regularization for Deformable Brain MR Image Registration. Sensors (Basel) 2023; 23:3208. PMID: 36991918. PMCID: PMC10058981. DOI: 10.3390/s23063208. Received 02/05/2023; accepted 03/08/2023.
Abstract
Deep-learning-based registration methods not only save time but also automatically extract deep features from images. To obtain better registration performance, many scholars use cascade networks to realize coarse-to-fine registration. However, such cascade networks increase network parameters n-fold and entail long training and testing stages. In this paper, we use a cascade network only in the training stage. Unlike other approaches, the role of the second network is to improve the registration performance of the first network, functioning as an augmented regularization term in the whole process. In the training stage, the mean squared error loss between the dense deformation field (DDF) estimated by the second network and the zero field is added to the loss function, constraining the learned DDF to tend toward 0 at each position and compelling the first network to learn a better deformation field that improves its registration performance. In the testing stage, only the first network is used to estimate the DDF; the second network is not used again. The advantages of this design are twofold: (1) it retains the good registration performance of the cascade network; (2) it retains the time efficiency of a single network in the testing stage. The experimental results show that the proposed method effectively improves registration performance compared to other state-of-the-art methods.
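The augmented regularization term described above, an MSE between the second network's DDF and the zero field, reduces to the mean squared magnitude of the field's components. A minimal sketch, with illustrative names and a flat layout standing in for the real field tensor:

```python
def zero_field_mse(ddf):
    """MSE between a dense deformation field and the zero field: penalizing
    the second network's residual deformation pushes it toward 0 at every
    position, forcing the first network to account for the full deformation.
    `ddf` is a sequence of displacement vectors."""
    comps = [c for vec in ddf for c in vec]
    return sum(c * c for c in comps) / len(comps)
```

During training this term is summed with the first network's similarity and smoothness losses; at test time it (and the second network) disappears entirely.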
Affiliation(s)
- Shunbo Hu
- Guoqiang Li
- Correspondence: (S.H.); (G.L.); Tel.: +86-156-5397-6667 (S.H.)
9
Han T, Wu J, Luo W, Wang H, Jin Z, Qu L. Review of Generative Adversarial Networks in mono- and cross-modal biomedical image registration. Front Neuroinform 2022; 16:933230. PMID: 36483313. PMCID: PMC9724825. DOI: 10.3389/fninf.2022.933230. Received 04/30/2022; accepted 10/13/2022.
Abstract
Biomedical image registration refers to aligning corresponding anatomical structures among different images, which is critical to many tasks such as brain atlas building, tumor growth monitoring, and image fusion-based medical diagnosis. However, high-throughput biomedical image registration remains challenging due to inherent variations in intensity, texture, and anatomy resulting from different imaging modalities, sample preparation methods, or developmental stages of the imaged subject. Recently, Generative Adversarial Networks (GANs) have attracted increasing interest in both mono- and cross-modal biomedical image registration due to their ability to reduce modality variance and their adversarial training strategy. This paper provides a comprehensive survey of GAN-based mono- and cross-modal biomedical image registration methods. According to their implementation strategies, we organize these methods into four categories: modality translation, symmetric learning, adversarial strategies, and joint training. The key concepts, main contributions, and advantages and disadvantages of the different strategies are summarized and discussed. Finally, we analyze the statistics of all cited works from different points of view and identify future trends for GAN-based biomedical image registration studies.
Affiliation(s)
- Tingting Han
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Jun Wu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Wenting Luo
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Huiming Wang
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Zhe Jin
- School of Artificial Intelligence, Anhui University, Hefei, China
- Lei Qu
- Ministry of Education Key Laboratory of Intelligent Computing and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
10
Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. PMID: 35662369. PMCID: PMC9167371. DOI: 10.1186/s13244-022-01237-0. Received 02/13/2022; accepted 05/11/2022.
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential to generate synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review explores how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing research and development of GAN-based approaches. The review followed the PRISMA-ScR guidelines for study search and selection. The search was conducted on five popular scientific databases. Screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. The review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs can enhance the performance of AI methods applied to brain MRI data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar.
- Rafiul Biswas
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar.
11
A Transformer-Based Coarse-to-Fine Wide-Swath SAR Image Registration Method under Weak Texture Conditions. Remote Sensing 2022. DOI: 10.3390/rs14051175.
Abstract
As an all-weather, day-and-night remote sensing data source, SAR (Synthetic Aperture Radar) images are widely applied, and their registration accuracy directly impacts the effectiveness of downstream tasks. Existing registration algorithms mainly focus on small sub-images, and accurate matching methods for large-size images are lacking. This paper proposes a high-precision, rapid, dense-matching method for large-size SAR images. The method comprises four steps: down-sampled image pre-registration, sub-image acquisition, dense matching, and the transformation solution. First, the ORB (Oriented FAST and Rotated BRIEF) operator and the GMS (Grid-based Motion Statistics) method are combined to perform rough matching on the semantically rich down-sampled image, and a group of clustering centers and corresponding sub-images is obtained from the feature point pairs. Subsequently, a Transformer-based deep learning method registers the sub-images under weak texture conditions. Finally, the global transformation is obtained through RANSAC (Random Sample Consensus). Compared with the SOTA algorithm, our method increases the number of correct matching points by more than 2.47 times and reduces the root mean squared error (RMSE) by more than 4.16%. The experimental results demonstrate that the proposed method is efficient and accurate, providing a new approach to SAR image registration.
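The final consensus step, solving the global transformation with RANSAC, can be illustrated with a toy one-dimensional version that recovers a translation from correspondences contaminated by outliers. The real pipeline fits a full geometric transform from dense matches; everything here, including the function name, is an illustrative simplification:

```python
import random

def ransac_translation(pairs, threshold=1.0, iters=100, seed=0):
    """Toy RANSAC: repeatedly hypothesize a translation from one random
    correspondence, count inliers within the threshold, and keep the
    hypothesis with the largest consensus set."""
    rng = random.Random(seed)
    best_t, best_inliers = 0.0, -1
    for _ in range(iters):
        x, y = rng.choice(pairs)
        t = y - x  # hypothesized translation from a minimal sample
        inliers = sum(1 for a, b in pairs if abs((b - a) - t) <= threshold)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t
```

Because the outlier pair gathers almost no support, the consensus count reliably selects the true translation even without any outlier pre-filtering.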