1
Wang Y, Yang D, Xu L, Yang S, Wang W, Zheng C, Zhang X, Wu B, Yin H, Yang Z, Xu H. Deep learning-based arterial subtraction images improve the detection of LR-TR algorithm for viable HCC on extracellular agents-enhanced MRI. Abdom Radiol (NY) 2024;49:3078-3087. PMID: 38642094. DOI: 10.1007/s00261-024-04277-w.
Abstract
PURPOSE To determine the role of deep learning-based arterial subtraction images in viability assessment on extracellular agent-enhanced MRI using the LR-TR algorithm. METHODS Patients diagnosed with HCC who underwent locoregional therapy were retrospectively collected. We constructed a deep learning-based subtraction model that automatically generated arterial subtraction images. Two radiologists assigned LR-TR categories on ordinary images, and again on ordinary images plus arterial subtraction images after a 2-month washout period. The reference standard for viability was tumor stain on digital subtraction hepatic angiography within 1 month after MRI. RESULTS A total of 286 observations in 105 patients were enrolled; 157 observations were viable and 129 nonviable according to the reference standard. The sensitivity and accuracy of the LR-TR algorithm for detecting viable HCC increased significantly with the addition of arterial subtraction images (87.9% vs. 67.5%, p < 0.001; 86.4% vs. 75.9%, p < 0.001), while specificity decreased slightly, without a significant difference (84.5% vs. 86.0%, p = 0.687). The AUC of the LR-TR algorithm increased significantly with the addition of arterial subtraction images (0.862 vs. 0.768, p < 0.001), and inter-reader agreement also improved (0.857 vs. 0.727). CONCLUSION Extended application of deep learning-based arterial subtraction images on extracellular agent-enhanced MRI can increase the sensitivity of the LR-TR algorithm for detecting viable HCC without a significant change in specificity.
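Once the arterial and pre-contrast phases are aligned (in the study, by the deep-learning model), the subtraction step itself is just a voxel-wise difference. A minimal numpy sketch, with hypothetical array names and a simple clip of negative values, not the authors' implementation:

```python
import numpy as np

def arterial_subtraction(arterial: np.ndarray, precontrast: np.ndarray) -> np.ndarray:
    """Voxel-wise subtraction of the pre-contrast phase from the arterial
    phase. Assumes both volumes are already co-registered on the same grid
    (the alignment the paper delegates to its deep-learning model)."""
    if arterial.shape != precontrast.shape:
        raise ValueError("phases must be aligned to the same grid")
    diff = arterial.astype(np.float32) - precontrast.astype(np.float32)
    # negative voxels carry no enhancement information; clip to zero
    return np.clip(diff, 0, None)
```

The clip-at-zero convention is one common display choice; signed difference maps are equally possible.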
Affiliation(s)
- Yuxin Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
- Dawei Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
- Lixue Xu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
- Siwei Yang
- Department of Interventional Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Wei Wang
- Department of Radiology, Zhuozhou Hospital, Zhuozhou, 072750, China
- Chao Zheng
- Shukun (Beijing) Technology Co., Ltd., Beijing, 102200, China
- Xiaolan Zhang
- Shukun (Beijing) Technology Co., Ltd., Beijing, 102200, China
- Botong Wu
- Shukun (Beijing) Technology Co., Ltd., Beijing, 102200, China
- Hongxia Yin
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
- Hui Xu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Yongan Road 95, West District, Beijing, 100050, China
2
Strittmatter A, Schad LR, Zöllner FG. Deep learning-based affine medical image registration for multimodal minimal-invasive image-guided interventions - A comparative study on generalizability. Z Med Phys 2024;34:291-317. PMID: 37355435. PMCID: PMC11156775. DOI: 10.1016/j.zemedi.2023.05.003.
Abstract
Multimodal image registration is used in medical image analysis because it allows the integration of complementary data from multiple imaging modalities. In recent years, various neural network-based approaches to medical image registration have been published, but because they use different datasets, a fair comparison is not possible. In this study, 20 different neural networks for affine registration of medical images were implemented. The networks' performance and their generalizability to new datasets were evaluated on two multimodal datasets - a synthetic and a real patient dataset - of three-dimensional CT and MR images of the liver. The networks were first trained semi-supervised on the synthetic dataset and then evaluated on the synthetic dataset and the unseen patient dataset. Afterwards, the networks were fine-tuned on the patient dataset and evaluated on it again. The networks were compared using our own CNN as a benchmark and a conventional affine registration with SimpleElastix as a baseline. Six networks significantly improved the pre-registration Dice coefficient on the synthetic dataset (p < 0.05), and nine networks significantly improved it on the patient dataset, and are therefore able to generalize to the new datasets used in our experiments. Many machine learning-based methods have been proposed for affine multimodal medical image registration, but few generalize to new data and applications. Further research is therefore needed to develop medical image registration techniques that can be applied more widely.
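The Dice coefficient used above to score registrations is a simple overlap measure on binary masks. A minimal numpy sketch (a generic reconstruction of the metric, not the authors' code):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / total)
```

In a registration study, this is typically evaluated twice per case (before and after registration) on the warped moving-image liver mask versus the fixed-image liver mask, and the improvement is tested for significance.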
Affiliation(s)
- Anika Strittmatter
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Lothar R Schad
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Frank G Zöllner
- Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
3
Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023;24:e14177. PMID: 37823748. PMCID: PMC10647957. DOI: 10.1002/acm2.14177.
Abstract
Multimodal image registration is key to many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, supervised deep learning is the state-of-the-art approach, in which registration is conducted end-to-end in one shot; as a consequence, a large amount of ground-truth data is required to improve the results of deep neural networks for registration. Moreover, supervised methods may yield models that are biased towards the annotated structures. An alternative approach that addresses these challenges is unsupervised learning. In this study, we designed a novel deep unsupervised convolutional neural network (CNN)-based model for affine co-registration of computed tomography and magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset of 1100 CT/MR slice pairs from the brains of 110 neuropsychiatric patients with and without tumors. Next, 12 landmarks were selected by an experienced radiologist and annotated on each slice, allowing computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, which is appreciable for clinical applications. Moreover, the approach registered the images in an acceptable 203 ms; the short registration time and high accuracy make it suitable for clinical use. The results show that the proposed method achieves competitive performance against related approaches in both computation time and the evaluation metrics.
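The target registration error reported above is the mean Euclidean distance between corresponding landmark pairs after registration. A minimal numpy sketch of that computation (a generic reconstruction, not the authors' code; landmark coordinates are assumed to share one physical coordinate system):

```python
import numpy as np

def target_registration_error(moved_pts, fixed_pts) -> float:
    """Mean Euclidean distance between corresponding landmarks after
    registration. Both inputs are (N, D) point arrays in the same
    physical coordinate system."""
    moved = np.asarray(moved_pts, dtype=float)
    fixed = np.asarray(fixed_pts, dtype=float)
    if moved.shape != fixed.shape:
        raise ValueError("landmark sets must correspond point-for-point")
    return float(np.linalg.norm(moved - fixed, axis=1).mean())
```

Dice, Hausdorff, and Jaccard are computed on segmentation masks rather than landmarks, so they complement TRE by measuring region overlap instead of point accuracy.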
Affiliation(s)
- Samaneh Abbasi
- Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh
- Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
4
Tang X, Jafargholi Rangraz E, Heeren R, Coudyzer W, Maleux G, Baete K, Verslype C, Gooding MJ, Deroose CM, Nuyts J. Segmentation-guided multi-modal registration of liver images for dose estimation in SIRT. EJNMMI Phys 2022;9:3. PMID: 35076801. PMCID: PMC8790002. DOI: 10.1186/s40658-022-00432-8.
Abstract
Purpose Selective internal radiation therapy (SIRT) requires good liver registration of multi-modality images to obtain precise dose prediction and measurement. This study investigated the feasibility of liver registration of CT and MR images guided by segmentation of the liver and its landmarks, and evaluated the influence of the resulting lesion registration on dose estimation. Methods The liver was segmented with a convolutional neural network (CNN), and the landmarks were segmented manually. Our image-based registration software and its liver-segmentation-guided extension (CNN-guided) were tuned and evaluated with 49 CT and 26 MR images from 20 SIRT patients. Each liver registration was evaluated by the root mean square distance (RMSD) of the mean surface distance between manually delineated liver contours and the mass center distance between manually delineated landmarks (lesions, clips, etc.). The root mean square of RMSDs (RRMSD) was used to evaluate all liver registrations. The CNN-guided registration was further extended by incorporating landmark segmentations (CNN&LM-guided) to assess the value of additional landmark guidance. To evaluate the influence of segmentation-guided registration on dose estimation, the mean dose and the volume percentage receiving at least 70 Gy (V70) estimated on the 99mTc-labeled macro-aggregated albumin (99mTc-MAA) SPECT were computed, based either on lesions from the reference 99mTc-MAA CT (reference lesions) or on lesions from the registered floating CT or MR images (registered lesions) using the CNN- or CNN&LM-guided algorithms. Results The RRMSD decreased for the floating CTs and MRs by 1.0 mm (11%) and 3.4 mm (34%) using CNN guidance for the image-based registration, and by 2.1 mm (26%) and 1.4 mm (21%) using landmark guidance for the CNN-guided registration. The quartiles for the relative mean dose difference (the V70 difference) between the reference and registered lesions and their correlations [25th, 75th; r] are as follows: [−5.5% (−1.3%), 5.6% (3.4%); 0.97 (0.95)] and [−12.3% (−2.1%), 14.8% (2.9%); 0.96 (0.97)] for the CNN&LM- and CNN-guided CT to CT registrations, and [−7.7% (−6.6%), 7.0% (3.1%); 0.97 (0.90)] and [−15.1% (−11.3%), 2.4% (2.5%); 0.91 (0.78)] for the CNN&LM- and CNN-guided MR to CT registrations. Conclusion Guidance by CNN liver segmentations and landmarks markedly improves the performance of image-based registration. The small mean dose change between the reference and registered lesions demonstrates the feasibility of applying the CNN&LM- or CNN-guided registration to volume-level dose prediction. The CNN&LM- and CNN-guided registrations for CT can be applied to voxel-level dose prediction, given their small V70 change for most lesions; the CNN-guided MR to CT registration still needs landmark guidance to keep voxel-level dose estimates stable.
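The RMSD and RRMSD summary statistics used above are simple nested root-mean-square aggregations: an RMSD over the per-landmark distances of one registration, then a root mean square over the per-registration RMSDs. A minimal numpy sketch (a generic reconstruction, not the study's code):

```python
import numpy as np

def rmsd(values) -> float:
    """Root mean square of a set of per-landmark distances (e.g. in mm)
    for a single registration."""
    v = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean(v ** 2)))

def rrmsd(per_registration_rmsds) -> float:
    """Root mean square of per-registration RMSDs: the single summary
    score (RRMSD) used above to rate a whole set of registrations."""
    return rmsd(per_registration_rmsds)
```

Because the root mean square weights large errors more heavily than a plain mean, a drop in RRMSD indicates that the worst-aligned cases improved, not just the average case.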
5
Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021;20:15330338211016386. PMID: 34142614. PMCID: PMC8216350. DOI: 10.1177/15330338211016386.
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to be familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology according to the different task categories of DL algorithms. This work clarifies the possibilities for further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Han Bai
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Wang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yu Hou
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Lan Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Yaoxiong Xia
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Zhirui Yan
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenrui Chen
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Li Chang
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
- Wenhui Li
- Department of Radiation Oncology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
6
Hasenstab K, Cunha GM, Ichikawa S, Dehkordy SF, Lee MH, Kim SJ, Schlein A, Covarrubias Y, Sirlin CB, Fowler KJ. CNN color-coded difference maps accurately display longitudinal changes in liver MRI-PDFF. Eur Radiol 2021;31:5041-5049. PMID: 33449180. DOI: 10.1007/s00330-020-07649-0.
Abstract
OBJECTIVES To assess the feasibility of a CNN-based liver registration algorithm for generating difference maps that visually display spatiotemporal changes in liver PDFF, without the need for manual annotations. METHODS This retrospective exploratory study included 25 patients with suspected or confirmed NAFLD who underwent PDFF-MRI at two time points at our institution. PDFF difference maps were generated by applying a CNN-based liver registration algorithm and then subtracting the follow-up from the baseline PDFF maps. The difference maps were post-processed by smoothing (5 cm² round kernel) and applying a categorical color scale. Two fellowship-trained abdominal radiologists and one radiology resident independently reviewed the difference maps to visually determine segmental PDFF change. Their visual assessment was compared with manual ROI-based measurements of each Couinaud segment and whole-liver PDFF using intraclass correlation (ICC) and Bland-Altman analysis. Inter-reader agreement for visual assessment was also calculated (ICC). RESULTS The mean patient age was 49 years (12 males). Baseline and follow-up PDFF ranged from 2.0% to 35.3% and from 3.5% to 32.0%, respectively. PDFF changes ranged from −20.4% to 14.1%. ICCs against the manual reference exceeded 0.95 for each reader, except for segment 2 (ICC = 0.86-0.91 for two readers) and segment 4a (ICC = 0.94 for reader 3). Bland-Altman limits of agreement were within 5% for all three readers. Inter-reader agreement for visually assessed PDFF change (whole liver and segmental) was excellent (ICCs > 0.96), except for segment 2 (ICC = 0.93). CONCLUSIONS Visual assessment of liver segmental PDFF changes using a CNN-generated difference map agreed strongly with manual estimates performed by an expert reader and yielded high inter-reader agreement.
KEY POINTS • Visual assessment of longitudinal changes in quantitative liver MRI can be performed using a CNN-generated difference map and yields strong agreement with manual estimates performed by expert readers.
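The display pipeline described above (register, subtract follow-up from baseline, smooth with a round kernel, color-code categorically) can be sketched in a few lines. The kernel radius and category edges below are illustrative placeholders, not the study's values, and the disk-kernel smoothing is one plausible reading of "round kernel":

```python
import numpy as np
from scipy import ndimage

def pdff_difference_map(baseline, followup, kernel_radius_px=5):
    """Category-labeled PDFF difference map for color-coded display:
    subtract the follow-up map from the (already co-registered) baseline
    map, smooth with a round mean kernel, and bin into coarse categories.
    Radius and bin edges are illustrative, not the study's values."""
    diff = np.asarray(baseline, float) - np.asarray(followup, float)
    # build a round (disk-shaped) averaging kernel of the given pixel radius
    r = kernel_radius_px
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disk = (x ** 2 + y ** 2 <= r ** 2).astype(float)
    smoothed = ndimage.convolve(diff, disk / disk.sum(), mode="nearest")
    # bin edges (percentage points) for a categorical color scale
    edges = np.array([-5.0, -1.5, 1.5, 5.0])
    return np.digitize(smoothed, edges)  # integer labels 0..4
```

Each integer label would then be mapped to one display color; smoothing before binning suppresses speckle so the categories reflect regional rather than voxel-level change.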
Affiliation(s)
- Kyle Hasenstab
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Department of Mathematics and Statistics, San Diego State University, San Diego, CA, USA
- Guilherme Moura Cunha
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Soudabeh Fazeli Dehkordy
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Min Hee Lee
- Soonchunhyang University Bucheon Hospital, Gyeonggi-do, South Korea
- Soo Jin Kim
- National Cancer Center, Gyeonggi-do, Republic of Korea
- Alexandra Schlein
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Yesenia Covarrubias
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Claude B Sirlin
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
- Kathryn J Fowler
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
7
Brunsing RL, Fowler KJ, Yokoo T, Cunha GM, Sirlin CB, Marks RM. Alternative approach of hepatocellular carcinoma surveillance: abbreviated MRI. Hepatoma Res 2020;6:59. PMID: 33381651. PMCID: PMC7771881. DOI: 10.20517/2394-5079.2020.50.
Abstract
This review focuses on emerging abbreviated magnetic resonance imaging (AMRI) surveillance of patients with chronic liver disease for hepatocellular carcinoma (HCC). This surveillance strategy has been proposed as a high-sensitivity alternative to ultrasound for identifying patients with early-stage HCC, particularly in patients with cirrhosis or obesity, in whom sonographic visualization of small tumors may be compromised. Three general AMRI approaches have been developed and studied in the literature - non-contrast AMRI, dynamic contrast-enhanced AMRI, and hepatobiliary phase contrast-enhanced AMRI - each comprising a small number of selected sequences specifically tailored for HCC detection. The rationale, general technique, advantages and disadvantages, and diagnostic performance of each AMRI approach are explained. Additionally, current gaps in knowledge and future directions are discussed. Based on emerging evidence, we cautiously recommend the use of AMRI for HCC surveillance in situations where ultrasound is compromised.
Affiliation(s)
- Ryan L. Brunsing
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Kathryn J. Fowler
- Liver Imaging Group, Department of Radiology, University of California San Diego, San Diego, CA 92093, USA
- Takeshi Yokoo
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Guilherme Moura Cunha
- Liver Imaging Group, Department of Radiology, University of California San Diego, San Diego, CA 92093, USA
- Claude B. Sirlin
- Liver Imaging Group, Department of Radiology, University of California San Diego, San Diego, CA 92093, USA
- Robert M. Marks
- Department of Radiology, Naval Medical Center San Diego, San Diego, CA 92134, USA
- Department of Radiology, Uniformed Services University of the Health Sciences, Bethesda, MD 20892, USA