1
Bendszus M, Laghi A, Munuera J, Tanenbaum LN, Taouli B, Thoeny HC. MRI Gadolinium-Based Contrast Media: Meeting Radiological, Clinical, and Environmental Needs. J Magn Reson Imaging 2024; 60:1774-1785. [PMID: 38226697] [DOI: 10.1002/jmri.29181]
Abstract
Gadolinium-based contrast agents (GBCAs) are routinely used in magnetic resonance imaging (MRI). They are essential for choosing the most appropriate medical or surgical strategy for patients with serious pathologies, particularly in oncologic, inflammatory, and cardiovascular diseases. However, GBCAs have been associated with an increased risk of nephrogenic systemic fibrosis in patients with renal failure, as well as the possibility of deposition in the brain, bones, and other organs, even in patients with normal renal function. Research is underway to reduce the quantity of gadolinium injected, without compromising image quality and diagnosis. The next generation of GBCAs will enable a reduction in the gadolinium dose administered. Gadopiclenol is the first of this new generation of GBCAs, with high relaxivity, thus having the potential to reduce the gadolinium dose while maintaining good in vivo stability due to its macrocyclic structure. High-stability and high-relaxivity GBCAs will be one of the solutions for reducing the dose of gadolinium to be administered in clinical practice, while the development of new technologies, including optimization of MRI acquisitions, new contrast mechanisms, and artificial intelligence may help reduce the need for GBCAs. Future solutions may involve a combination of next-generation GBCAs and image-processing techniques to optimize diagnosis and treatment planning while minimizing exposure to gadolinium. Level of Evidence: 5. Technical Efficacy: Stage 3.
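The dose-sparing argument above follows from the linear dependence of the contrast-induced relaxation-rate change on concentration and relaxivity (ΔR1 = r1 · C): matching ΔR1 with a higher-relaxivity agent requires proportionally less gadolinium. A minimal sketch, with illustrative relaxivity values that are assumptions for demonstration, not figures from this article:

```python
# Why higher relaxivity permits a lower gadolinium dose.
# Longitudinal relaxation: R1 = R1_0 + r1 * C, so the contrast-induced
# change Delta_R1 = r1 * C scales linearly with concentration and relaxivity.

def equivalent_dose(reference_dose_mmol_kg, r1_reference, r1_new):
    """Dose of a new agent giving the same Delta_R1 as the reference agent."""
    return reference_dose_mmol_kg * r1_reference / r1_new

# Assumed relaxivities (L mmol^-1 s^-1, order of magnitude at ~1.5 T);
# treat these numbers as illustrative only.
r1_conventional = 5.0   # typical conventional macrocyclic GBCA
r1_high = 12.0          # hypothetical high-relaxivity agent

dose = equivalent_dose(0.1, r1_conventional, r1_high)
print(f"Matched-enhancement dose: {dose:.4f} mmol/kg")  # well below 0.1
```

The same linear relationship is why relaxivity, not just stability, drives the dose-reduction strategies discussed in the abstract.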
Affiliation(s)
- Martin Bendszus
- Department of Neuroradiology, University Hospital Heidelberg, Heidelberg, Germany
- Andrea Laghi
- Department of Medical Surgical Sciences and Translational Medicine, Faculty of Medicine and Psychology, Sapienza University of Rome, Sant'Andrea University Hospital, Rome, Italy
- Josep Munuera
- Advanced Medical Imaging, Artificial Intelligence, and Imaging-Guided Therapy Research Group, Institut de Recerca Sant Pau - Centre CERCA, Barcelona, Spain
- Diagnostic Imaging Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Bachir Taouli
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Harriet C Thoeny
- Department of Diagnostic and Interventional Radiology, Fribourg Cantonal Hospital, Fribourg, Switzerland
- Faculty of Medicine, University of Fribourg, Fribourg, Switzerland
2
Yin R, Dou Z, Wang Y, Zhang Q, Guo Y, Wang Y, Chen Y, Zhang C, Li H, Jian X, Qi L, Ma W. Preoperative CECT-Based Multitask Model Predicts Peritoneal Recurrence and Disease-Free Survival in Advanced Ovarian Cancer: A Multicenter Study. Acad Radiol 2024; 31:4488-4498. [PMID: 38693025] [DOI: 10.1016/j.acra.2024.04.024]
Abstract
RATIONALE AND OBJECTIVES Peritoneal recurrence is the predominant pattern of recurrence in advanced ovarian cancer (AOC) and portends a dismal prognosis. Accurate prediction of peritoneal recurrence and disease-free survival (DFS) is crucial to identify patients who might benefit from intensive treatment. We aimed to develop a predictive model for peritoneal recurrence and prognosis in AOC. METHODS In this retrospective multi-institution study of 515 patients, an end-to-end multi-task convolutional neural network (MCNN) comprising a segmentation convolutional neural network (CNN) and a classification CNN was developed and tested using preoperative CT images, and an MCNN-score was generated to indicate peritoneal recurrence and DFS status in patients with AOC. We evaluated the accuracy of the model for automatic segmentation and prognosis prediction. RESULTS The MCNN achieved promising segmentation performance, with a mean Dice coefficient of 84.3% (range: 78.8%-87.0%). The MCNN was able to predict peritoneal recurrence in the training (AUC 0.87; 95% CI 0.82-0.90), internal test (0.88; 0.85-0.92), and external test sets (0.82; 0.78-0.86). Similarly, the MCNN demonstrated consistently high accuracy in predicting recurrence, with AUCs of 0.85 (95% CI 0.82-0.88), 0.83 (95% CI 0.80-0.86), and 0.85 (95% CI 0.83-0.88). A high MCNN-score for recurrence was associated with poorer DFS (P < 0.0001), with hazard ratios of 0.1964 (95% CI: 0.1439-0.2680), 0.3249 (95% CI: 0.1896-0.5565), and 0.3458 (95% CI: 0.2582-0.4632). CONCLUSION The MCNN approach demonstrated high performance in predicting peritoneal recurrence and DFS in patients with AOC.
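The segmentation performance above is reported as a Dice coefficient, a standard overlap measure between predicted and ground-truth masks. A generic sketch of the metric (not the authors' implementation), with masks modeled as flat lists of 0/1 labels:

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
# Generic illustration of the reported metric, not the study's code.

def dice_coefficient(mask_a, mask_b):
    """Return the Dice overlap of two equal-length binary masks."""
    assert len(mask_a) == len(mask_b)
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match.
    return 1.0 if total == 0 else 2.0 * intersection / total

pred  = [0, 1, 1, 1, 0, 0]  # hypothetical predicted tumor mask
truth = [0, 1, 1, 0, 0, 0]  # hypothetical ground-truth mask
print(round(dice_coefficient(pred, truth), 3))  # 0.8
```

A mean Dice of 84.3%, as reported, means the predicted and manual tumor regions overlap substantially but not perfectly on average.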
Affiliation(s)
- Rui Yin
- National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China; School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin 300203, China
- Zhaoxiang Dou
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Yanyan Wang
- Department of CT and MRI, Shanxi Tumor Hospital, Taiyuan 030013, China
- Qian Zhang
- Department of Radiology, Baoding No. 1 Central Hospital, Baoding 071030, China
- Yijun Guo
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Yigeng Wang
- Department of Radiology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Ying Chen
- Department of Gynecologic Oncology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Chao Zhang
- Department of Bone Cancer, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Huiyang Li
- Department of Gynecology and Obstetrics, Tianjin Medical University General Hospital, Tianjin 300052, China
- Xiqi Jian
- School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin 300203, China
- Lisha Qi
- Department of Pathology, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
- Wenjuan Ma
- Department of Breast Imaging, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin 300060, China
3
Jung HK, Kim K, Park JE, Kim N. Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates. Korean J Radiol 2024; 25:959-981. [PMID: 39473088] [PMCID: PMC11524689] [DOI: 10.3348/kjr.2024.0392]
Abstract
Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep-learning tasks. However, the evaluation methods and clinical utility of these models have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models, as well as their utility in clinical tasks in the field of radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. It aims to guide readers regarding radiology practice and research using image-generative AI by 1) reviewing basic theories of image-generative AI, 2) discussing the methods used to evaluate the generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.
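One common family of evaluation methods for generated images, of the kind this review discusses, is Fréchet-style distribution comparison (as in the Fréchet Inception Distance): fit Gaussians to feature statistics of real and generated images and measure the distance between them. A didactic one-dimensional sketch only, using the closed form of the Fréchet (2-Wasserstein) distance between univariate Gaussians; real FID uses Inception-network features and full covariance matrices:

```python
# Toy 1-D Fréchet distance between Gaussians fitted to "real" and
# "generated" feature values: d^2 = (mu1 - mu2)^2 + (sigma1 - sigma2)^2.
# Illustrative values below are invented for the demonstration.

from statistics import mean, pstdev

def frechet_1d(xs, ys):
    mu1, mu2 = mean(xs), mean(ys)
    s1, s2 = pstdev(xs), pstdev(ys)
    return (mu1 - mu2) ** 2 + (s1 - s2) ** 2

real      = [0.9, 1.0, 1.1, 1.0]
good_fake = [0.95, 1.05, 1.0, 1.0]   # distribution close to the real one
bad_fake  = [2.0, 2.2, 1.8, 2.0]     # shifted distribution

print(frechet_1d(real, good_fake) < frechet_1d(real, bad_fake))  # True
```

Lower distance means the generated distribution better matches the real one, which is why such scores complement (but do not replace) reader studies of clinical utility.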
Affiliation(s)
- Ha Kyung Jung
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kiduk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
4
Tian S, Liu Y, Mao X, Xu X, He S, Jia L, Zhang W, Peng P, Wang J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Sci 2024; 115:3415-3425. [PMID: 39119927] [PMCID: PMC11447882] [DOI: 10.1111/cas.16304]
Abstract
A precise radiotherapy plan is crucial to ensure accurate segmentation of glioblastomas (GBMs) for radiation therapy. However, the traditional manual segmentation process is labor-intensive and heavily reliant on the experience of radiation oncologists. In this retrospective study, a novel auto-segmentation method is proposed to address these problems. To assess the method's applicability across diverse scenarios, we developed and evaluated it using a cohort of 148 eligible patients drawn from four multicenter datasets, with retrospectively collected noncontrast CT, multisequence MRI scans, and corresponding medical records. All patients were diagnosed with histologically confirmed high-grade glioma (HGG). A deep learning-based method (PKMI-Net) for automatically segmenting the gross tumor volume (GTV) and clinical target volumes (CTV1 and CTV2) of GBMs was proposed by leveraging prior knowledge from multimodal imaging. The proposed PKMI-Net demonstrated high accuracy in segmenting GTV, CTV1, and CTV2 in an 11-patient test set, achieving Dice similarity coefficients (DSC) of 0.94, 0.95, and 0.92; 95% Hausdorff distances (HD95) of 2.07, 1.18, and 3.95 mm; average surface distances (ASD) of 0.69, 0.39, and 1.17 mm; and relative volume differences (RVD) of 5.50%, 9.68%, and 3.97%, respectively. Moreover, the vast majority of the GTV, CTV1, and CTV2 contours produced by PKMI-Net are clinically acceptable and require no revision for clinical practice. In our multicenter evaluation, PKMI-Net exhibited consistent and robust generalizability across the various datasets, demonstrating its effectiveness in automatically segmenting GBMs. The proposed method using prior knowledge in multimodal imaging can improve the contouring accuracy of GBMs, which holds the potential to improve the quality and efficiency of GBM radiotherapy.
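The HD95 values reported above are 95th-percentile Hausdorff distances: the 95th percentile of the symmetric point-to-set surface distances between two contours, which damps the influence of single outlier points compared with the plain (maximum) Hausdorff distance. A generic sketch on 2-D point sets, not the authors' evaluation code:

```python
# 95% Hausdorff distance between two contours represented as point sets.
# Generic illustration of the metric, with simple example contours.

import math

def hd95(points_a, points_b):
    """95th percentile of symmetric nearest-neighbor distances."""
    def directed(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    dists = sorted(directed(points_a, points_b) + directed(points_b, points_a))
    idx = min(len(dists) - 1, math.ceil(0.95 * len(dists)) - 1)
    return dists[idx]

contour_a = [(0, 0), (1, 0), (2, 0), (3, 0)]
contour_b = [(0, 1), (1, 1), (2, 1), (3, 1)]
print(hd95(contour_a, contour_b))  # 1.0 (every point is 1 unit away)
```

An HD95 of ~2 mm for GTV, as reported, thus means that 95% of the surface points of the automatic contour lie within about 2 mm of the manual contour (and vice versa).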
Affiliation(s)
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Yinglong Liu
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Xinhui Mao
- Radiotherapy Center, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
- Xin Xu
- Department of Radiation Oncology, The Second Affiliated Hospital of Shandong First Medical University, Tai'an, China
- Shumeng He
- Intelligent Radiation Treatment Laboratory, United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Lecheng Jia
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Peng Peng
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Junjie Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
5
Tsui B, Calabrese E, Zaharchuk G, Rauschecker AM. Reducing Gadolinium Contrast With Artificial Intelligence. J Magn Reson Imaging 2024; 60:848-859. [PMID: 37905681] [DOI: 10.1002/jmri.29095]
Abstract
Gadolinium contrast is an important agent in magnetic resonance imaging (MRI), particularly in neuroimaging, where it can help identify blood-brain barrier breakdown from an inflammatory, infectious, or neoplastic process. However, gadolinium contrast has several drawbacks, including nephrogenic systemic fibrosis, gadolinium deposition in the brain and bones, and allergic-like reactions. As computer hardware and technology continue to evolve, machine learning has become a possible solution for eliminating or reducing the dose of gadolinium contrast. This review summarizes the clinical uses of gadolinium contrast, the risks of gadolinium contrast, and state-of-the-art machine learning methods that have been applied to reduce or eliminate gadolinium contrast administration, as well as their current limitations, with a focus on neuroimaging applications. Evidence Level: 3. Technical Efficacy: Stage 1.
Affiliation(s)
- Brian Tsui
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Evan Calabrese
- Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, California, USA
- Andreas M Rauschecker
- Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
6
Iqbal MS, Belal Bin Heyat M, Parveen S, Ammar Bin Hayat M, Roshanzamir M, Alizadehsani R, Akhtar F, Sayeed E, Hussain S, Hussein HS, Sawan M. Progress and trends in neurological disorders research based on deep learning. Comput Med Imaging Graph 2024; 116:102400. [PMID: 38851079] [DOI: 10.1016/j.compmedimag.2024.102400]
Abstract
In recent years, deep learning (DL) has emerged as a powerful tool in clinical imaging, offering unprecedented opportunities for the diagnosis and treatment of neurological disorders (NDs). This comprehensive review explores the multifaceted role of DL techniques in leveraging vast datasets to advance our understanding of NDs and improve clinical outcomes. Beginning with a systematic literature review, we delve into the utilization of DL, particularly focusing on multimodal neuroimaging data analysis, a domain that has witnessed rapid progress and garnered significant scientific interest. Our study categorizes and critically analyses numerous DL models, including convolutional neural networks (CNNs), LSTM-CNN, GAN, and VGG, to understand their performance across different types of neurological diseases. Through detailed analysis, we identify key benchmarks and datasets utilized in training and testing DL models, shedding light on the challenges and opportunities in clinical neuroimaging research. Moreover, we discuss the effectiveness of DL in real-world clinical scenarios, emphasizing its potential to revolutionize ND diagnosis and therapy. By synthesizing existing literature and describing future directions, this review not only provides insights into the current state of DL applications in ND analysis but also paves the way for the development of more efficient and accessible DL techniques. Finally, our findings underscore the transformative impact of DL in reshaping the landscape of clinical neuroimaging, offering hope for enhanced patient care and groundbreaking discoveries in the field of neurology. This review is beneficial for neuropathologists and new researchers in this field.
Affiliation(s)
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University of Azad Jammu & Kashmir, Bagh, Pakistan
- Md Belal Bin Heyat
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
- Saba Parveen
- College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China
- Mohamad Roshanzamir
- Department of Computer Engineering, Faculty of Engineering, Fasa University, Fasa, Iran
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation, Deakin University, VIC 3216, Australia
- Faijan Akhtar
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Eram Sayeed
- Kisan Inter College, Dhaurahara, Kushinagar, India
- Sadiq Hussain
- Department of Examination, Dibrugarh University, Assam 786004, India
- Hany S Hussein
- Electrical Engineering Department, Faculty of Engineering, King Khalid University, Abha 61411, Saudi Arabia; Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81528, Egypt
- Mohamad Sawan
- CenBRAIN Neurotech Center of Excellence, School of Engineering, Westlake University, Hangzhou, Zhejiang, China
7
Jung E, Kong E, Yu D, Yang H, Chicontwe P, Park SH, Jeon I. Generation of synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks and conditional denoising diffusion probabilistic models based on simultaneous 18F-FDG PET/MR image data of pyogenic spondylodiscitis. Spine J 2024; 24:1467-1477. [PMID: 38615932] [DOI: 10.1016/j.spinee.2024.04.007]
Abstract
BACKGROUND CONTEXT Cross-modality image generation from magnetic resonance (MR) to positron emission tomography (PET) using generative models can be expected to have complementary effects by addressing the limitations and maximizing the advantages inherent in each modality. PURPOSE This study aims to generate synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks (GANs) and conditional denoising diffusion probabilistic models (cDDPMs) based on simultaneous 18F-fluorodeoxyglucose (18F-FDG) PET/MR image data. STUDY DESIGN Retrospective study with prospectively collected clinical and radiological data. PATIENT SAMPLE This study included 94 patients (60 men and 34 women) with thoraco-lumbar pyogenic spondylodiscitis (PSD) treated from February 2017 to January 2020 in a single tertiary institution. OUTCOME MEASURES Quantitative and qualitative image similarity were analyzed between the real and synthetic PET/T2-weighted fat-saturation MR (T2FS) fusion images on the test dataset. METHODS We used paired spinal sagittal T2FS and PET/T2FS fusion images from simultaneous 18F-FDG PET/MR imaging examinations in patients with PSD, which were employed to generate synthetic PET/T2FS fusion images from T2FS images using a combination of Pix2Pix (U-Net generator + Least Squares GAN discriminator) and cDDPM algorithms. In the analyses of image similarity between the real and synthetic PET/T2FS fusion images, we adopted the mean peak signal-to-noise ratio (PSNR), mean structural similarity measurement (SSIM), mean absolute error (MAE), and mean squared error (MSE) for quantitative analysis, while the discrimination accuracy of three spine surgeons was applied for qualitative analysis. RESULTS A total of 2,082 pairs of T2FS and PET/T2FS fusion images were obtained from 172 examinations on 94 patients and randomly assigned to training, validation, and test datasets in an 8:1:1 ratio (1,664, 209, and 209 pairs). The quantitative analysis revealed a PSNR of 30.634 ± 3.437, SSIM of 0.910 ± 0.067, MAE of 0.017 ± 0.008, and MSE of 0.001 ± 0.001. The values of PSNR, MAE, and MSE significantly decreased as FDG uptake increased in the real PET/T2FS fusion images, with no significant correlation for SSIM. In the qualitative analysis, the overall accuracy in discriminating between real and synthetic PET/T2FS fusion images was 47.4%. CONCLUSIONS The combination of Pix2Pix and cDDPMs demonstrated the potential for cross-modal image generation from MR to PET images, with reliable quantitative and qualitative image similarity.
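The quantitative metrics above (MSE, MAE, PSNR) have simple closed forms; a minimal reference sketch for images flattened to lists of intensities in [0, 1], as a generic illustration rather than the study's evaluation pipeline (SSIM is omitted because it requires local windowed statistics):

```python
# Reference implementations of common image-similarity metrics.

import math

def mse(x, y):
    """Mean squared error between two equal-length intensity lists."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def mae(x, y):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def psnr(x, y, max_val=1.0):
    """PSNR = 10 * log10(MAX^2 / MSE); infinite for identical images."""
    err = mse(x, y)
    return math.inf if err == 0 else 10 * math.log10(max_val ** 2 / err)

real      = [0.2, 0.5, 0.8, 0.4]   # hypothetical intensities
synthetic = [0.25, 0.45, 0.8, 0.35]
print(round(mae(real, synthetic), 4))  # 0.0375
print(round(psnr(real, synthetic), 2))
```

Higher PSNR and lower MAE/MSE indicate closer agreement, which is how the reported PSNR of ~30.6 dB should be read.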
Affiliation(s)
- Euijin Jung
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Eunjung Kong
- Department of Nuclear Medicine, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
- Dongwoo Yu
- Department of Neurosurgery, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
- Heesung Yang
- School of Computer Science and Engineering, Kyungpook National University, Daegu, South Korea
- Philip Chicontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Ikchan Jeon
- Department of Neurosurgery, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
8
Li W, Zhao D, Zeng G, Chen Z, Huang Z, Lam S, Cheung ALY, Ren G, Liu C, Liu X, Lee FKH, Au KH, Lee VHF, Xie Y, Qin W, Cai J, Li T. Evaluating Virtual Contrast-Enhanced Magnetic Resonance Imaging in Nasopharyngeal Carcinoma Radiation Therapy: A Retrospective Analysis for Primary Gross Tumor Delineation. Int J Radiat Oncol Biol Phys 2024:S0360-3016(24)00750-8. [PMID: 38964419] [DOI: 10.1016/j.ijrobp.2024.06.015]
Abstract
PURPOSE To investigate the potential of virtual contrast-enhanced magnetic resonance imaging (VCE-MRI) for gross tumor volume (GTV) delineation of nasopharyngeal carcinoma (NPC) using multi-institutional data. METHODS AND MATERIALS This study retrospectively retrieved T1-weighted (T1w) and T2-weighted (T2w) MRI, gadolinium-based contrast-enhanced MRI (CE-MRI), and planning computed tomography (CT) of 348 biopsy-proven NPC patients from 3 oncology centers. A multimodality-guided synergistic neural network (MMgSN-Net) was trained on 288 patients to leverage complementary features in T1w and T2w MRI for VCE-MRI synthesis and was independently evaluated on 60 patients. Three board-certified radiation oncologists and 2 medical physicists participated in clinical evaluations covering 3 aspects: image quality assessment of the synthetic VCE-MRI, VCE-MRI in assisting target volume delineation, and effectiveness of VCE-MRI-based contours in treatment planning. The image quality assessment included distinguishability between VCE-MRI and CE-MRI, clarity of the tumor-to-normal tissue interface, and veracity of contrast enhancement in tumor invasion risk areas. Primary tumor delineation and treatment planning were manually performed by radiation oncologists and medical physicists, respectively. RESULTS The mean accuracy in distinguishing VCE-MRI from CE-MRI was 31.67%; no significant difference was observed in the clarity of the tumor-to-normal tissue interface between VCE-MRI and CE-MRI; for the veracity of contrast enhancement in tumor invasion risk areas, an accuracy of 85.8% was obtained. These results suggest that the image quality of VCE-MRI is highly similar to that of real CE-MRI. The mean dosimetric difference of planning target volumes was less than 1 Gy. CONCLUSIONS VCE-MRI is highly promising as a replacement for gadolinium-based CE-MRI in tumor delineation of NPC patients.
Affiliation(s)
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Dan Zhao
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, China
- Guangping Zeng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Zhi Chen
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Zhou Huang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, China
- Saikit Lam
- Research Institute for Smart Aging, The Hong Kong Polytechnic University, Hong Kong SAR, China; Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Andy Lai-Yin Cheung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Xi Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Francis Kar-Ho Lee
- Department of Clinical Oncology, Queen Elizabeth Hospital, Hong Kong SAR, China
- Kwok-Hung Au
- Department of Clinical Oncology, Queen Elizabeth Hospital, Hong Kong SAR, China
- Victor Ho-Fun Lee
- Department of Clinical Oncology, The University of Hong Kong, Hong Kong SAR, China
- Yaoqin Xie
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, China
- Wenjian Qin
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China; Research Institute for Smart Aging, The Hong Kong Polytechnic University, Hong Kong SAR, China; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
9
Ma Q, Liu Z, Zhang J, Fu C, Li R, Sun Y, Tong T, Gu Y. Multi-task reconstruction network for synthetic diffusion kurtosis imaging: Predicting neoadjuvant chemoradiotherapy response in locally advanced rectal cancer. Eur J Radiol 2024; 174:111402. [PMID: 38461737] [DOI: 10.1016/j.ejrad.2024.111402]
Abstract
PURPOSE To assess the feasibility and clinical value of synthetic diffusion kurtosis imaging (DKI) generated from diffusion weighted imaging (DWI) through multi-task reconstruction network (MTR-Net) for tumor response prediction in patients with locally advanced rectal cancer (LARC). METHODS In this retrospective study, 120 eligible patients with LARC were enrolled and randomly divided into training and testing datasets with a 7:3 ratio. The MTR-Net was developed for reconstructing Dapp and Kapp images from apparent diffusion coefficient (ADC) images. Tumor regions were manually segmented on both true and synthetic DKI images. The synthetic image quality and manual segmentation agreement were quantitatively assessed. The support vector machine (SVM) classifier was used to construct radiomics models based on the true and synthetic DKI images for pathological complete response (pCR) prediction. The prediction performance for the models was evaluated by the receiver operating characteristic (ROC) curve analysis. RESULTS The mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) for tumor regions were 0.212, 24.278, and 0.853, respectively, for the synthetic Dapp images and 0.516, 24.883, and 0.804, respectively, for the synthetic Kapp images. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), and Hausdorff distance (HD) for the manually segmented tumor regions were 0.786, 0.844, 0.755, and 0.582, respectively. For predicting pCR, the true and synthetic DKI-based radiomics models achieved area under the curve (AUC) values of 0.825 and 0.807 in the testing datasets, respectively. CONCLUSIONS Generating synthetic DKI images from DWI images using MTR-Net is feasible, and the efficiency of synthetic DKI images in predicting pCR is comparable to that of true DKI images.
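The AUC values reported for the pCR-prediction models have a useful probabilistic reading: the AUC equals the probability that a randomly chosen responder receives a higher model score than a randomly chosen non-responder (ties counting half). A minimal sketch of that rank-based computation, on invented scores rather than the study's radiomics features or SVM outputs:

```python
# ROC AUC via the pairwise (Mann-Whitney) formulation.

def roc_auc(scores, labels):
    """AUC = P(score of random positive > score of random negative)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # hypothetical classifier outputs
labels = [1,   1,   0,   1,   0,   0]    # 1 = pathological complete response
print(roc_auc(scores, labels))  # 8/9 ≈ 0.889
```

By this reading, the reported test-set AUC of ~0.81 means the synthetic-DKI radiomics model ranks a true responder above a non-responder roughly four times out of five.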
Affiliation(s)
- Qiong Ma, Zonglin Liu, Rong Li, Yiqun Sun, Tong Tong, Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Jiadong Zhang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999077, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Caixia Fu: MR Application Development, Siemens Shenzhen Magnetic Resonance Ltd., Shenzhen 518057, China
10
Hussein R, Shin D, Zhao MY, Guo J, Davidzon G, Steinberg G, Moseley M, Zaharchuk G. Turning brain MRI into diagnostic PET: 15O-water PET CBF synthesis from multi-contrast MRI via attention-based encoder-decoder networks. Med Image Anal 2024; 93:103072. [PMID: 38176356 PMCID: PMC10922206 DOI: 10.1016/j.media.2023.103072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 12/20/2023] [Accepted: 12/20/2023] [Indexed: 01/06/2024]
Abstract
Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of a wide range of neurological diseases. Positron emission tomography (PET) with radiolabeled water (15O-water) is the gold standard for measuring CBF in humans; however, it is not widely available because of its prohibitive costs and its reliance on short-lived radiopharmaceutical tracers that require onsite cyclotron production. Magnetic resonance imaging (MRI), in contrast, is more accessible and does not involve ionizing radiation. This study presents a convolutional encoder-decoder network with attention mechanisms to predict gold-standard 15O-water PET CBF from multi-contrast MRI scans, thus eliminating the need for radioactive tracers. The model was trained and validated using 5-fold cross-validation in a group of 126 subjects consisting of healthy controls and patients with cerebrovascular disease, all of whom underwent simultaneous 15O-water PET/MRI. The results demonstrate that the model can synthesize high-quality PET CBF measurements (average SSIM of 0.924 and PSNR of 38.8 dB) and is more accurate than concurrent and previous PET synthesis methods. We also demonstrate the clinical significance of the proposed algorithm by evaluating agreement in identifying vascular territories with impaired CBF. Such methods may enable more widespread and accurate CBF evaluation in larger cohorts who cannot undergo PET imaging because of radiation concerns, lack of access, or logistic challenges.
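The SSIM reported for the synthesized CBF maps is usually computed over small sliding windows; a single-window (global) sketch keeps the underlying formula visible without that machinery (toy arrays, not the study's data):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image. Reported SSIM values are
    usually the mean of windowed SSIM; this global variant shows the formula."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
noisy = ref + rng.normal(0.0, 0.1, (32, 32))
print(round(global_ssim(ref, ref), 3))  # identical images score 1.0
print(global_ssim(ref, noisy) < 1.0)    # degradation lowers the score
```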
Affiliation(s)
- Ramy Hussein, Michael Moseley, Greg Zaharchuk: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- David Shin: Global MR Applications & Workflow, GE Healthcare, Menlo Park, CA 94025, USA
- Moss Y Zhao: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA; Stanford Cardiovascular Institute, Stanford University, Stanford, CA 94305, USA
- Jia Guo: Department of Bioengineering, University of California, Riverside, CA 92521, USA
- Guido Davidzon: Division of Nuclear Medicine, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Gary Steinberg: Department of Neurosurgery, Stanford University, Stanford, CA 94304, USA
11
Foltyn-Dumitru M, Schell M, Rastogi A, Sahm F, Kessler T, Wick W, Bendszus M, Brugnara G, Vollmuth P. Impact of signal intensity normalization of MRI on the generalizability of radiomic-based prediction of molecular glioma subtypes. Eur Radiol 2024; 34:2782-2790. [PMID: 37672053 PMCID: PMC10957611 DOI: 10.1007/s00330-023-10034-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 05/09/2023] [Accepted: 06/16/2023] [Indexed: 09/07/2023]
Abstract
OBJECTIVES Radiomic features have shown encouraging results for non-invasive detection of molecular biomarkers, but the lack of guidelines for preprocessing MRI data has led to poor generalizability. Here, we assessed the influence of different MRI intensity normalization techniques on the performance of radiomics-based models for predicting molecular glioma subtypes. METHODS Preoperative MRI data from n = 615 patients with newly diagnosed glioma and known isocitrate dehydrogenase (IDH) and 1p/19q status were preprocessed using four different methods: no normalization (naive), N4 bias field correction (N4), and N4 followed by either WhiteStripe (N4/WS) or z-score normalization (N4/z-score). A total of 377 Image Biomarker Standardisation Initiative-compliant radiomic features were extracted from each normalized dataset, and 9 different machine-learning algorithms were trained for multiclass prediction of molecular glioma subtypes (IDH-mutant 1p/19q codeleted vs. IDH-mutant 1p/19q non-codeleted vs. IDH wild type). External testing was performed on public glioma datasets from UCSF (n = 410) and TCGA (n = 160). RESULTS The support vector machine yielded the best performance, with macro-average AUCs of 0.84 (naive), 0.84 (N4), 0.87 (N4/WS), and 0.87 (N4/z-score) in the internal test set. Both N4/WS and N4/z-score outperformed the other approaches in the external UCSF and TCGA test sets, with macro-average AUCs ranging from 0.85 to 0.87, replicating the performance of the internal test set, in contrast to macro-average AUCs of 0.19 to 0.45 for naive and 0.26 to 0.52 for N4 alone. CONCLUSION Intensity normalization of MRI data is essential for the generalizability of radiomics-based machine-learning models. Specifically, both the N4/WS and N4/z-score approaches preserved high model performance, yielding generalizable results when the developed radiomics-based machine-learning model was applied in an external, heterogeneous, multi-institutional setting.
CLINICAL RELEVANCE STATEMENT Intensity normalization such as N4/WS or N4/z-score can be used to develop reliable radiomics-based machine-learning models from heterogeneous multicentre MRI datasets and provides non-invasive prediction of glioma subtypes. KEY POINTS • MRI intensity normalization increases the stability of radiomics-based models and leads to better generalizability. • Intensity normalization did not appear relevant when the developed model was applied to homogeneous data from the same institution. • Radiomics-based machine-learning algorithms are a promising approach for simultaneous classification of the IDH and 1p/19q status of gliomas.
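The z-score step of the N4/z-score pipeline reduces to per-volume standardization; a minimal sketch (N4 bias-field correction itself requires a tool such as SimpleITK/ANTs and is not shown, and the intensity-threshold "brain mask" here is a hypothetical stand-in):

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score intensity normalization: subtract the mean and divide by the
    standard deviation, with statistics taken inside an optional mask
    (e.g. a brain mask) but applied to the whole volume."""
    vals = volume[mask] if mask is not None else volume
    return (volume - vals.mean()) / vals.std()

rng = np.random.default_rng(2)
vol = rng.normal(300.0, 40.0, (16, 16, 16))  # arbitrary scanner units
brain = vol > 250.0                          # stand-in for a real brain mask
norm = zscore_normalize(vol, mask=brain)
# Inside the mask, intensities now have zero mean and unit variance,
# making radiomic features comparable across scanners and sites.
print(abs(norm[brain].mean()) < 1e-9, abs(norm[brain].std() - 1.0) < 1e-9)
```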
Affiliation(s)
- Martha Foltyn-Dumitru, Marianne Schell, Aditya Rastogi, Gianluca Brugnara: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Section for Computational Neuroimaging, Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Felix Sahm: Department of Neuropathology, Heidelberg University Hospital, Heidelberg, Germany
- Tobias Kessler: Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Wolfgang Wick: Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany; Clinical Cooperation Unit Neurooncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Bendszus: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Philipp Vollmuth: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Section for Computational Neuroimaging, Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany
12
Wang B, Liu Y, Zhang J, Yin S, Liu B, Ding S, Qiu B, Deng X. Evaluating contouring accuracy and dosimetry impact of current MRI-guided adaptive radiation therapy for brain metastases: a retrospective study. J Neurooncol 2024; 167:123-132. [PMID: 38300388 PMCID: PMC10978730 DOI: 10.1007/s11060-024-04583-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Accepted: 01/22/2024] [Indexed: 02/02/2024]
Abstract
BACKGROUND Magnetic resonance imaging (MRI)-guided adaptive radiotherapy (MRgART) has gained increasing attention, showing clinical advantages over conventional radiotherapy. However, there are concerns regarding the accuracy of online target delineation and modification. In this study, we aimed to investigate the accuracy of brain metastases (BMs) contouring and its impact on dosimetry in 1.5 T MRI-guided online adaptive fractionated stereotactic radiotherapy (FSRT). METHODS Eighteen patients with 64 BMs were retrospectively evaluated. Pre-treatment 3.0 T MRI scans (gadolinium contrast-enhanced T1w, T1c) and initial 1.5 T MR-Linac scans (non-enhanced online-T1, T2, and FLAIR) were used for gross target volume (GTV) contouring. Five radiation oncologists independently contoured GTVs on pre-treatment T1c and initial online-T1, T2, and FLAIR images. We assessed intra-observer and inter-observer variations and analysed the dosimetric impact through treatment planning based on GTVs generated from online MRI, simulating current online adaptive radiotherapy practice. RESULTS The average Dice similarity coefficients (DSC) for inter-observer comparison were 0.79, 0.54, 0.59, and 0.64 for pre-treatment T1c, online-T1, T2, and FLAIR, respectively. Inter-observer variations were significantly smaller for the 3.0 T pre-treatment T1c than for the contrast-free online 1.5 T MR scans (P < 0.001). Compared with the T1c contours, the average DSC index of intra-observer contouring was 0.52-0.55 for online MRIs. For BMs larger than 3 cm3 and visible on all image sets, the average DSC indices were 0.69, 0.71, and 0.64 for online-T1, T2, and FLAIR, respectively, compared with the pre-treatment T1c contour. For BMs < 3 cm3, the average visibility rates were 22.3%, 41.3%, and 51.8% for online-T1, T2, and FLAIR, respectively. Simulated adaptive planning showed an average prescription dose coverage of 63.4-66.9% when evaluated against ground-truth planning target volumes (PTVs) generated on pre-treatment T1c, down from over 99% coverage by PTVs generated on online MRIs. CONCLUSIONS The accuracy of online target contouring was unsatisfactory for current MRI-guided online adaptive FSRT. Small lesions had poor visibility on 1.5 T non-contrast-enhanced MR-Linac images. Contour inaccuracies caused a one-third drop in prescription dose coverage for the target volume. Future studies should explore the feasibility of contrast agent administration during daily treatment in MRI-guided online adaptive FSRT procedures.
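Of the contour-agreement measures used in studies like this (DSC, PPV, SEN, HD), the Hausdorff distance is the least obvious to compute; a brute-force numpy sketch on toy 2D contour points (illustrative, not the study's contours):

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, D):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    # Pairwise distance matrix, shape (N, M)
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # directed a -> b
               d.min(axis=0).max())   # directed b -> a

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(a, b))  # → 2.0
```

Unlike the overlap-based DSC, the Hausdorff distance penalizes the single worst contour deviation, which is why the two are reported together.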
Affiliation(s)
- Bin Wang, Yimei Liu, Jun Zhang, Biaoshui Liu, Shouliang Ding, Bo Qiu, Xiaowu Deng: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, 651 East Dongfeng Road, Guangzhou, Guangdong, 510060, People's Republic of China
- Shaohan Yin: Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, People's Republic of China
13
Barkhof F, Parker GJ. The need for speed: recovering undersampled MRI scans for glioma imaging. Lancet Oncol 2024; 25:274-275. [PMID: 38423043 DOI: 10.1016/s1470-2045(24)00036-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2024] [Revised: 01/17/2024] [Accepted: 01/18/2024] [Indexed: 03/02/2024]
Affiliation(s)
- Frederik Barkhof, Geoff J. M. Parker: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1V 6BH, UK
14
Tatekawa H, Ueda D, Takita H, Matsumoto T, Walston SL, Mitsuyama Y, Horiuchi D, Matsushita S, Oura T, Tomita Y, Tsukamoto T, Shimono T, Miki Y. Deep learning-based diffusion tensor image generation model: a proof-of-concept study. Sci Rep 2024; 14:2911. [PMID: 38316892 PMCID: PMC10844503 DOI: 10.1038/s41598-024-53278-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2023] [Accepted: 01/29/2024] [Indexed: 02/07/2024] Open
Abstract
This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion-weighted images (DWI) and validated the similarity between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion-probing gradient (MPG), respectively. Identical imaging planes were paired to train the image-to-image translation model, which synthesized images for one MPG direction from the DWI; this process was repeated six times, once for each MPG direction. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared, and Bland-Altman plots of the original versus synthetic data were evaluated. Although the synthetic data in the test dataset showed a larger standard deviation across all values and a lower SNR than the original data, the Bland-Altman plots showed that the original and synthetic values were similarly distributed. Synthetic DTI could thus be generated from conventional DWI with an image-to-image translation model.
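The Bland-Altman comparison used above reduces to the mean bias and 95% limits of agreement of the paired differences; a sketch on simulated ROI values (the FA-like numbers are illustrative, not the study's data):

```python
import numpy as np

def bland_altman(original, synthetic):
    """Bland-Altman statistics: mean bias of the paired differences and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diff = synthetic - original
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(3)
orig = rng.normal(0.80, 0.10, 200)            # e.g. ROI-mean FA values
synth = orig + rng.normal(0.01, 0.02, 200)    # synthetic: small bias + noise
bias, lo, hi = bland_altman(orig, synth)
print(lo < bias < hi)  # the bias always lies inside the limits of agreement
```

Plotting `diff` against the pairwise means, with horizontal lines at `bias`, `lo`, and `hi`, reproduces the familiar Bland-Altman figure.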
Affiliation(s)
- Hiroyuki Tatekawa, Daiju Ueda, Hirotaka Takita, Toshimasa Matsumoto, Shannon L Walston, Yasuhito Mitsuyama, Daisuke Horiuchi, Shu Matsushita, Tatsushi Oura, Yuichiro Tomita, Taro Tsukamoto, Taro Shimono, Yukio Miki: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
15
Youssef G, Wen PY. Updated Response Assessment in Neuro-Oncology (RANO) for Gliomas. Curr Neurol Neurosci Rep 2024; 24:17-25. [PMID: 38170429 DOI: 10.1007/s11910-023-01329-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/11/2023] [Indexed: 01/05/2024]
Abstract
PURPOSE OF REVIEW The Response Assessment in Neuro-Oncology (RANO) criteria and their successive versions were developed by expert consensus to standardize response evaluation in glioma clinical trials. New patient-based data informed the development of updated response assessment criteria, RANO 2.0. RECENT FINDINGS In a recent study of patients with glioblastoma, the post-radiation brain MRI was a superior baseline compared with the pretreatment MRI, and confirmation scans were beneficial only within the first 12 weeks after completion of radiation in newly diagnosed disease. Evaluation of nonenhancing disease did not improve the correlation between progression-free survival and overall survival in either the newly diagnosed or the recurrent setting. RANO 2.0 recommends a single common set of response criteria for high- and low-grade gliomas, regardless of the treatment modality being evaluated. It also provides guidance on the evaluation of nonenhancing tumors and tumors with both enhancing and nonenhancing components.
Affiliation(s)
- Gilbert Youssef, Patrick Y Wen: Center for Neuro-Oncology, Dana-Farber Cancer Institute, 450 Brookline Avenue, Boston, MA 02215, USA; Division of Neuro-Oncology, Department of Neurology, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
16
Wamelink IJHG, Azizova A, Booth TC, Mutsaerts HJMM, Ogunleye A, Mankad K, Petr J, Barkhof F, Keil VC. Brain Tumor Imaging without Gadolinium-based Contrast Agents: Feasible or Fantasy? Radiology 2024; 310:e230793. [PMID: 38319162 PMCID: PMC10902600 DOI: 10.1148/radiol.230793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 08/07/2023] [Accepted: 08/14/2023] [Indexed: 02/07/2024]
Abstract
Gadolinium-based contrast agents (GBCAs) form the cornerstone of current primary brain tumor MRI protocols at all stages of the patient journey. Though an imperfect measure of tumor grade, GBCAs are repeatedly used for diagnosis and monitoring. In practice, however, radiologists will encounter situations where GBCA injection is not needed or is of doubtful benefit. Reducing GBCA administration could lessen the burden of (repeated) imaging on patients (especially vulnerable groups, such as children), minimize the risks of putative side effects, and benefit costs, logistics, and the environmental footprint. On the basis of the current literature, imaging strategies to reduce GBCA exposure for pediatric and adult patients with primary brain tumors are reviewed. Early postoperative MRI and fixed-interval imaging of gliomas are examples of GBCA exposure with uncertain survival benefit. Half-dose GBCAs for gliomas and T2-weighted imaging alone for meningiomas are among the options for reducing GBCA use. While most imaging guidelines recommend using GBCAs at all stages of diagnosis and treatment, non-contrast-enhanced sequences, such as arterial spin labeling, have shown great potential. Artificial intelligence methods that generate synthetic postcontrast images from decreased-dose or non-GBCA scans have shown promise as replacements for GBCA-dependent approaches. This review focuses on pediatric and adult gliomas and meningiomas. Special attention is paid to the quality and real-life applicability of the reviewed literature.
Collapse
Affiliation(s)
- Ivar J. H. G. Wamelink
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Aynur Azizova
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Thomas C. Booth
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Henk J. M. M. Mutsaerts
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Afolabi Ogunleye
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Kshitij Mankad
- From the Department of Radiology and Nuclear Medicine, Amsterdam
University Medical Center, VUMC Site, De Boelelaan 1117, Amsterdam 1081 HV, the
Netherlands (I.J.H.G.W., A.A., H.J.M.M.M., J.P., F.B., V.C.K.); Department of
Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, the Netherlands
(I.J.H.G.W., A.A., H.J.M.M.M., V.C.K.); School of Biomedical Engineering and
Imaging Sciences, King’s College London, London, United Kingdom (T.C.B.);
Department of Neuroradiology, King’s College Hospital, NHS Foundation
Trust, London, UK (T.C.B.); Department of Brain Imaging, Amsterdam Neuroscience,
Amsterdam, the Netherlands (H.J.M.M.M., F.B., V.C.K.); Department of Radiology,
Lagos State University Teaching Hospital, Ikeja, Nigeria Radiology (A.O.);
Department of Radiology, Great Ormond Street Hospital for Children, NHS
Foundation Trust, London, United Kingdom (K.M.); Institute of
Radiopharmaceutical Cancer Research, Helmholtz-Zentrum Dresden-Rossendorf,
Dresden, Germany (J.P.); and Queen Square Institute of Neurology and Centre for
Medical Image Computing, University College London, London, United Kingdom
(F.B.)
| | - Jan Petr
| | - Frederik Barkhof
| | - Vera C. Keil
| |
|
17
|
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893 PMCID: PMC10860468 DOI: 10.1002/acm2.14155] [Received: 03/21/2023] [Revised: 05/10/2023] [Accepted: 08/21/2023] [Indexed: 09/16/2023]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight the recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
| | - Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
| | - Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
| | - Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
| |
|
18
|
Raut P, Baldini G, Schöneck M, Caldeira L. Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors. Frontiers in Radiology 2024; 3:1336902. [PMID: 38304344 PMCID: PMC10830800 DOI: 10.3389/fradi.2023.1336902] [Received: 11/11/2023] [Accepted: 12/28/2023] [Indexed: 02/03/2024]
Abstract
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automatically achieved using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the desired number of inputs is present; in the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, data from 1,251 patients in the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR images with their corresponding multi-label segmentations, were used. This dataset was used to train the Pix2PixNIfTI model to generate synthetic MRI images for each image contrast. The segmentation model, namely DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images substituting for the missing input, in combination with the other original images, to assess the efficacy of the generated images for multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained in a similar range for the whole tumor when compared with segmentation of the original images (e.g., mean Dice for synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess the differences between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows brain tumor segmentation when one input image is missing.
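The per-region Dice scores quoted in this abstract follow the standard overlap definition, Dice = 2|P∩T| / (|P| + |T|). A minimal self-contained sketch (not the authors' evaluation code; the toy 3×3 masks are invented for illustration):

```python
def dice_score(pred, truth, label):
    """Dice coefficient for one label in a multi-class segmentation mask.

    pred and truth are same-shape nested lists of integer labels.
    """
    p = [v == label for row in pred for v in row]  # binary mask for this label
    t = [v == label for row in truth for v in row]
    inter = sum(1 for a, b in zip(p, t) if a and b)  # |P ∩ T|
    denom = sum(p) + sum(t)                          # |P| + |T|
    return 1.0 if denom == 0 else 2.0 * inter / denom  # empty/empty = perfect

# Toy 3x3 masks: label 0 = background, 1 = tumor
truth = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
pred  = [[0, 1, 1], [0, 0, 0], [0, 0, 0]]
print(dice_score(pred, truth, 1))  # 2*2/(2+3) = 0.8
```

In a BraTS-style evaluation, this would be computed once per region label (NC, ED, CET) and once on the union of labels for the whole tumor (WT).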
Affiliation(s)
- P. Raut
- Department of Pediatric Pulmonology, Erasmus Medical Center, Rotterdam, Netherlands
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Rotterdam, Netherlands
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
| | - G. Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| | - M. Schöneck
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
| | - L. Caldeira
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
| |
|
19
|
Liao Y, Bai R, Shatz DY, Weiss JP, Zawaneh M, Tung R, Su W. Initial clinical experience of atrial fibrillation ablation guided by a cryoballoon-compatible, magnetic-based circular catheter. J Cardiovasc Electrophysiol 2024; 35:111-119. [PMID: 37962236 DOI: 10.1111/jce.16124] [Received: 06/19/2023] [Revised: 10/06/2023] [Accepted: 10/26/2023] [Indexed: 11/15/2023]
Abstract
INTRODUCTION The circular catheter compatible with the current cryoballoon system for atrial fibrillation (AF) ablation is sensed exclusively by an impedance-based electro-anatomical mapping (EAM) system, limiting map accuracy. We aimed to investigate the feasibility and safety of a magnetic-based circular mapping catheter for AF ablation with cryoballoon. METHODS Nineteen consecutive patients who underwent pulmonary vein isolation (PVI) with cryoballoon for paroxysmal or persistent AF were included. EAMs of the left atrium (LA) created by the LASSOSTAR™NAV catheter (Lassostar map) before and after PVI were compared with those generated by a high-density mapping catheter (Pentaray map) in several respects, including structural similarity, PV angle, LA posterior wall (LAPW) and low-voltage areas (LVAs), and the amplitude of far-field electrograms (FFEs) recorded by the catheters. RESULTS All patients had successful PVI without major complications. With similar mapping time and density, the LA volumes calculated from the Pentaray map and the Lassostar map were comparable. There were no significant differences in the PV angles of all PVs or in LAPW area (16.8 ± 3.2 vs. 17.1 ± 2.8, p = .516) between the Pentaray map and the Lassostar map. A high structural similarity score was observed between the two maps (0.783 in the RAO/LAO view and 0.791 in the PA view). The Lassostar map detected a smaller, but not statistically significant, extent of LVA (13.9% vs. 18.3%, p = .07). FFE amplitude was larger at the right superior PV on the Lassostar map (0.21 ± 0.16 vs. 0.14 ± 0.11 mV, p = .041) than on the Pentaray map. CONCLUSION In our initial experience, PVI with cryoballoon and the magnetic-based circular LASSOSTAR™NAV catheter was safe and effective, based on the accurate LA geometry it created.
Affiliation(s)
- Yu Liao
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
- Department of Internal Medicine, Division of Cardiology, National Cheng Kung University Hospital, Tainan, Taiwan
| | - Rong Bai
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| | - Dalise Yi Shatz
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| | - J Peter Weiss
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| | - Michael Zawaneh
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| | - Roderick Tung
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| | - Wilber Su
- Division of Cardiology, Banner - University Medical Center Phoenix, University of Arizona, College of Medicine, Phoenix, Arizona, USA
| |
|
20
|
Smith CM, Weathers AL, Lewis SL. An overview of clinical machine learning applications in neurology. J Neurol Sci 2023; 455:122799. [PMID: 37979413 DOI: 10.1016/j.jns.2023.122799] [Received: 05/09/2023] [Revised: 10/26/2023] [Accepted: 11/12/2023] [Indexed: 11/20/2023]
Abstract
Machine learning techniques for clinical applications are evolving, and the potential impact this will have on clinical neurology is important to recognize. By providing a broad overview on this growing paradigm of clinical tools, this article aims to help healthcare professionals in neurology prepare to navigate both the opportunities and challenges brought on through continued advancements in machine learning. This narrative review first elaborates on how machine learning models are organized and implemented. Machine learning tools are then classified by clinical application, with examples of uses within neurology described in more detail. Finally, this article addresses limitations and considerations regarding clinical machine learning applications in neurology.
Affiliation(s)
- Colin M Smith
- Lehigh Valley Fleming Neuroscience Institute, 1250 S Cedar Crest Blvd., Allentown, PA 18103, USA
| | - Allison L Weathers
- Cleveland Clinic Information Technology Division, 9500 Euclid Ave. Cleveland, OH 44195, USA
| | - Steven L Lewis
- Lehigh Valley Fleming Neuroscience Institute, 1250 S Cedar Crest Blvd., Allentown, PA 18103, USA.
| |
|
21
|
Cao K, Xia Y, Yao J, Han X, Lambert L, Zhang T, Tang W, Jin G, Jiang H, Fang X, Nogues I, Li X, Guo W, Wang Y, Fang W, Qiu M, Hou Y, Kovarnik T, Vocka M, Lu Y, Chen Y, Chen X, Liu Z, Zhou J, Xie C, Zhang R, Lu H, Hager GD, Yuille AL, Lu L, Shao C, Shi Y, Zhang Q, Liang T, Zhang L, Lu J. Large-scale pancreatic cancer detection via non-contrast CT and deep learning. Nat Med 2023; 29:3033-3043. [PMID: 37985692 PMCID: PMC10719100 DOI: 10.1038/s41591-023-02640-w] [Received: 02/09/2023] [Accepted: 10/12/2023] [Indexed: 11/22/2023]
Abstract
Pancreatic ductal adenocarcinoma (PDAC), the most deadly solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986-0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms the mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation consisting of 20,530 consecutive patients. Notably, PANDA utilized with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
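The sensitivity and specificity figures reported for PANDA are the two recall rates of a binary confusion matrix. A minimal sketch of how they are derived from predicted and true labels (toy labels invented for illustration; this is not the study's evaluation code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on negatives)
    for binary labels, where 1 = disease present and 0 = disease absent."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort: 3 positives, 5 negatives
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 3), round(spec, 3))  # 0.667 0.8
```

The AUC quoted in the abstract generalizes this by sweeping the decision threshold over the model's continuous scores rather than fixing a single operating point.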
Affiliation(s)
- Kai Cao
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Yingda Xia
- DAMO Academy, Alibaba Group, New York, NY, USA
| | - Jiawen Yao
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Xu Han
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China
| | - Lukas Lambert
- Department of Radiology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Tingting Zhang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
| | - Gang Jin
- Department of Surgery, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Hui Jiang
- Department of Pathology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Xu Fang
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Isabella Nogues
- Department of Biostatistics, Harvard University T.H. Chan School of Public Health, Cambridge, MA, USA
| | - Xuezhou Li
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Wenchao Guo
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Yu Wang
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Wei Fang
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Mingyan Qiu
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Tomas Kovarnik
- Department of Invasive Cardiology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Michal Vocka
- Department of Oncology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Yimei Lu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
| | - Yingli Chen
- Department of Surgery, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Xin Chen
- Department of Radiology, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Jian Zhou
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Chuanmiao Xie
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Rong Zhang
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Hong Lu
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
| | - Gregory D Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Le Lu
- DAMO Academy, Alibaba Group, New York, NY, USA
| | - Chengwei Shao
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China.
| | - Yu Shi
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China.
| | - Qi Zhang
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China.
| | - Tingbo Liang
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China.
| | - Ling Zhang
- DAMO Academy, Alibaba Group, New York, NY, USA.
| | - Jianping Lu
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China.
| |
|
22
|
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586 PMCID: PMC10613849 DOI: 10.3348/kjr.2023.0393] [Received: 04/27/2023] [Revised: 07/01/2023] [Accepted: 07/30/2023] [Indexed: 09/21/2023]
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
| | - Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| | - Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| |
|
23
|
Lyu J, Fu Y, Yang M, Xiong Y, Duan Q, Duan C, Wang X, Xing X, Zhang D, Lin J, Luo C, Ma X, Bian X, Hu J, Li C, Huang J, Zhang W, Zhang Y, Su S, Lou X. Generative Adversarial Network-based Noncontrast CT Angiography for Aorta and Carotid Arteries. Radiology 2023; 309:e230681. [PMID: 37962500 DOI: 10.1148/radiol.230681] [Indexed: 11/15/2023]
Abstract
Background Iodinated contrast agents (ICAs), which are widely used in CT angiography (CTA), may cause adverse effects in humans, and their use is time-consuming and costly. Purpose To develop an ICA-free deep learning imaging model for synthesizing CTA-like images and to assess quantitative and qualitative image quality as well as the diagnostic accuracy of synthetic CTA (Syn-CTA) images. Materials and Methods A generative adversarial network (GAN)-based CTA imaging model was trained, validated, and tested on retrospectively collected pairs of noncontrast CT and CTA images of the neck and abdomen from January 2017 to June 2022, and further validated on an external data set. Syn-CTA image quality was evaluated using quantitative metrics. In addition, two senior radiologists scored the visual quality on a three-point scale (3 = good) and determined the vascular diagnosis. The validity of Syn-CTA images was evaluated by comparing the visual quality scores and diagnostic accuracy of aortic and carotid artery disease between Syn-CTA and real CTA scans. Results CT scans from 1749 patients (median age, 60 years [IQR, 50-68 years]; 1057 male patients) were included in the internal data set: 1137 for training, 400 for validation, and 212 for testing. The external validation set comprised CT scans from 42 patients (median age, 67 years [IQR, 59-74 years]; 37 male patients). Syn-CTA images had high similarity to real CTA images (normalized mean absolute error, 0.011 and 0.013 for internal and external test set, respectively; peak signal-to-noise ratio, 32.07 dB and 31.58 dB; structural similarity, 0.919 and 0.906). The visual quality of Syn-CTA and real CTA images was comparable (internal test set, P = .35; external validation set, P > .99). Syn-CTA showed reasonable to good diagnostic accuracy for vascular diseases (internal test set: accuracy = 94%, macro F1 score = 91%; external validation set: accuracy = 86%, macro F1 score = 83%). 
Conclusion A GAN-based model that synthesizes neck and abdominal CTA-like images without the use of ICAs shows promise in vascular diagnosis compared with real CTA images. Clinical trial registration no. NCT05471869 © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Zhang and Turkbey in this issue.
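Two of the image-similarity metrics quoted above, normalized mean absolute error and PSNR, have simple closed forms; structural similarity (SSIM) is more involved and is omitted here. A minimal illustrative sketch over toy intensity lists (invented data, not the study's code):

```python
import math

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB between two flattened intensity lists:
    10 * log10(data_range^2 / MSE)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 10.0 * math.log10(data_range ** 2 / mse)

def normalized_mae(ref, test, data_range):
    """Mean absolute error normalized by the intensity range."""
    return sum(abs(r - t) for r, t in zip(ref, test)) / (len(ref) * data_range)

# Toy example: a "synthetic" image that is the reference shifted by +2 everywhere
ref = [float(v) for v in range(256)]
test = [v + 2.0 for v in ref]                    # MSE = 4, MAE = 2
print(round(psnr(ref, test, 255), 1))            # 42.1 dB
print(round(normalized_mae(ref, test, 255), 4))  # 0.0078
```

On this scale, the Syn-CTA results (PSNR ≈ 32 dB, normalized MAE ≈ 0.011-0.013) correspond to noticeably larger, but still small, average intensity deviations from the real CTA images.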
Affiliation(s)
- Jinhao Lyu
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Ying Fu
| | - Mingliang Yang
| | - Yongqin Xiong
| | - Qi Duan
| | - Caohui Duan
| | - Xueyang Wang
| | - Xinbo Xing
| | - Dong Zhang
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Jiaji Lin
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Chuncai Luo
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Xiaoxiao Ma
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Xiangbing Bian
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Jianxing Hu
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Chenxi Li
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Jiayu Huang
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Wei Zhang
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Yue Zhang
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Sulian Su
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| | - Xin Lou
- From the Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing 100853, China (J. Lyu, Y.X., Q.D., C.D., X.W., X.X., D.Z., J. Lin, C. Luo, X.M., X.B., J. Hu, C. Li, J. Huang, X.L.); College of Medical Technology, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China (Y.F., M.Y., X.L.); Department of Radiology, Brain Hospital of Hunan Province, Changsha, China (W.Z.); Department of Radiology, Xiangyang No. 1 People's Hospital, Hubei University of Medicine, Xiangyang, China (Y.Z.); and Department of Radiology, Xiamen Humanity Hospital, Xiamen, China (S.S.)
| |
Collapse
|
24
|
Mallio CA, Radbruch A, Deike-Hofmann K, van der Molen AJ, Dekkers IA, Zaharchuk G, Parizel PM, Beomonte Zobel B, Quattrocchi CC. Artificial Intelligence to Reduce or Eliminate the Need for Gadolinium-Based Contrast Agents in Brain and Cardiac MRI: A Literature Review. Invest Radiol 2023; 58:746-753. [PMID: 37126454 DOI: 10.1097/rli.0000000000000983] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
ABSTRACT Brain and cardiac MRIs are fundamental noninvasive imaging tools, which can provide important clinical information and can be performed with or without gadolinium-based contrast agents (GBCAs), depending on the clinical indication. It is currently debated whether it is feasible to extract the same information as standard gadolinium-enhanced MRI while injecting less GBCA or none at all. Artificial intelligence (AI) is a great source of innovation in medical imaging and has been explored as a method to synthesize virtual contrast MR images, potentially yielding similar diagnostic performance without the need to administer GBCAs. If possible, this would bring significant benefits, including reductions in cost, acquisition time, and environmental impact relative to conventional contrast-enhanced MRI examinations. Given its promise, we believe additional research is needed to strengthen the evidence and make these AI solutions feasible, reliable, and robust enough to be integrated into the clinical framework. Here, we review recent AI studies aimed at reducing or replacing gadolinium in brain and cardiac imaging while maintaining diagnostic image quality.
Collapse
Affiliation(s)
| | - Alexander Radbruch
- Clinic for Diagnostic and Interventional Neuroradiology, University Clinic Bonn, and German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
| | - Katerina Deike-Hofmann
- Clinic for Diagnostic and Interventional Neuroradiology, University Clinic Bonn, and German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
| | - Aart J van der Molen
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Ilona A Dekkers
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
| | - Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA
| | | | | | | |
Collapse
|
25
|
Li Y, Dong B, Yuan P. The diagnostic value of machine learning for the classification of malignant bone tumor: a systematic evaluation and meta-analysis. Front Oncol 2023; 13:1207175. [PMID: 37746301 PMCID: PMC10513372 DOI: 10.3389/fonc.2023.1207175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 08/23/2023] [Indexed: 09/26/2023] Open
Abstract
Background Malignant bone tumors are a type of cancer with varying malignancy and prognosis. Accurate diagnosis and classification are crucial for treatment and prognosis assessment. Machine learning has been introduced for early differential diagnosis of malignant bone tumors, but its performance is controversial. This systematic review and meta-analysis aims to explore the diagnostic value of machine learning for malignant bone tumors. Methods PubMed, Embase, Cochrane Library, and Web of Science were searched for literature on machine learning in the differential diagnosis of malignant bone tumors up to October 31, 2022. The risk of bias assessment was conducted using QUADAS-2. A bivariate mixed-effects model was used for meta-analysis, with subgroup analyses by machine learning methods and modeling approaches. Results The analysis included 31 publications with 382,371 patients, including 141,315 with malignant bone tumors. Meta-analysis results showed machine learning sensitivity and specificity of 0.87 [95% CI: 0.81,0.91] and 0.91 [95% CI: 0.86,0.94] in the training set, and 0.83 [95% CI: 0.74,0.89] and 0.87 [95% CI: 0.79,0.92] in the validation set. Subgroup analysis revealed MRI-based radiomics was the most common approach, with sensitivity and specificity of 0.85 [95% CI: 0.74,0.91] and 0.87 [95% CI: 0.81,0.91] in the training set, and 0.79 [95% CI: 0.70,0.86] and 0.79 [95% CI: 0.70,0.86] in the validation set. Convolutional neural networks were the most common model type, with sensitivity and specificity of 0.86 [95% CI: 0.72,0.94] and 0.92 [95% CI: 0.82,0.97] in the training set, and 0.87 [95% CI: 0.51,0.98] and 0.87 [95% CI: 0.69,0.96] in the validation set. Conclusion Machine learning is mainly applied in radiomics for diagnosing malignant bone tumors, showing desirable diagnostic performance.
Machine learning can be an early adjunctive diagnostic method but requires further research and validation to determine its practical efficiency and clinical application prospects. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023387057.
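The sensitivity and specificity values pooled in the meta-analysis above derive, per study, from a 2x2 confusion matrix. A minimal sketch of those per-study calculations, with made-up counts for illustration (the actual pooling uses a bivariate mixed-effects model, which is not reproduced here):

```python
# Hedged sketch: per-study sensitivity/specificity from a 2x2 confusion
# matrix, the raw values that feed a diagnostic meta-analysis. The counts
# below are invented for illustration, not taken from any included study.

def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased cases
    specificity = tn / (tn + fp)  # true-negative rate among healthy cases
    return sensitivity, specificity

sens, spec = sens_spec(tp=87, fn=13, tn=91, fp=9)
print(round(sens, 2), round(spec, 2))  # 0.87 0.91
```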
Collapse
Affiliation(s)
| | - Bo Dong
- Department of Orthopedics, Xi’an Honghui Hospital, Xi’an Jiaotong University, Xi’an Shaanxi, China
| | | |
Collapse
|
26
|
Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2577-2591. [PMID: 37030684 PMCID: PMC10543020 DOI: 10.1109/tmi.2023.3261707] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural networks (CNN) based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable as it allows us to understand the importance of each input contrast in different regions by analyzing the in-built attention maps of Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms the state-of-the-art methods quantitatively and qualitatively.
Collapse
|
27
|
Haase R, Pinetz T, Kobler E, Paech D, Effland A, Radbruch A, Deike-Hofmann K. Artificial Contrast: Deep Learning for Reducing Gadolinium-Based Contrast Agents in Neuroradiology. Invest Radiol 2023; 58:539-547. [PMID: 36822654 DOI: 10.1097/rli.0000000000000963] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
ABSTRACT Deep learning approaches are playing an ever-increasing role throughout diagnostic medicine, especially in neuroradiology, to solve a wide range of problems such as segmentation, synthesis of missing sequences, and image quality improvement. Of particular interest is their application in the reduction of gadolinium-based contrast agents, the administration of which has been under cautious reevaluation in recent years because of concerns about gadolinium deposition and its unclear long-term consequences. A growing number of studies are investigating the reduction (low-dose approach) or even complete substitution (zero-dose approach) of gadolinium-based contrast agents in diverse patient populations using a variety of deep learning methods. This work aims to highlight selected research and discusses the advantages and limitations of recent deep learning approaches, the challenges of assessing its output, and the progress toward clinical applicability distinguishing between the low-dose and zero-dose approach.
Collapse
Affiliation(s)
| | - Thomas Pinetz
- Institute of Applied Mathematics, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| | - Erich Kobler
- From the Department of Neuroradiology, University Medical Center Bonn, Rheinische Friedrich-Wilhelms-Universität Bonn
| | | | - Alexander Effland
- Institute of Applied Mathematics, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
| | | | | |
Collapse
|
28
|
Schlaeger S, Li HB, Baum T, Zimmer C, Moosbauer J, Byas S, Mühlau M, Wiestler B, Finck T. Longitudinal Assessment of Multiple Sclerosis Lesion Load With Synthetic Magnetic Resonance Imaging-A Multicenter Validation Study. Invest Radiol 2023; 58:320-326. [PMID: 36730638 DOI: 10.1097/rli.0000000000000938] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
INTRODUCTION Double inversion recovery (DIR) has been validated as a sensitive magnetic resonance imaging (MRI) contrast in multiple sclerosis (MS). Deep learning techniques can use basic input data to generate synthetic DIR (synthDIR) images that are on par with their acquired counterparts. As assessment of longitudinal MRI data is paramount in MS diagnostics, our study's purpose is to evaluate the utility of synthDIR longitudinal subtraction imaging for detection of disease progression in a multicenter data set of MS patients. METHODS We implemented a previously established generative adversarial network to synthesize DIR from input T1-weighted and fluid-attenuated inversion recovery (FLAIR) sequences for 214 MRI data sets from 74 patients and 5 different centers. One hundred and forty longitudinal subtraction maps of consecutive scans (follow-up scan minus preceding scan) were generated for both acquired FLAIR and synthDIR. Two readers, blinded to the image origin, independently quantified newly formed lesions on the FLAIR and synthDIR subtraction maps, grouped into specific locations as outlined in the McDonald criteria. RESULTS Both readers detected significantly more newly formed MS-specific lesions in the longitudinal subtractions of synthDIR compared with acquired FLAIR (R1: 3.27 ± 0.60 vs 2.50 ± 0.69 [ P = 0.0016]; R2: 3.31 ± 0.81 vs 2.53 ± 0.72 [ P < 0.0001]). Relative gains in detectability were most pronounced in juxtacortical lesions (36% relative gain in lesion counts, pooled for both readers). In 5% of the scans, synthDIR subtraction maps helped to identify a disease progression missed on FLAIR subtraction maps. CONCLUSIONS Generative adversarial networks can generate high-contrast DIR images that may improve the longitudinal follow-up assessment in MS patients compared with standard sequences. By detecting more newly formed MS lesions and increasing the rates of detected disease activity, our methodology promises to improve clinical decision-making.
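The longitudinal subtraction maps in this study are, at their core, a voxelwise difference of co-registered scans. A minimal sketch with toy arrays (real pipelines add registration and intensity normalization first, which are omitted here):

```python
import numpy as np

# Hedged sketch: a longitudinal subtraction map (follow-up minus preceding
# scan) on already co-registered volumes. Positive values flag voxels that
# became brighter, e.g. newly formed lesions. Toy 2D arrays stand in for
# full 3D MRI volumes.

def subtraction_map(followup: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Voxelwise difference of two co-registered scans."""
    assert followup.shape == baseline.shape, "scans must be co-registered"
    return followup.astype(float) - baseline.astype(float)

baseline = np.zeros((4, 4))
followup = baseline.copy()
followup[1, 2] = 5.0                 # one new hyperintense lesion voxel
diff = subtraction_map(followup, baseline)
print(int((diff > 0).sum()))         # 1 voxel of new lesion signal
```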
Collapse
Affiliation(s)
- Sarah Schlaeger
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar
| | | | - Thomas Baum
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar
| | - Claus Zimmer
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar
| | | | | | - Mark Mühlau
- Department of Neurology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| | - Benedikt Wiestler
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar
| | - Tom Finck
- From the Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar
| |
Collapse
|
29
|
Wang YR(J, Qu L, Sheybani ND, Luo X, Wang J, Hawk KE, Theruvath AJ, Gatidis S, Xiao X, Pribnow A, Rubin D, Daldrup-Link HE. AI Transformers for Radiation Dose Reduction in Serial Whole-Body PET Scans. Radiol Artif Intell 2023; 5:e220246. [PMID: 37293349 PMCID: PMC10245181 DOI: 10.1148/ryai.220246] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 03/30/2023] [Accepted: 04/12/2023] [Indexed: 06/10/2023]
Abstract
Purpose To develop a deep learning approach that enables ultra-low-dose, 1% of the standard clinical dosage (3 MBq/kg), ultrafast whole-body PET reconstruction in cancer imaging. Materials and Methods In this Health Insurance Portability and Accountability Act-compliant study, serial fluorine 18-labeled fluorodeoxyglucose PET/MRI scans of pediatric patients with lymphoma were retrospectively collected from two cross-continental medical centers between July 2015 and March 2020. Global similarity between baseline and follow-up scans was used to develop Masked-LMCTrans, a longitudinal multimodality coattentional convolutional neural network (CNN) transformer that provides interaction and joint reasoning between serial PET/MRI scans from the same patient. Image quality of the reconstructed ultra-low-dose PET was evaluated in comparison with a simulated standard 1% PET image. The performance of Masked-LMCTrans was compared with that of CNNs with pure convolution operations (classic U-Net family), and the effect of different CNN encoders on feature representation was assessed. Statistical differences in the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and visual information fidelity (VIF) were assessed by two-sample testing with the Wilcoxon signed rank t test. Results The study included 21 patients (mean age, 15 years ± 7 [SD]; 12 female) in the primary cohort and 10 patients (mean age, 13 years ± 4; six female) in the external test cohort. Masked-LMCTrans-reconstructed follow-up PET images demonstrated significantly less noise and more detailed structure compared with simulated 1% extremely ultra-low-dose PET images. SSIM, PSNR, and VIF were significantly higher for Masked-LMCTrans-reconstructed PET (P < .001), with improvements of 15.8%, 23.4%, and 186%, respectively. 
Conclusion Masked-LMCTrans achieved high-image-quality reconstruction of 1% low-dose whole-body PET images. Keywords: Pediatrics, PET, Convolutional Neural Network (CNN), Dose Reduction. Supplemental material is available for this article. © RSNA, 2023.
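Of the three image-quality metrics reported above, PSNR has the simplest closed form, 10·log10(MAX²/MSE). A minimal sketch from that definition, with toy arrays (SSIM and VIF need more machinery, e.g. scikit-image's `structural_similarity`, and are not reproduced here):

```python
import numpy as np

# Hedged sketch: peak signal-to-noise ratio computed from its definition,
# 10 * log10(data_range^2 / MSE). Toy arrays only; real evaluations use
# the reconstructed and reference PET volumes.

def psnr(reference: np.ndarray, test: np.ndarray,
         data_range: float = 1.0) -> float:
    """PSNR in dB between a reference image and a test image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1              # constant error of 0.1 -> MSE = 0.01
print(round(psnr(ref, noisy), 1))  # 20.0 dB
```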
Collapse
|
30
|
Ruffle JK, Mohinta S, Gray R, Hyare H, Nachev P. Brain tumour segmentation with incomplete imaging data. Brain Commun 2023; 5:fcad118. [PMID: 37124946 PMCID: PMC10144694 DOI: 10.1093/braincomms/fcad118] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 02/22/2023] [Accepted: 04/08/2023] [Indexed: 05/02/2023] Open
Abstract
Progress in neuro-oncology is increasingly recognized to be obstructed by the marked heterogeneity (genetic, pathological, and clinical) of brain tumours. If the treatment susceptibilities and outcomes of individual patients differ widely, determined by the interactions of many multimodal characteristics, then large-scale, fully inclusive, richly phenotyped data, including imaging, will be needed to predict them at the individual level. Such data can realistically be acquired only in the routine clinical stream, where its quality is inevitably degraded by the constraints of real-world clinical care. Although contemporary machine learning could theoretically provide a solution to this task, especially in the domain of imaging, its ability to cope with realistic, incomplete, low-quality data is yet to be determined. In the largest and most comprehensive study of its kind, applying state-of-the-art brain tumour segmentation models to large-scale, multi-site MRI data of 1251 individuals, here we quantify the comparative fidelity of automated segmentation models drawn from MR data replicating the various levels of completeness observed in real life. We demonstrate that models trained on incomplete data can segment lesions very well, often equivalently to those trained on the full complement of images, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete set) for whole tumours and 0.701 (single sequence) to 0.891 (complete set) for component tissue types. This finding opens the door both to the application of segmentation models to large-scale historical data, for the purpose of building treatment and outcome predictive models, and their application to real-world clinical care. We further ascertain that segmentation models can accurately detect enhancing tumour in the absence of contrast-enhancing imaging, quantifying the burden of enhancing tumour with an R 2 > 0.97, varying negligibly with lesion morphology.
Such models can quantify enhancing tumour without the administration of intravenous contrast, inviting a revision of the notion of tumour enhancement if the same information can be extracted without contrast-enhanced imaging. Our analysis includes validation on a heterogeneous, real-world 50 patient sample of brain tumour imaging acquired over the last 15 years at our tertiary centre, demonstrating maintained accuracy even on non-isotropic MRI acquisitions, or even on complex post-operative imaging with tumour recurrence. This work substantially extends the translational opportunity for quantitative analysis to clinical situations where the full complement of sequences is not available and potentially enables the characterization of contrast-enhanced regions where contrast administration is infeasible or undesirable.
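The Dice coefficients reported above measure overlap between predicted and reference segmentation masks, 2|A∩B| / (|A| + |B|). A minimal sketch from that definition, with toy binary masks:

```python
import numpy as np

# Hedged sketch: the Dice similarity coefficient for binary segmentation
# masks, 2 * |A intersect B| / (|A| + |B|). Toy 2D masks stand in for
# 3D tumour segmentations.

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks, in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
print(round(dice(pred, truth), 3))  # 0.667
```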
Collapse
Affiliation(s)
- James K Ruffle
- UCL Queen Square Institute of Neurology, University College London, London, UK
| | - Samia Mohinta
- UCL Queen Square Institute of Neurology, University College London, London, UK
| | - Robert Gray
- UCL Queen Square Institute of Neurology, University College London, London, UK
| | - Harpreet Hyare
- UCL Queen Square Institute of Neurology, University College London, London, UK
| | - Parashkev Nachev
- UCL Queen Square Institute of Neurology, University College London, London, UK
| |
Collapse
|
31
|
Wang YRJ, Wang P, Adams LC, Sheybani ND, Qu L, Sarrami AH, Theruvath AJ, Gatidis S, Ho T, Zhou Q, Pribnow A, Thakor AS, Rubin D, Daldrup-Link HE. Low-count whole-body PET/MRI restoration: an evaluation of dose reduction spectrum and five state-of-the-art artificial intelligence models. Eur J Nucl Med Mol Imaging 2023; 50:1337-1350. [PMID: 36633614 PMCID: PMC10387227 DOI: 10.1007/s00259-022-06097-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Accepted: 12/24/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE To provide a holistic and complete comparison of the five most advanced AI models in the augmentation of low-dose 18F-FDG PET data over the entire dose reduction spectrum. METHODS In this multicenter study, five AI models were investigated for restoring low-count whole-body PET/MRI, covering convolutional benchmarks - U-Net, enhanced deep super-resolution network (EDSR), generative adversarial network (GAN) - and the most cutting-edge image reconstruction transformer models in computer vision to date - Swin transformer image restoration network (SwinIR) and EDSR-ViT (vision transformer). The models were evaluated against six groups of count levels representing the simulated 75%, 50%, 25%, 12.5%, 6.25%, and 1% (extremely ultra-low-count) of the clinical standard 3 MBq/kg 18F-FDG dose. The comparisons were performed upon two independent cohorts - (1) a primary cohort from Stanford University and (2) a cross-continental external validation cohort from Tübingen University - in order to ensure the findings are generalizable. A total of 476 original count and simulated low-count whole-body PET/MRI scans were incorporated into this analysis. RESULTS For low-count PET restoration on the primary cohort, the mean structural similarity index (SSIM) scores for dose 6.25% were 0.898 (95% CI, 0.887-0.910) for EDSR, 0.893 (0.881-0.905) for EDSR-ViT, 0.873 (0.859-0.887) for GAN, 0.885 (0.873-0.898) for U-Net, and 0.910 (0.900-0.920) for SwinIR. The performances of SwinIR and U-Net were then evaluated separately at each simulated radiotracer dose level. Using the primary Stanford cohort, the mean diagnostic image quality (DIQ; 5-point Likert scale) scores of SwinIR restoration were 5 (SD, 0) for dose 75%, 4.50 (0.535) for dose 50%, 3.75 (0.463) for dose 25%, 3.25 (0.463) for dose 12.5%, 4 (0.926) for dose 6.25%, and 2.5 (0.534) for dose 1%.
CONCLUSION Compared to low-count PET images, with near-to or nondiagnostic images at higher dose reduction levels (up to 6.25%), both SwinIR and U-Net significantly improve the diagnostic quality of PET images. A radiotracer dose reduction to 1% of the current clinical standard radiotracer dose is out of scope for current AI techniques.
Affiliation(s)
- Yan-Ran Joyce Wang: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
- Pengcheng Wang: Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
- Lisa Christine Adams: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Natasha Diba Sheybani: Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
- Liangqiong Qu: Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA
- Amir Hossein Sarrami: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Ashok Joseph Theruvath: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Sergios Gatidis: Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
- Tina Ho: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Quan Zhou: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Allison Pribnow: Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Avnesh S Thakor: Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Daniel Rubin: Department of Biomedical Data Science, Stanford University, Stanford, CA 94304, USA; Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Heike E Daldrup-Link: Department of Radiology, School of Medicine, Stanford University, 725 Welch Road, Stanford, CA 94304, USA; Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
32
Wamelink IJHG, Hempel HL, van de Giessen E, Vries MHM, De Witt Hamer P, Barkhof F, Keil VC. The patients' experience of neuroimaging of primary brain tumors: a cross-sectional survey study. J Neurooncol 2023; 162:307-315. [PMID: 36977844] [PMCID: PMC10167184] [DOI: 10.1007/s11060-023-04290-x]
Abstract
PURPOSE To gain insight into how patients with primary brain tumors experience MRI, follow-up protocols, and gadolinium-based contrast agent (GBCA) use. METHODS Patients with primary brain tumors answered a survey after their MRI exam. Questions were analyzed to determine trends in patients' experience of the scan itself, follow-up frequency, and the use of GBCAs. Subgroup analyses were performed by sex, lesion grade, age, and number of scans, using the Pearson chi-square test for categorical questions and the Mann-Whitney U-test for ordinal questions. RESULTS Of the 100 patients, 93 had a histopathologically confirmed diagnosis, and seven were considered to have a slow-growing low-grade tumor after multidisciplinary assessment and follow-up. 61/100 patients were male (mean age ± standard deviation, 44 ± 14 years for males and 46 ± 13 years for females). Fifty-nine patients had low-grade tumors. Patients consistently underestimated the number of their previous scans. 92% of patients did not experience the MRI as bothersome, and 78% would not change the number of follow-up MRIs. 63% of patients would prefer GBCA-free MRI scans if they were diagnostically equally accurate. Women found the MRI and receiving intravenous cannulas significantly more uncomfortable than men (p = 0.003). Age, diagnosis, and the number of previous scans had no relevant impact on patient experience. CONCLUSION Patients with primary brain tumors experienced current neuro-oncological MRI practice as positive. Women in particular, however, would prefer GBCA-free imaging if diagnostically equally accurate. Patient knowledge of GBCAs was limited, indicating room for improvement in patient information.
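As a worked illustration of the categorical subgroup comparison this study describes, a minimal Python sketch of the Pearson chi-square statistic on a 2x2 contingency table; the counts below are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical 2x2 contingency table (illustrative counts only):
# rows = sex (men, women), columns = found MRI uncomfortable (no, yes).
table = np.array([[50.0, 11.0],
                  [22.0, 17.0]])

def pearson_chi2(obs):
    """Pearson chi-square statistic for an r x c contingency table:
    sum over cells of (observed - expected)^2 / expected, where the
    expected count is (row total * column total) / grand total."""
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row @ col / obs.sum()
    return float(((obs - expected) ** 2 / expected).sum())

chi2 = pearson_chi2(table)
```

The statistic would then be compared against a chi-square distribution with (rows - 1) x (cols - 1) degrees of freedom to obtain a p-value, which is what `scipy.stats.chi2_contingency` does in one call.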
Affiliation(s)
- Ivar J H G Wamelink: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center Amsterdam, Amsterdam, The Netherlands
- Hugo L Hempel: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Elsmarieke van de Giessen: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands; Amsterdam Neuroscience, Brain Imaging, De Boelelaan 1117, Amsterdam, The Netherlands
- Mark H M Vries: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands
- Philip De Witt Hamer: Cancer Center Amsterdam, Brain Tumor Center Amsterdam, Amsterdam, The Netherlands; Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, Amsterdam, The Netherlands
- Frederik Barkhof: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands; Queen Square Institute of Neurology and Centre for Medical Image Computing, University College London, London, UK
- Vera C Keil: Radiology & Nuclear Medicine Department, Amsterdam UMC Location Vrije Universiteit Amsterdam, De Boelelaan 1117, 1081 HV, Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center Amsterdam, Amsterdam, The Netherlands; Amsterdam Neuroscience, Brain Imaging, De Boelelaan 1117, Amsterdam, The Netherlands
33
Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91:110-123. [PMID: 36907387] [DOI: 10.1016/j.semcancer.2023.03.006]
Abstract
Glioma is the dominant primary intracranial malignancy of the central nervous system. Artificial intelligence, which mainly comprises machine learning and deep learning computational approaches, presents a unique opportunity to enhance the clinical management of glioma by improving tumor segmentation, diagnosis, differentiation, grading, treatment, prediction of clinical outcomes (prognosis and recurrence), molecular features, clinical classification, characterization of the tumor microenvironment, and drug discovery. A growing body of recent studies applies artificial intelligence-based models to disparate sources of glioma data, covering imaging modalities, digital pathology, and high-throughput multi-omics data (especially emerging single-cell RNA sequencing and spatial transcriptomics). While these early findings are promising, future studies are required to standardize artificial intelligence-based models to improve the generalizability and interpretability of the results. Despite outstanding issues, targeted clinical application of artificial intelligence approaches will facilitate the development of precision medicine in glioma. If these challenges can be overcome, artificial intelligence has the potential to profoundly change the way patients with, or at risk of, glioma are cared for.
Affiliation(s)
- Jiefeng Luo: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Mika Pan: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Ke Mo: Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Yingwei Mao: Department of Biology, Pennsylvania State University, University Park, PA 16802, USA
- Donghua Zou: Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China; Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
34
Moya-Sáez E, de Luis-García R, Alberola-López C. Toward deep learning replacement of gadolinium in neuro-oncology: A review of contrast-enhanced synthetic MRI. Frontiers in Neuroimaging 2023; 2:1055463. [PMID: 37554645] [PMCID: PMC10406200] [DOI: 10.3389/fnimg.2023.1055463]
Abstract
Gadolinium-based contrast agents (GBCAs) have become a crucial part of MRI acquisitions in neuro-oncology for the detection, characterization, and monitoring of brain tumors. However, contrast-enhanced (CE) acquisitions not only raise safety concerns but also cause patient discomfort, require more skilled staff, and increase costs. Recently, several deep learning methods have been proposed to reduce, or even eliminate, the need for GBCAs. This study reviews the published work on synthesizing CE images from low-dose and/or native (non-CE) counterparts. The data, type of neural network, and number of input modalities for each method are summarized, as are the evaluation methods. Based on this analysis, we discuss the main issues these methods must overcome to become suitable for clinical use. We also hypothesize some future trends that research on this topic may follow.
Affiliation(s)
- Elisa Moya-Sáez: Laboratorio de Procesado de Imagen, ETSI Telecomunicación, Universidad de Valladolid, Valladolid, Spain
35
Carboxymethyl chitosan-assisted MnOx nanoparticles: Synthesis, characterization, detection and cartilage repair in early osteoarthritis. Carbohydr Polym 2022; 294:119821. [DOI: 10.1016/j.carbpol.2022.119821]
36
Xue Y, Dewey BE, Zuo L, Han S, Carass A, Duan P, Remedios SW, Pham DL, Saidha S, Calabresi PA, Prince JL. Bi-directional Synthesis of Pre- and Post-contrast MRI via Guided Feature Disentanglement. Simulation and Synthesis in Medical Imaging (SASHIMI workshop, held in conjunction with MICCAI) 2022; 13570:55-65. [PMID: 36326241] [PMCID: PMC9623769] [DOI: 10.1007/978-3-031-16980-9_6]
Abstract
Magnetic resonance imaging (MRI) with gadolinium contrast is widely used for tissue enhancement and better identification of active lesions and tumors. Recent studies have shown that gadolinium can deposit and accumulate in tissues including the brain, which raises safety concerns. Prior works have tried to synthesize post-contrast T1-weighted MRIs from pre-contrast MRIs to avoid the use of gadolinium. However, contrast and image representations are often entangled during the synthesis process, resulting in synthetic post-contrast MRIs with undesirable contrast enhancements. Moreover, the synthesis of pre-contrast MRIs from post-contrast MRIs, which can be useful for volumetric analysis, is rarely investigated in the literature. To tackle pre- and post-contrast MRI synthesis, we propose a BI-directional Contrast Enhancement Prediction and Synthesis (BICEPS) network that enables disentanglement of contrast and image representations via a bi-directional image-to-image translation (I2I) model. Our proposed model can perform both pre-to-post and post-to-pre contrast synthesis, and provides an interpretable synthesis process by predicting contrast enhancement maps from the learned contrast embedding. Extensive experiments on a multiple sclerosis dataset demonstrate the feasibility of applying our bidirectional synthesis and show that BICEPS outperforms current methods.
Affiliation(s)
- Yuan Xue: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Blake E Dewey: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Lianrui Zuo: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892, USA
- Shuo Han: Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Aaron Carass: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peiyu Duan: Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Samuel W Remedios: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Dzung L Pham: Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation, Bethesda, MD 20817, USA
- Shiv Saidha: Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Peter A Calabresi: Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Jerry L Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
37
di Noia C, Grist JT, Riemer F, Lyasheva M, Fabozzi M, Castelli M, Lodi R, Tonon C, Rundo L, Zaccagna F. Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics (Basel) 2022; 12:2125. [PMID: 36140526] [PMCID: PMC9497964] [DOI: 10.3390/diagnostics12092125]
Abstract
Given growing clinical needs, in recent years Artificial Intelligence (AI) techniques have increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources, and the collection of (mainly public) databases, have promoted this rapid development. This narrative review aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms, restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. We focused on two distinct tasks: first, the classification of subjects into survival classes (short- and long-term, or short-, mid-, and long-term) to stratify patients into distinct groups; second, the quantification, in days or months, of the individual survival interval. Our survey showed excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the most challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index up to ∼0.91. In conclusion, the available computational methods perform differently depending on the specific task, and the choice of the best one is not univocal and depends on many aspects. Unequivocally, the use of features derived from quantitative imaging has been shown to be advantageous for AI applications, including survival prediction. This evidence from the literature motivates further research on AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
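The C-index this review reports (up to ∼0.91) can be made concrete with a small sketch of Harrell's concordance index, under simplifying assumptions (illustrative data, no handling of tied event times):

```python
import numpy as np

def c_index(times, preds, events):
    """Concordance index: fraction of comparable patient pairs where the
    model assigns higher risk to the patient who dies earlier. A pair is
    comparable when one patient has an observed event strictly before the
    other's time; tied risk predictions count as 0.5. events: 1 = death
    observed, 0 = censored."""
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if preds[i] > preds[j]:      # higher risk died earlier: concordant
                    num += 1.0
                elif preds[i] == preds[j]:
                    num += 0.5
    return num / den

# Illustrative cohort of four patients (not from any study in the review).
times  = np.array([5, 10, 12, 20])       # survival time in months
events = np.array([1, 1, 0, 1])          # 0 = censored observation
risk   = np.array([0.9, 0.6, 0.7, 0.1])  # model-predicted risk scores
ci = c_index(times, risk, events)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; production code would typically use a vetted implementation such as `lifelines.utils.concordance_index`.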
Affiliation(s)
- Christian di Noia: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- James T. Grist: Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK; Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK; Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK; Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Frank Riemer: Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology, Haukeland University Hospital, N-5021 Bergen, Norway
- Maria Lyasheva: Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Miriana Fabozzi: Centro Medico Polispecialistico (CMO), 80058 Torre Annunziata, Italy
- Mauro Castelli: NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Raffaele Lodi: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Caterina Tonon: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Leonardo Rundo: Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Fulvio Zaccagna: Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy; Correspondence: ; Tel.: +39-0514969951
38
Liu C, Zhu N, Sun H, Zhang J, Feng X, Gjerswold-Selleck S, Sikka D, Zhu X, Liu X, Nuriel T, Wei HJ, Wu CC, Vaughan JT, Laine AF, Provenzano FA, Small SA, Guo J. Deep learning of MRI contrast enhancement for mapping cerebral blood volume from single-modal non-contrast scans of aging and Alzheimer's disease brains. Front Aging Neurosci 2022; 14:923673. [PMID: 36034139] [PMCID: PMC9407020] [DOI: 10.3389/fnagi.2022.923673]
Abstract
While MRI contrast agents such as those based on gadolinium are needed for high-resolution mapping of brain metabolism, they require intravenous administration, and there are rising concerns over their safety and invasiveness. Furthermore, non-contrast MRI scans are performed more commonly than contrast-enhanced ones and are readily available for analysis in public databases such as the Alzheimer's Disease Neuroimaging Initiative (ADNI). In this article, we hypothesize that a deep learning model, trained using quantitative steady-state contrast-enhanced structural MRI datasets in mice and humans, can generate contrast-equivalent information from a single non-contrast MRI scan. The model was first trained, optimized, and validated in mice, and was then transferred and adapted to humans. We observe that the model can substitute for gadolinium-based contrast agents in approximating cerebral blood volume, a quantitative representation of brain activity, at sub-millimeter granularity. Furthermore, we validate the use of our deep-learned prediction maps to identify functional abnormalities in the aging brain using locally obtained MRI scans, and in the brains of patients with Alzheimer's disease using publicly available MRI scans from ADNI. Since it is derived from a commonly acquired MRI protocol, this framework has the potential for broad clinical utility and can also be applied retrospectively to research scans across a host of neurological/functional diseases.
Affiliation(s)
- Chen Liu: Department of Electrical Engineering, Columbia University, New York, NY, United States
- Nanyan Zhu: Department of Biological Sciences, Columbia University, New York, NY, United States
- Haoran Sun: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Junhao Zhang: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Xinyang Feng: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Dipika Sikka: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Xuemin Zhu: Department of Pathology and Cell Biology, Columbia University, New York, NY, United States
- Xueqing Liu: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Tal Nuriel: Department of Radiation Oncology, Columbia University, New York, NY, United States
- Hong-Jian Wei: Department of Radiation Oncology, Columbia University, New York, NY, United States
- Cheng-Chia Wu: Department of Radiation Oncology, Columbia University, New York, NY, United States
- J. Thomas Vaughan: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Andrew F. Laine: Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Scott A. Small: Department of Neurology, Columbia University, New York, NY, United States; Department of Psychiatry, Columbia University, New York, NY, United States; Taub Institute for Research on Alzheimer's Disease and the Aging Brain, Columbia University, New York, NY, United States
- Jia Guo (correspondence): Department of Psychiatry, Columbia University, New York, NY, United States; The Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
39
Machine learning in neuro-oncology: toward novel development fields. J Neurooncol 2022; 159:333-346. [PMID: 35761160] [DOI: 10.1007/s11060-022-04068-7]
Abstract
PURPOSE Artificial Intelligence (AI) encompasses several different techniques able to process large amounts of data toward a specific planned outcome. There are several possible applications of this technology in neuro-oncology. METHODS We reviewed, according to PRISMA guidelines, available studies adopting AI in different fields of neuro-oncology, including neuroradiology, pathology, surgery, radiation therapy, and systemic treatments. RESULTS Neuroradiology accounted for the largest number of studies assessing AI. However, this technology is also being successfully tested in other operative settings, including surgery and radiation therapy. In these contexts, AI has been shown to significantly reduce resources and costs while maintaining a high qualitative standard. Pathological diagnosis and the development of novel systemic treatments are two other fields in which AI has shown promising preliminary data. CONCLUSION It is likely that AI will soon be included in some aspects of daily clinical practice. The possible applications of these techniques are impressive and cover all aspects of neuro-oncology.
40
Liu X, Yoo C, Xing F, Kuo CCJ, El Fakhri G, Kang JW, Woo J. Unsupervised Black-Box Model Domain Adaptation for Brain Tumor Segmentation. Front Neurosci 2022; 16:837646. [PMID: 35720708] [PMCID: PMC9201342] [DOI: 10.3389/fnins.2022.837646]
Abstract
Unsupervised domain adaptation (UDA) is an emerging technique that enables the transfer of domain knowledge learned from a labeled source domain to unlabeled target domains, providing a way of coping with the difficulty of labeling in new domains. The majority of prior work has relied on both source and target domain data for adaptation. However, because of privacy concerns about potential leaks in sensitive information contained in patient data, it is often challenging to share the data and labels in the source domain and trained model parameters in cross-center collaborations. To address this issue, we propose a practical framework for UDA with a black-box segmentation model trained in the source domain only, without relying on source data or a white-box source model in which the network parameters are accessible. In particular, we propose a knowledge distillation scheme to gradually learn target-specific representations. Additionally, we regularize the confidence of the labels in the target domain via unsupervised entropy minimization, leading to performance gain over UDA without entropy minimization. We extensively validated our framework on a few datasets and deep learning backbones, demonstrating the potential for our framework to be applied in challenging yet realistic clinical settings.
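The entropy-minimization regularizer described above can be sketched in a few framework-agnostic lines of numpy (in the paper it is applied to the segmentation network's per-voxel class probabilities; the logits below are illustrative):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the class axis (axis=1)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(logits):
    """Mean Shannon entropy of the predicted class distributions.
    Minimizing this term pushes target-domain predictions toward
    confident (low-entropy) labels, as in the paper's regularizer."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# A confident prediction has much lower entropy than a uniform one.
confident = entropy_loss(np.array([[10.0, 0.0, 0.0]]))
uniform = entropy_loss(np.zeros((1, 3)))   # ≈ log(3), maximal uncertainty
```

In training, this scalar would be added (with a weighting coefficient) to the knowledge-distillation objective and minimized by gradient descent through the softmax.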
Affiliation(s)
- Xiaofeng Liu: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Chaehwa Yoo: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States; Department of Electronic and Electrical Engineering and Graduate Program in Smart Factory, Ewha Womans University, Seoul, South Korea
- Fangxu Xing: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- C.-C. Jay Kuo: Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, United States
- Georges El Fakhri: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Je-Won Kang: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States; Department of Electronic and Electrical Engineering and Graduate Program in Smart Factory, Ewha Womans University, Seoul, South Korea
- Jonghye Woo: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
41
Park JE. Artificial Intelligence in Neuro-Oncologic Imaging: A Brief Review for Clinical Use Cases and Future Perspectives. Brain Tumor Res Treat 2022; 10:69-75. [PMID: 35545825] [PMCID: PMC9098975] [DOI: 10.14791/btrt.2021.0031]
Abstract
Artificial intelligence (AI) techniques, both end-to-end deep learning approaches and radiomics with machine learning, have been developed for various imaging-based tasks in neuro-oncology. In this brief review, use cases of AI in neuro-oncologic imaging are summarized: image quality improvement, metastasis detection, radiogenomics, and treatment response monitoring. We then give a brief overview of generative adversarial networks and the potential utility of synthetic images as new data input for various deep learning algorithms in imaging-based and image-translation tasks. Lastly, we highlight the importance of cohorts and clinical trials as true validation of the clinical utility of AI in neuro-oncologic imaging.
Affiliation(s)
- Ji Eun Park: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
42
A new magnetic resonance imaging tumour response grading scheme for locally advanced rectal cancer. Br J Cancer 2022; 127:268-277. [PMID: 35388140] [PMCID: PMC9296509] [DOI: 10.1038/s41416-022-01801-x]
Abstract
BACKGROUND The potential of a magnetic resonance imaging tumour-regression grading (MRI-TRG) system to predict pathological TRG is debatable for locally advanced rectal cancer treated with neoadjuvant radiochemotherapy. METHODS Referring to the American Joint Committee on Cancer/College of American Pathologists (AJCC/CAP) TRG classification scheme, a new four-category MRI-TRG system was established, based on volumetric analysis of the residual tumour and radiochemotherapy-induced anorectal fibrosis. Agreement between the two classifications was evaluated by Kendall's tau-b test, while Kaplan-Meier analysis was used to calculate survival outcomes. RESULTS In total, 1033 patients were included. Good agreement between the MRI-TRG and AJCC/CAP TRG classifications was observed (k = 0.671). In particular, compared with the other pairs, MRI-TRG 0 displayed the highest sensitivity [90.1% (95% CI: 84.3-93.9)] and specificity [92.8% (95% CI: 90.4-94.7)] in identifying AJCC/CAP TRG 0 category patients. Except for the comparable survival ratios between the MRI-TRG 0 and MRI-TRG 1 categories, any two of the four categories had distinct 3-year prognoses (all P < 0.05). Cox regression analysis further showed that the MRI-TRG system was an independent prognostic factor (all P < 0.05). CONCLUSION The new MRI-TRG system might serve as a surrogate for the AJCC/CAP TRG classification scheme. Importantly, the system is a reliable and non-invasive way to identify patients with complete pathological responses.
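The sensitivity and specificity figures quoted for MRI-TRG 0 follow the standard confusion-matrix definitions; a minimal numpy sketch with made-up labels (not the study's data):

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity and specificity of a binary rating (e.g. MRI-TRG 0)
    against a reference standard (e.g. AJCC/CAP TRG 0).
    pred/truth: boolean arrays, True = category assigned/present."""
    tp = int(np.sum(pred & truth))    # correctly identified positives
    tn = int(np.sum(~pred & ~truth))  # correctly identified negatives
    fn = int(np.sum(~pred & truth))   # missed positives
    fp = int(np.sum(pred & ~truth))   # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative labels for eight hypothetical patients.
truth = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=bool)
sens, spec = sens_spec(pred, truth)
```

The study's confidence intervals would additionally require an interval method for binomial proportions (e.g. Wilson score), which is omitted here.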
43
Osman AFI, Tamam NM. Deep learning-based convolutional neural network for intramodality brain MRI synthesis. J Appl Clin Med Phys 2022; 23:e13530. [PMID: 35044073] [PMCID: PMC8992958] [DOI: 10.1002/acm2.13530]
Abstract
PURPOSE The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts for the brain. METHODS The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping from a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated on paired 2D MR images using a mean-squared error (MSE) cost function and the Adam optimizer with a 0.001 learning rate, for 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS On the test dataset, the synthetic MR images generated by our model were nearly indistinguishable from the real images for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited loss of detail. Across the six translations, the average PSNR, MSE, MAE, and SSIM values ranged over 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best results reported by other deep learning models on BRATS datasets. CONCLUSIONS Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts. By making multicontrast MRIs more readily available, it holds promise for improved clinical decision-making and better diagnosis of brain cancer patients. This approach may be clinically relevant, representing a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.
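The image-quality metrics this abstract reports (MSE, MAE, PSNR) follow standard definitions and can be sketched in a few lines; the snippet below is an illustrative reimplementation for intensity-normalized images, not the authors' evaluation code, and the example arrays are hypothetical.

```python
import numpy as np

def mse(ref, syn):
    """Mean-squared error between a reference and a synthetic image."""
    return float(np.mean((ref - syn) ** 2))

def mae(ref, syn):
    """Mean absolute error between a reference and a synthetic image."""
    return float(np.mean(np.abs(ref - syn)))

def psnr(ref, syn, data_range=1.0):
    """Peak signal-to-noise ratio in dB, for intensities scaled to [0, data_range]."""
    err = mse(ref, syn)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / err))

# Toy example: a "synthetic" slice deviating slightly from its reference.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
syn = np.clip(ref + rng.normal(0.0, 0.01, ref.shape), 0.0, 1.0)
print(f"MSE={mse(ref, syn):.5f}  MAE={mae(ref, syn):.5f}  PSNR={psnr(ref, syn):.2f} dB")
```

SSIM, the fourth metric in the abstract, involves local luminance/contrast/structure statistics and is usually taken from an existing implementation rather than written by hand.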
Affiliation(s)
- Alexander F I Osman
- Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan
- Nissren M Tamam
- Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh, 11671, Saudi Arabia
|
44
|
Beyond the AJR: Deep Learning Enabled Virtual Post-Contrast Imaging for Neuro-oncology. AJR Am J Roentgenol 2022; 219:351. [DOI: 10.2214/ajr.22.27364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
45
|
Park JE, Vollmuth P, Kim N, Kim HS. Research Highlight: Use of Generative Images Created with Artificial Intelligence for Brain Tumor Imaging. Korean J Radiol 2022; 23:500-504. [PMID: 35434978 PMCID: PMC9081688 DOI: 10.3348/kjr.2022.0033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 02/13/2022] [Accepted: 02/15/2022] [Indexed: 11/29/2022] Open
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Philipp Vollmuth
- Department of Neuroradiology, University of Heidelberg, Heidelberg, Germany
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
|
46
|
Pflüger I, Wald T, Isensee F, Schell M, Meredig H, Schlamp K, Bernhardt D, Brugnara G, Heußel CP, Debus J, Wick W, Bendszus M, Maier-Hein KH, Vollmuth P. Automated detection and quantification of brain metastases on clinical MRI data using artificial neural networks. Neurooncol Adv 2022; 4:vdac138. [PMID: 36105388 PMCID: PMC9466273 DOI: 10.1093/noajnl/vdac138] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Background
Reliable detection and precise volumetric quantification of brain metastases (BM) on MRI are essential for guiding treatment decisions. Here we evaluate the potential of artificial neural networks (ANN) for automated detection and quantification of BM.
Methods
A consecutive series of 308 patients with BM was used for developing an ANN (with a 4:1 split for training/testing) for automated volumetric assessment of contrast-enhancing tumors (CE) and non-enhancing FLAIR signal abnormality including edema (NEE). An independent consecutive series of 30 patients was used for external testing. Performance was assessed case-wise for CE and NEE and lesion-wise for CE using the case-wise/lesion-wise DICE-coefficient (C/L-DICE), positive predictive value (L-PPV) and sensitivity (C/L-Sensitivity).
Results
The performance of detecting CE lesions on the validation dataset was not significantly affected when evaluating different volumetric thresholds (0.001–0.2 cm³; P = .2028). The median L-DICE and median C-DICE for CE lesions were 0.78 (IQR = 0.6–0.91) and 0.90 (IQR = 0.85–0.94) in the institutional as well as 0.79 (IQR = 0.67–0.82) and 0.84 (IQR = 0.76–0.89) in the external test dataset. The corresponding median L-Sensitivity and median L-PPV were 0.81 (IQR = 0.63–0.92) and 0.79 (IQR = 0.63–0.93) in the institutional test dataset, as compared to 0.85 (IQR = 0.76–0.94) and 0.76 (IQR = 0.68–0.88) in the external test dataset. The median C-DICE for NEE was 0.96 (IQR = 0.92–0.97) in the institutional test dataset as compared to 0.85 (IQR = 0.72–0.91) in the external test dataset.
Conclusion
The developed ANN-based algorithm (publicly available at www.github.com/NeuroAI-HD/HD-BM) allows reliable detection and precise volumetric quantification of CE and NEE compartments in patients with BM.
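The DICE coefficient, sensitivity, and positive predictive value used above are standard overlap metrics on binary segmentation masks. The sketch below shows their usual definitions with numpy; it is an illustration, not the authors' HD-BM code, and `pred`/`truth` are hypothetical boolean lesion masks.

```python
import numpy as np

def dice(pred, truth):
    """DICE coefficient: 2|P∩T| / (|P| + |T|). Returns 1.0 when both masks are empty."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else float(2.0 * inter / denom)

def sensitivity(pred, truth):
    """Fraction of reference-positive voxels recovered by the prediction."""
    tp = np.logical_and(pred, truth).sum()
    pos = truth.sum()
    return float(tp / pos) if pos else 1.0

def ppv(pred, truth):
    """Positive predictive value: fraction of predicted voxels that are correct."""
    tp = np.logical_and(pred, truth).sum()
    pred_pos = pred.sum()
    return float(tp / pred_pos) if pred_pos else 1.0

# Toy 1D "masks": prediction recovers 3 of 4 true voxels and adds 1 false positive.
truth = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
pred = np.array([1, 1, 1, 0, 1, 0], dtype=bool)
print(dice(pred, truth), sensitivity(pred, truth), ppv(pred, truth))  # → 0.75 0.75 0.75
```

Case-wise metrics apply these formulas to whole-scan masks, whereas lesion-wise metrics apply them per connected component after matching predicted to reference lesions.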
Affiliation(s)
- Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Tassilo Wald
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marianne Schell
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Hagen Meredig
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Kai Schlamp
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Denise Bernhardt
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Claus Peter Heußel
- Department of Diagnostic and Interventional Radiology with Nuclear Medicine, Clinic for Thoracic Diseases (Thoraxklinik), Heidelberg University Hospital, Heidelberg, Germany
- Member of the German Center for Lung Research (DZL), Translational Lung Research Center (TLRC), Heidelberg, Germany
- Juergen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg University Hospital, Heidelberg, Germany
- German Cancer Consortium (DKTK), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Wolfgang Wick
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
- Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Klaus H Maier-Hein
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
|