1. Meng M, Gu B, Fulham M, Song S, Feng D, Bi L, Kim J. Adaptive segmentation-to-survival learning for survival prediction from multi-modality medical images. NPJ Precis Oncol 2024; 8:232. [PMID: 39402129; PMCID: PMC11473954; DOI: 10.1038/s41698-024-00690-y]
Abstract
Early survival prediction is vital for the clinical management of cancer patients, as tumors can be better controlled with personalized treatment planning. Traditional survival prediction methods are based on radiomics feature engineering and/or clinical indicators (e.g., cancer staging). Recently, with advances in deep learning techniques, survival prediction models have achieved state-of-the-art performance in end-to-end survival prediction by exploiting deep features derived from medical images. However, existing models are heavily reliant on the prognostic information within primary tumors and cannot effectively leverage out-of-tumor prognostic information characterizing local tumor metastasis and adjacent tissue invasion. Also, existing models are sub-optimal in leveraging multi-modality medical images, as they rely on empirically designed fusion strategies to integrate multi-modality information; these fusion strategies are pre-defined based on domain-specific human prior knowledge and are inherently limited in adaptability. Here, we present an Adaptive Multi-modality Segmentation-to-Survival model (AdaMSS) for survival prediction from multi-modality medical images. AdaMSS can self-adapt its fusion strategy based on training data and can also adapt its focus regions to capture prognostic information outside the primary tumors. Extensive experiments with two large cancer datasets (1380 patients from nine medical centers) show that AdaMSS surpasses state-of-the-art survival prediction performance (C-index: 0.804 and 0.757), demonstrating its potential to facilitate personalized treatment planning.
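For readers unfamiliar with the concordance index (C-index) reported above, the sketch below shows one common way to compute Harrell's C-index for right-censored survival data; it is a generic illustration, not the authors' implementation, and the toy inputs are invented.

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Fraction of comparable patient pairs whose predicted risks are ordered
    correctly: a higher risk score should correspond to a shorter survival time."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i had an observed event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Toy example: three patients, two observed events, perfectly ranked risks.
print(concordance_index(np.array([5.0, 8.0, 12.0]),
                        np.array([1, 1, 0]),
                        np.array([0.9, 0.4, 0.1])))  # -> 1.0
```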
Affiliation(s)
- Mingyuan Meng
  - School of Computer Science, The University of Sydney, Sydney, Australia
  - Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
- Bingxin Gu
  - Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
  - Center for Biomedical Imaging, Fudan University, Shanghai, China
  - Shanghai Engineering Research Center of Molecular Imaging Probes, Shanghai, China
  - Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, Shanghai, China
- Michael Fulham
  - School of Computer Science, The University of Sydney, Sydney, Australia
  - Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
- Shaoli Song
  - Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
  - Center for Biomedical Imaging, Fudan University, Shanghai, China
  - Shanghai Engineering Research Center of Molecular Imaging Probes, Shanghai, China
  - Key Laboratory of Nuclear Physics and Ion-Beam Application (MOE), Fudan University, Shanghai, China
- Dagan Feng
  - School of Computer Science, The University of Sydney, Sydney, Australia
- Lei Bi
  - School of Computer Science, The University of Sydney, Sydney, Australia
  - Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jinman Kim
  - School of Computer Science, The University of Sydney, Sydney, Australia
2. Oliver J, Alapati R, Lee J, Bur A. Artificial Intelligence in Head and Neck Surgery. Otolaryngol Clin North Am 2024; 57:803-820. [PMID: 38910064; PMCID: PMC11374486; DOI: 10.1016/j.otc.2024.05.001]
Abstract
This article explores artificial intelligence's (AI's) role in otolaryngology for head and neck cancer diagnosis and management. It highlights AI's potential in pattern recognition for early cancer detection, prognostication, and treatment planning, primarily through image analysis using clinical, endoscopic, and histopathologic images. Radiomics is also discussed at length, as well as the many ways that radiologic image analysis can be utilized, including for diagnosis, lymph node metastasis prediction, and evaluation of treatment response. The study highlights AI's promise and limitations, underlining the need for clinician-data scientist collaboration to enhance head and neck cancer care.
Affiliation(s)
- Jamie Oliver
  - Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Rahul Alapati
  - Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Jason Lee
  - Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
- Andrés Bur
  - Department of Otolaryngology-Head and Neck Surgery, University of Kansas School of Medicine, 3901 Rainbow Boulevard M.S. 3010, Kansas City, KS, USA
3. Huang J, Li X, Tan H, Cheng X. Generative Adversarial Network for Trimodal Medical Image Fusion Using Primitive Relationship Reasoning. IEEE J Biomed Health Inform 2024; 28:5729-5741. [PMID: 39093669; DOI: 10.1109/jbhi.2024.3426664]
Abstract
Medical image fusion has become an active area of biomedical image processing in recent years. The technology combines complementary information from medical images of different modalities into a single informative fused image to provide reasonable and effective medical assistance. Currently, research has mainly focused on dual-modal medical image fusion, and little attention has been paid to trimodal medical image fusion, which has greater application requirements and clinical significance. To address this, the study proposes an end-to-end generative adversarial network for trimodal medical image fusion. Utilizing a multi-scale squeeze and excitation reasoning attention network, the proposed method generates an energy map for each source image, facilitating efficient trimodal medical image fusion under the guidance of an energy ratio fusion strategy. To obtain global semantic information, we introduce squeeze and excitation reasoning attention blocks and enhance global features through primitive relationship reasoning. Through extensive fusion experiments, we demonstrate that our method yields superior visual results and objective evaluation metric scores compared to state-of-the-art fusion methods. Furthermore, the proposed method also achieved the best accuracy in the glioma segmentation experiment.
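The abstract's "energy ratio fusion strategy" is not specified in detail here, so the following is only a hedged reading of the idea: per-pixel energy maps normalized into fusion weights for three co-registered modalities. The function and variable names are hypothetical.

```python
import numpy as np

def energy_ratio_fusion(images, eps=1e-8):
    """Fuse co-registered source images weighted by the ratio of their pixel-wise energy."""
    energies = [img.astype(np.float64) ** 2 for img in images]  # simple per-pixel energy maps
    total = np.sum(energies, axis=0) + eps
    weights = [e / total for e in energies]                     # energy-ratio fusion weights
    return np.sum([w * img for w, img in zip(weights, images)], axis=0)

# Three synthetic, already-registered modality images of the same size.
mri_t1 = np.random.rand(256, 256)
mri_t2 = np.random.rand(256, 256)
pet = np.random.rand(256, 256)
fused = energy_ratio_fusion([mri_t1, mri_t2, pet])
print(fused.shape)
```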
4. Xu Y, Wang J, Li C, Su Y, Peng H, Guo L, Lin S, Li J, Wu D. Advancing precise diagnosis of nasopharyngeal carcinoma through endoscopy-based radiomics analysis. iScience 2024; 27:110590. [PMID: 39252978; PMCID: PMC11381885; DOI: 10.1016/j.isci.2024.110590]
Abstract
Nasopharyngeal carcinoma (NPC) has high metastatic potential and is difficult to detect early. This study aims to develop a deep learning model for NPC diagnosis using optical imagery. From April 2008 to May 2021, we analyzed 12,087 nasopharyngeal endoscopic images and 309 videos from 1,108 patients. The pretrained model was fine-tuned with stochastic gradient descent on the final layers. Data augmentation was applied during training. Videos were converted to images for malignancy scoring. Performance metrics such as AUC, accuracy, and sensitivity were calculated based on the malignancy score. The deep learning model demonstrated high performance in identifying NPC, with AUC values of 0.981 (95% confidence interval [CI] 0.965-0.996) for the Fujian Cancer Hospital dataset and 0.937 (0.905-0.970) for the Jiangxi Cancer Hospital dataset. The proposed model effectively diagnoses NPC with high accuracy, sensitivity, and specificity across multiple datasets. It shows promise for early NPC detection, especially in identifying latent lesions.
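The abstract describes transfer learning in which only the final layers of a pretrained network are fine-tuned with stochastic gradient descent under data augmentation. The sketch below illustrates that recipe in PyTorch; the backbone choice, hyperparameters, and two-class head are assumptions, not details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Augmentation transforms would normally be applied inside the training Dataset;
# defined here to show the kind of augmentation the study mentions.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])

model = models.resnet18(weights=None)          # in practice, load pretrained weights here
for p in model.parameters():                   # freeze the backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable head (assumed: benign vs. NPC)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                # stand-in for a batch of endoscopic images
y = torch.tensor([0, 1, 1, 0])
loss = criterion(model(x), y)                  # only the final layer receives gradients
loss.backward()
optimizer.step()
print(float(loss))
```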
Affiliation(s)
- Yun Xu
  - Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
  - Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian, China
- Jiesong Wang
  - Department of Lymphoma & Head and Neck Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
- Chenxin Li
  - Department of Electrical Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Yong Su
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, Jiangxi, China
  - National Health Commission (NHC) Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma (Jiangxi Cancer Hospital of Nanchang University), Nanchang, China
- Hewei Peng
  - Department of Epidemiology and Health Statistics, Fujian Provincial Key Laboratory of Environment Factors and Cancer, School of Public Health, Fujian Medical University, Fuzhou, China
- Lanyan Guo
  - School of Medical Imaging, Fujian Medical University, Fuzhou, China
- Shaojun Lin
  - Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
  - Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian, China
- Jingao Li
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, Jiangxi, China
  - National Health Commission (NHC) Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma (Jiangxi Cancer Hospital of Nanchang University), Nanchang, China
- Dan Wu
  - Tianjin Key Laboratory of Human Development and Reproductive Regulation, Tianjin Central Hospital of Gynecology Obstetrics and Nankai University Affiliated Hospital of Obstetrics and Gynecology, Tianjin, China
  - Tianjin Cancer Institute, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin Medical University, Tianjin, China
5. Li Y, Chen Q, Li H, Wang S, Chen N, Han T, Wang K, Yu Q, Cao Z, Tang J. MFNet: Meta-learning based on frequency-space mix for MRI segmentation in nasopharyngeal carcinoma. J Cell Mol Med 2024; 28:e18355. [PMID: 38685683; PMCID: PMC11058331; DOI: 10.1111/jcmm.18355]
Abstract
Deep learning techniques have been applied to medical image segmentation and demonstrated expert-level performance. Because these models generalize poorly when deployed in different centres, common solutions, such as transfer learning and domain adaptation techniques, have been proposed to mitigate this issue. However, these solutions necessitate retraining the models with target domain data and annotations, which limits their deployment in clinical settings in unseen domains. We evaluated the performance of domain generalization methods on the task of MRI segmentation of nasopharyngeal carcinoma (NPC) by collecting a new dataset of 321 patients with manually annotated MRIs from two hospitals. We transformed the MRI modalities, including T1WI, T2WI, and CE-T1WI, from the spatial domain to the frequency domain using the Fourier transform. To address the bottleneck of domain generalization in MRI segmentation of NPC, we propose a meta-learning approach based on frequency domain feature mixing. We evaluated the performance of MFNet against existing techniques for generalizing NPC segmentation in terms of Dice and MIoU. Our method clearly outperforms the baseline in handling the generalization of NPC segmentation, and MFNet demonstrates its effectiveness in generalizing NPC MRI segmentation to unseen domains (Dice = 67.59%, MIoU = 75.74% on T1WI). MFNet enhances the model's generalization capabilities by incorporating mixed-feature meta-learning. Our approach offers a novel perspective to tackle the domain generalization problem in the field of medical imaging by effectively exploiting the unique characteristics of medical images.
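The exact frequency-space mixing used by MFNet is not given here; as a rough, hypothetical illustration, one widely used way to mix frequency-domain information across domains is to blend the low-frequency amplitude spectra of two images while keeping each image's phase:

```python
import numpy as np

def mix_amplitude(src, ref, alpha=0.5, beta=0.1):
    """Blend the central (low-frequency) amplitude of `ref` into `src`, keeping src's phase."""
    f_src, f_ref = np.fft.fft2(src), np.fft.fft2(ref)
    amp_src, phase = np.abs(f_src), np.angle(f_src)
    amp_ref = np.abs(f_ref)
    amp_src, amp_ref = np.fft.fftshift(amp_src), np.fft.fftshift(amp_ref)
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)          # size of the low-frequency window
    cy, cx = h // 2, w // 2
    low = (slice(cy - bh, cy + bh), slice(cx - bw, cx + bw))
    amp_src[low] = (1 - alpha) * amp_src[low] + alpha * amp_ref[low]
    amp_src = np.fft.ifftshift(amp_src)
    mixed = np.fft.ifft2(amp_src * np.exp(1j * phase))
    return np.real(mixed)

# Two synthetic T1WI slices standing in for images from different hospitals.
t1_hospital_a = np.random.rand(128, 128)
t1_hospital_b = np.random.rand(128, 128)
print(mix_amplitude(t1_hospital_a, t1_hospital_b).shape)
```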
Affiliation(s)
- Yin Li
  - Department of Otorhinolaryngology, The First People's Hospital of Foshan, Foshan, China
- Qi Chen
  - Department of Radiology, The Second Affiliated Hospital of Anhui Medical University, Hefei, China
- Hao Li
  - Department of Infectious Diseases, The First People's Hospital of Changde City, Xiangya School of Medicine, Central South University, Changde, China
- Song Wang
  - University of Electronic Science and Technology of China, Chengdu, China
- Nutan Chen
  - Machine Learning Research Lab, Volkswagen Group, Munich, Germany
- Ting Han
  - Department of Radiology, The First People's Hospital of Foshan, Foshan, China
- Kai Wang
  - Department of Otorhinolaryngology, The First People's Hospital of Foshan, Foshan, China
- Qingqing Yu
  - Department of Otorhinolaryngology, The First People's Hospital of Foshan, Foshan, China
- Zhantao Cao
  - Department of Research, CETC Cyberspace Security Technology Co., Ltd., Chengdu, China
- Jun Tang
  - Department of Otorhinolaryngology, The First People's Hospital of Foshan, Foshan, China
6. Vo VTT, Shin TH, Yang HJ, Kang SR, Kim SH. A comparison between centralized and asynchronous federated learning approaches for survival outcome prediction using clinical and PET data from non-small cell lung cancer patients. Comput Methods Programs Biomed 2024; 248:108104. [PMID: 38457959; DOI: 10.1016/j.cmpb.2024.108104]
Abstract
BACKGROUND AND OBJECTIVE: Survival analysis plays an essential role in the medical field for optimal treatment decision-making. Recently, survival analysis based on the deep learning (DL) approach has been proposed and is demonstrating promising results. However, developing an ideal prediction model requires integrating large datasets across multiple institutions, which poses challenges concerning medical data privacy. METHODS: In this paper, we propose FedSurv, an asynchronous federated learning (FL) framework designed to predict survival time using clinical information and positron emission tomography (PET)-based features. This study used two datasets: a public radiogenomic dataset of non-small cell lung cancer (NSCLC) from the Cancer Imaging Archive (RNSCLC), and an in-house dataset from the Chonnam National University Hwasun Hospital (CNUHH) in South Korea, consisting of clinical risk factors and F-18 fluorodeoxyglucose (FDG) PET images in NSCLC patients. Initially, each dataset was divided into multiple clients according to histological attributes, and each client was trained using the proposed DL model to predict individual survival time. The FL framework collected weights and parameters from the clients, which were then incorporated into the global model. Finally, the global model aggregated all weights and parameters and redistributed the updated model weights to each client. We evaluated different frameworks, including single-client-based approaches, centralized learning, and FL. RESULTS: We evaluated our method on two independent datasets. First, on the RNSCLC dataset, the mean absolute error (MAE) was 490.80±22.95 d and the C-Index was 0.69±0.01. Second, on the CNUHH dataset, the MAE was 494.25±40.16 d and the C-Index was 0.71±0.01. The FL approach achieved performance comparable to the centralized method in PET-based survival time prediction and outperformed single-client-based approaches. CONCLUSIONS: Our results demonstrated the feasibility and effectiveness of employing FL for individual survival prediction in NSCLC patients, using clinical information and PET-based features.
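The server-side aggregation step described above follows the general federated-averaging pattern; the minimal sketch below (not the FedSurv code, and with hypothetical parameter names) shows how client weights can be combined in proportion to client sample counts.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client parameter dicts into one global parameter dict,
    weighting each client by its number of training samples."""
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Three toy clients sharing one parameter tensor, with different sample counts.
clients = [{"fc.weight": np.ones((2, 4)) * v} for v in (1.0, 2.0, 3.0)]
sizes = [50, 30, 20]
print(federated_average(clients, sizes)["fc.weight"][0, 0])  # 1*0.5 + 2*0.3 + 3*0.2 = 1.7
```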
Affiliation(s)
- Vi Thi-Tuong Vo
  - Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, South Korea
- Tae-Ho Shin
  - Interdisciplinary Program of Information Security, Chonnam National University, Gwangju, 61186, South Korea
- Hyung-Jeong Yang
  - Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, South Korea
- Sae-Ryung Kang
  - Department of Nuclear Medicine, Chonnam National University Hwasun Hospital and Medical School, Hwasun, 58128, South Korea
- Soo-Hyung Kim
  - Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, South Korea
7. Wang CK, Wang TW, Lu CF, Wu YT, Hua MW. Deciphering the Prognostic Efficacy of MRI Radiomics in Nasopharyngeal Carcinoma: A Comprehensive Meta-Analysis. Diagnostics (Basel) 2024; 14:924. [PMID: 38732337; PMCID: PMC11082984; DOI: 10.3390/diagnostics14090924]
Abstract
This meta-analysis investigates the prognostic value of MRI-based radiomics in nasopharyngeal carcinoma treatment outcomes, specifically focusing on overall survival (OS) variability. The study protocol was registered with INPLASY (INPLASY202420101). Initially, a systematic review identified 15 relevant studies involving 6243 patients through a comprehensive search across PubMed, Embase, and Web of Science, adhering to PRISMA guidelines. The methodological quality was assessed using the Quality in Prognosis Studies (QUIPS) tool and the Radiomics Quality Score (RQS), highlighting a low risk of bias in most domains. Our analysis revealed a significant average concordance index (c-index) of 72% across studies, indicating the potential of radiomics in clinical prognostication. However, moderate heterogeneity was observed, particularly in OS predictions. Subgroup analyses and meta-regression identified validation methods and radiomics software as significant heterogeneity moderators. Notably, the number of features in the prognosis model correlated positively with its performance. These findings suggest radiomics' promising role in enhancing cancer treatment strategies, though the observed heterogeneity and potential biases call for cautious interpretation and standardization in future research.
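As a hedged illustration of how study-level c-indices such as those summarized above can be pooled, the sketch below implements a DerSimonian-Laird random-effects model; the per-study numbers are invented and do not come from this meta-analysis.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling; returns pooled effect and 95% CI."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w_fixed = 1.0 / variances
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)          # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_random = 1.0 / (variances + tau2)
    pooled = np.sum(w_random * effects) / np.sum(w_random)
    se = np.sqrt(1.0 / np.sum(w_random))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study c-indices and their variances.
c_indices = [0.70, 0.74, 0.68, 0.75]
variances = [0.0004, 0.0006, 0.0005, 0.0007]
print(random_effects_pool(c_indices, variances))
```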
Affiliation(s)
- Chih-Keng Wang
  - School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
  - Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
- Ting-Wei Wang
  - School of Medicine, College of Medicine, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
  - Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Chia-Fung Lu
  - Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei 112304, Taiwan
- Yu-Te Wu
  - Institute of Biophotonics, National Yang-Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Man-Wei Hua
  - Department of Otolaryngology-Head and Neck Surgery, Taichung Veterans General Hospital, Taichung 407219, Taiwan
8. Ren CX, Xu GX, Dai DQ, Lin L, Sun Y, Liu QS. Cross-site prognosis prediction for nasopharyngeal carcinoma from incomplete multi-modal data. Med Image Anal 2024; 93:103103. [PMID: 38368752; DOI: 10.1016/j.media.2024.103103]
Abstract
Accurate prognosis prediction for nasopharyngeal carcinoma based on magnetic resonance (MR) images assists in the guidance of treatment intensity, thus reducing the risk of recurrence and death. To reduce repeated labor and sufficiently explore domain knowledge, aggregating labeled/annotated data from external sites enables us to train an intelligent model for a clinical site with unlabeled data. However, this task suffers from the challenges of incomplete multi-modal examination data fusion and image data heterogeneity among sites. This paper proposes a cross-site survival analysis method for prognosis prediction of nasopharyngeal carcinoma from a domain adaptation viewpoint. Utilizing a Cox model as the basic framework, our method equips it with a cross-attention-based multi-modal fusion regularization. This regularization model effectively fuses the multi-modal information from multi-parametric MR images and clinical features onto a domain-adaptive space, despite the absence of some modalities. To enhance feature discrimination, we also extend the contrastive learning technique to censored data cases. Compared with conventional approaches that directly deploy a trained survival model at a new site, our method achieves superior prognosis prediction performance in cross-site validation experiments. These results highlight the key role of the cross-site adaptability of our method and support its value in clinical practice.
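The Cox framework mentioned above is usually trained with the negative partial likelihood; the PyTorch sketch below shows that loss for a batch of predicted log-hazards (a generic formulation, not the paper's full model with cross-attention fusion).

```python
import torch

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial likelihood.
    risk: predicted log-hazard per patient; time/event: survival time and event flag."""
    order = torch.argsort(time, descending=True)       # sort so risk sets are cumulative
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)        # log of the risk-set denominator
    return -torch.sum((risk - log_cumsum) * event) / event.sum().clamp(min=1)

# Toy batch of four patients (three observed events, one censored).
risk = torch.tensor([0.8, 0.1, -0.3, 0.5])
time = torch.tensor([3.0, 7.0, 9.0, 4.0])
event = torch.tensor([1.0, 0.0, 1.0, 1.0])
print(float(cox_partial_likelihood_loss(risk, time, event)))
```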
Affiliation(s)
- Chuan-Xian Ren
  - School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Geng-Xin Xu
  - School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Dao-Qing Dai
  - School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Li Lin
  - Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Ying Sun
  - Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Qing-Shan Liu
  - School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
9. Castro GA, Almeida JM, Machado-Neto JA, Almeida TA. A decision support system to recommend appropriate therapy protocol for AML patients. Front Artif Intell 2024; 7:1343447. [PMID: 38510471; PMCID: PMC10950921; DOI: 10.3389/frai.2024.1343447]
Abstract
Introduction: Acute Myeloid Leukemia (AML) is one of the most aggressive hematological neoplasms, emphasizing the critical need for early detection and strategic treatment planning. The association between prompt intervention and enhanced patient survival rates underscores the pivotal role of therapy decisions. To determine the treatment protocol, specialists heavily rely on prognostic predictions that consider the response to treatment and clinical outcomes. The existing risk classification system categorizes patients into favorable, intermediate, and adverse groups, forming the basis for personalized therapeutic choices. However, accurately assessing the intermediate-risk group poses significant challenges, potentially resulting in treatment delays and deterioration of patient conditions. Methods: This study introduces a decision support system leveraging cutting-edge machine learning techniques to address these issues. The system automatically recommends tailored oncology therapy protocols based on outcome predictions. Results: The proposed approach achieved a high performance close to 0.9 in F1-Score and AUC. The model generated with gene expression data exhibited superior performance. Discussion: Our system can effectively support specialists in making well-informed decisions regarding the most suitable and safe therapy for individual patients. The proposed decision support system has the potential to not only streamline treatment initiation but also contribute to prolonged survival and improved quality of life for individuals diagnosed with AML. This marks a significant stride toward optimizing therapeutic interventions and patient outcomes.
Affiliation(s)
- Giovanna A. Castro
  - Department of Computer Science, Federal University of São Carlos (UFSCar), Sorocaba, São Paulo, Brazil
- Jade M. Almeida
  - Department of Computer Science, Federal University of São Carlos (UFSCar), Sorocaba, São Paulo, Brazil
- João A. Machado-Neto
  - Institute of Biomedical Sciences, The University of São Paulo (USP), São Paulo, Brazil
- Tiago A. Almeida
  - Department of Computer Science, Federal University of São Carlos (UFSCar), Sorocaba, São Paulo, Brazil
10. Peng L, Chen B, Yu E, Lin Y, Lin J, Zheng D, Fu Y, Chen Z, Zheng H, Zhan Z, Chen Y. The application value of LAVA-flex sequences in enhanced MRI scans of nasopharyngeal carcinoma: comparison with T1WI-IDEAL. Front Oncol 2024; 14:1320280. [PMID: 38420018; PMCID: PMC10899686; DOI: 10.3389/fonc.2024.1320280]
Abstract
Introduction: Magnetic resonance imaging (MRI) staging scans are critical for the diagnosis and treatment of patients with nasopharyngeal cancer (NPC). We aimed to evaluate the application value of LAVA-Flex and T1WI-IDEAL sequences in MRI staging scans. Methods: Eighty-four newly diagnosed NPC patients underwent both LAVA-Flex and T1WI-IDEAL sequences during MRI examinations. Two radiologists independently scored image quality, fat suppression quality, artifacts, and the display of vessels and nerves. The obtained scores were compared using the Wilcoxon signed rank test. According to the signal intensity (SI) measurements, the uniformity of fat suppression, the contrast between tumor lesions and subcutaneous fat tissue, and the signal-to-noise ratio (SNR) were compared by the paired t-test. Results: Compared to the T1WI-IDEAL sequence, LAVA-Flex exhibited fewer artifacts (P<0.05), better visualization of nerves and vessels (P<0.05), and was superior in the fat contrast ratio of the primary lesion and metastatic lymph nodes (0.80 vs. 0.52 and 0.81 vs. 0.56, respectively; P<0.001). There was no statistically significant difference in overall image quality, tumor signal-to-noise ratio (SNR), muscle SNR, or the detection rate of lesions between the two sequences (P>0.05). T1WI-IDEAL was superior to LAVA-Flex in the evaluation of fat suppression uniformity (P<0.05). Discussion: The LAVA-Flex sequence provides satisfactory image quality and better visualization of nerves and vessels for NPC with shorter scanning times.
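As a hedged illustration of the kinds of quantitative comparisons reported above (SNR, lesion-to-fat contrast, and a paired Wilcoxon signed-rank test between sequences), the sketch below uses synthetic ROI values and made-up reader scores:

```python
import numpy as np
from scipy.stats import wilcoxon

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over background standard deviation."""
    return np.mean(signal_roi) / np.std(noise_roi)

def contrast_ratio(lesion_roi, fat_roi):
    """Simple contrast ratio between lesion and subcutaneous fat signal intensities."""
    si_lesion, si_fat = np.mean(lesion_roi), np.mean(fat_roi)
    return abs(si_lesion - si_fat) / (si_lesion + si_fat)

rng = np.random.default_rng(0)
lesion = rng.normal(300, 20, 50)       # synthetic ROI pixel values
fat = rng.normal(60, 10, 50)
background = rng.normal(5, 2, 50)
print(snr(lesion, background), contrast_ratio(lesion, fat))

# Hypothetical paired 5-point reader scores for the two sequences.
lava_flex = [4, 5, 4, 5, 5, 3, 4, 5]
t1_ideal = [3, 4, 3, 3, 4, 2, 3, 4]
print(wilcoxon(lava_flex, t1_ideal))
```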
Affiliation(s)
- Li Peng
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Bijuan Chen
  - Department of Radiation Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Erhan Yu
  - Department of Neurology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Yifei Lin
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Jiahao Lin
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Dechun Zheng
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Yu Fu
  - School of Basic Medical Sciences of Fujian Medical University, Fuzhou, Fujian, China
- Zhipeng Chen
  - School of Basic Medical Sciences of Fujian Medical University, Fuzhou, Fujian, China
- Hanchen Zheng
  - Department of Medical Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Zhouwei Zhan
  - Department of Medical Oncology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Yunbin Chen
  - Department of Radiology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian, China
11. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893; PMCID: PMC10860468; DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies written on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight the recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
  - Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
12. Gomaa A, Huang Y, Hagag A, Schmitter C, Höfler D, Weissmann T, Breininger K, Schmidt M, Stritzelberger J, Delev D, Coras R, Dörfler A, Schnell O, Frey B, Gaipl US, Semrau S, Bert C, Hau P, Fietkau R, Putz F. Comprehensive multimodal deep learning survival prediction enabled by a transformer architecture: A multicenter study in glioblastoma. Neurooncol Adv 2024; 6:vdae122. [PMID: 39156618; PMCID: PMC11327617; DOI: 10.1093/noajnl/vdae122]
Abstract
Background: This research aims to improve glioblastoma survival prediction by integrating MR images, clinical, and molecular-pathologic data in a transformer-based deep learning model, addressing data heterogeneity and performance generalizability. Methods: We propose and evaluate a transformer-based nonlinear and nonproportional survival prediction model. The model employs self-supervised learning techniques to effectively encode the high-dimensional MRI input for integration with nonimaging data using cross-attention. To demonstrate model generalizability, the model is assessed with the time-dependent concordance index (Cdt) in 2 training setups using 3 independent public test sets: UPenn-GBM, UCSF-PDGM, and Rio Hortega University Hospital (RHUH)-GBM, comprising 378, 366, and 36 cases, respectively. Results: The proposed transformer model achieved promising performance for imaging as well as nonimaging data, effectively integrating both modalities for enhanced performance (UCSF-PDGM test set: imaging Cdt 0.578, multimodal Cdt 0.672) while outperforming state-of-the-art late-fusion 3D-CNN-based models. Consistent performance was observed across the 3 independent multicenter test sets, with Cdt values of 0.707 (UPenn-GBM, internal test set), 0.672 (UCSF-PDGM, first external test set), and 0.618 (RHUH-GBM, second external test set). The model achieved significant discrimination between patients with favorable and unfavorable survival for all 3 datasets (log-rank P = 1.9 × 10^-8, 9.7 × 10^-3, and 1.2 × 10^-2). Comparable results were obtained in the second setup using UCSF-PDGM for training/internal testing and UPenn-GBM and RHUH-GBM for external testing (Cdt 0.670, 0.638, and 0.621). Conclusions: The proposed transformer-based survival prediction model integrates complementary information from diverse input modalities, contributing to improved glioblastoma survival prediction compared to state-of-the-art methods. Consistent performance was observed across institutions, supporting model generalizability.
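The risk-group comparison reported above (log-rank p-values between favorable and unfavorable survival groups) can be reproduced generically with the lifelines package, as in the hedged sketch below on synthetic survival times:

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(42)
t_low = rng.exponential(scale=30, size=60)    # synthetic survival times, predicted low-risk group
t_high = rng.exponential(scale=12, size=60)   # synthetic survival times, predicted high-risk group
e_low = rng.binomial(1, 0.7, size=60)         # event indicators (1 = death observed)
e_high = rng.binomial(1, 0.8, size=60)

result = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(result.p_value)                          # small p-value indicates separated risk groups
```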
Affiliation(s)
- Ahmed Gomaa
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - The Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Yixing Huang
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - The Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Amr Hagag
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Charlotte Schmitter
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Daniel Höfler
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Thomas Weissmann
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Katharina Breininger
  - Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Manuel Schmidt
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Institute of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - The Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Jenny Stritzelberger
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Department of Neurology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Daniel Delev
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Department of Neurosurgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Roland Coras
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Institute for Neuropathology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Arnd Dörfler
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Institute of Neuroradiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Oliver Schnell
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - Department of Neurosurgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Benjamin Frey
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Udo S Gaipl
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - FAU Profile Center Immunomedicine (FAU I-MED), Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Erlangen, Germany
- Sabine Semrau
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Christoph Bert
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Peter Hau
  - The Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Rainer Fietkau
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Florian Putz
  - Department of Radiation Oncology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
  - Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
  - The Bavarian Cancer Research Center (BZKF), Erlangen, Germany
13. Sun R, Wei L, Hou X, Chen Y, Han B, Xie Y, Nie S. Molecular-subtype guided automatic invasive breast cancer grading using dynamic contrast-enhanced MRI. Comput Methods Programs Biomed 2023; 242:107804. [PMID: 37716219; DOI: 10.1016/j.cmpb.2023.107804]
Abstract
BACKGROUND AND OBJECTIVES: Histological grade and molecular subtype are significant prognostic indicators of the biological behavior of invasive breast cancer (IBC) and provide valuable references for assigning personalized or precision medicine. This study evaluated a two-stage deep learning framework for IBC grading that incorporates molecular-subtype (MS) information using DCE-MRI. METHODS: In Stage I, an innovative neural network called IOS2-DA is developed, which includes a dense atrous-spatial pyramid pooling block with a pooling layer (DA) and inception-octconved blocks with double kernel squeeze-and-excitations (IOS2). This method focuses on the imaging manifestation of IBC grades and performs preliminary prediction using a novel class F1-score loss function. In Stage II, an MS attention branch is introduced to fine-tune the integrated deep vectors from IOS2-DA via Kullback-Leibler divergence. The MS-guided information is weighted with the preliminary results to obtain classification values, which are analyzed by ensemble learning for tumor grade prediction on three MRI post-contrast series. Objective assessment is quantitatively evaluated by receiver operating characteristic curve analysis, and the DeLong test is applied to measure statistical significance (P < 0.05). RESULTS: The molecular-subtype guided IOS2-DA performs significantly better than the single IOS2-DA in terms of accuracy (0.927), precision (0.942), AUC (0.927, 95% CI: [0.908, 0.946]), and F1-score (0.930). The gradient-weighted class activation maps show that the feature representations extracted from IOS2-DA are consistent with tumor areas. CONCLUSIONS: IOS2-DA demonstrates its potential for non-invasive tumor grade prediction. Given the correlation between MS and histological grade, it shows clinical promise for applying relevant biomarkers to enhance the diagnostic effectiveness of IBC grading. Therefore, DCE-MRI appears to be a feasible imaging modality for thorough preoperative assessment of breast biological behavior and carcinoma prognosis.
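The Stage II alignment step is described only at a high level; as a loose, hypothetical illustration, a Kullback-Leibler term between a molecular-subtype branch and the main grading branch could look like the following in PyTorch (shapes and branch outputs are assumptions, not details from the paper):

```python
import torch
import torch.nn.functional as F

main_logits = torch.randn(8, 3)   # stand-in grade logits from the imaging branch (batch of 8)
ms_logits = torch.randn(8, 3)     # stand-in logits from a molecular-subtype attention branch

log_p = F.log_softmax(main_logits, dim=1)
q = F.softmax(ms_logits, dim=1)
# KL(q || p): encourages the imaging branch to stay close to the MS-guided distribution.
kl = F.kl_div(log_p, q, reduction="batchmean")
print(float(kl))
```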
Affiliation(s)
- Rong Sun
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai 200093, China
- Long Wei
  - School of Computer Science and Technology, Shandong Jianzhu University, Shandong, China
- Xuewen Hou
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai 200093, China
- Yang Chen
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai 200093, China
- Baosan Han
  - Department of General Surgery, Xinhua Hospital, Affiliated with Shanghai Jiao Tong University School of Medicine, China
- Yuanzhong Xie
  - Medical Imaging Center, Tai'an Central Hospital, No. 29 Long-Tan Road, Shandong 271099, China
- Shengdong Nie
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai 200093, China
14. Yang X, Wu J, Chen X. Application of Artificial Intelligence to the Diagnosis and Therapy of Nasopharyngeal Carcinoma. J Clin Med 2023; 12:3077. [PMID: 37176518; PMCID: PMC10178972; DOI: 10.3390/jcm12093077]
Abstract
Artificial intelligence (AI) is an interdisciplinary field that encompasses a wide range of computer science disciplines, including image recognition, machine learning, human-computer interaction, robotics and so on. Recently, AI, especially deep learning algorithms, has shown excellent performance in the field of image recognition, being able to automatically perform quantitative evaluation of complex medical image features to improve diagnostic accuracy and efficiency. AI has a wider and deeper application in the medical field of diagnosis, treatment and prognosis. Nasopharyngeal carcinoma (NPC) occurs frequently in southern China and Southeast Asian countries and is the most common head and neck cancer in the region. Detecting and treating NPC early is crucial for a good prognosis. This paper describes the basic concepts of AI, including traditional machine learning and deep learning algorithms, and their clinical applications of detecting and assessing NPC lesions, facilitating treatment and predicting prognosis. The main limitations of current AI technologies are briefly described, including interpretability issues, privacy and security and the need for large amounts of annotated data. Finally, we discuss the remaining challenges and the promising future of using AI to diagnose and treat NPC.
Affiliation(s)
- Xinggang Yang
  - Division of Biotherapy, Cancer Center, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Juan Wu
  - Out-Patient Department, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
- Xiyang Chen
  - Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Guoxue Road 37, Chengdu 610041, China
15. Guo S, Zhang H, Gao Y, Wang H, Xu L, Gao Z, Guzzo A, Fortino G. Survival prediction of heart failure patients using motion-based analysis method. Comput Methods Programs Biomed 2023; 236:107547. [PMID: 37126888; DOI: 10.1016/j.cmpb.2023.107547]
Abstract
BACKGROUND AND OBJECTIVE: Survival prediction of heart failure patients is critical to improve the prognostic management of cardiovascular disease. Existing survival prediction methods focus on clinical information but lack cardiac motion information. We propose a motion-based analysis method to predict the survival risk of heart failure patients to aid clinical diagnosis and treatment. METHODS: We propose a motion-based analysis method for survival prediction of heart failure patients. First, our method proposes a hierarchical spatial-temporal structure to capture the myocardial border, which improves the model's discrimination of border features. Second, our method explores a dense optical flow structure to capture motion fields, which improves the tracking capability on cardiac images. The cardiac motion information is obtained by fusing the boundary information and motion fields of cardiac images. Finally, our method proposes a multi-modality deep-Cox structure to predict the survival risk of heart failure patients and improve the estimation of their survival probability. RESULTS: The motion-based analysis method is shown to improve the survival prediction of heart failure patients. The precision, recall, F1-score, and C-index are 0.8519, 0.8333, 0.8425, and 0.8478, respectively, which is superior to other state-of-the-art methods. CONCLUSIONS: The experimental results show that the proposed model can effectively predict the survival risk of heart failure patients. It facilitates the application of robust clinical treatment strategies.
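The dense optical-flow step mentioned above can be illustrated with OpenCV's Farneback method; the sketch below runs on two synthetic frames and is not the authors' pipeline.

```python
import numpy as np
import cv2

prev_frame = (np.random.rand(128, 128) * 255).astype(np.uint8)  # synthetic cardiac frame t
next_frame = np.roll(prev_frame, shift=2, axis=1)                # frame t+1 with small shift

# Dense optical flow: one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print(flow.shape, float(magnitude.mean()))
```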
Affiliation(s)
- Saidi Guo
  - School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Heye Zhang
  - School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Yifeng Gao
  - Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Hui Wang
  - Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Lei Xu
  - Department of Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Zhifan Gao
  - School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Antonella Guzzo
  - Department of Informatics, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Rende, Italy
- Giancarlo Fortino
  - Department of Informatics, Modeling, Electronics and Systems Engineering (DIMES), University of Calabria, Rende, Italy
16. Li S, Wan X, Deng YQ, Hua HL, Li SL, Chen XX, Zeng ML, Zha Y, Tao ZZ. Predicting prognosis of nasopharyngeal carcinoma based on deep learning: peritumoral region should be valued. Cancer Imaging 2023; 23:14. [PMID: 36759889; PMCID: PMC9912633; DOI: 10.1186/s40644-023-00530-5]
Abstract
BACKGROUND: The purpose of this study was to explore whether incorporating the peritumoral region when training deep neural networks could improve the performance of models for predicting the prognosis of NPC. METHODS: A total of 381 NPC patients who were divided into high- and low-risk groups according to progression-free survival were retrospectively included. Deeplab v3 and U-Net were trained to build segmentation models for the automatic segmentation of the tumor and suspicious lymph nodes. Five datasets were constructed by expanding 5, 10, 20, 40, and 60 pixels outward from the edge of the automatically segmented region. Inception-Resnet-V2, ECA-ResNet50t, EfficientNet-B3, and EfficientNet-B0 were trained with the original, segmented, and five newly constructed datasets to establish the classification models. The receiver operating characteristic curve was used to evaluate the performance of each model. RESULTS: The Dice coefficients of Deeplab v3 and U-Net were 0.741 (95%CI: 0.722-0.760) and 0.737 (95%CI: 0.720-0.754), respectively. The average areas under the curve (aAUCs) of the deep learning models for classification trained with the original and segmented images and with images expanded by 5, 10, 20, 40, and 60 pixels were 0.717 ± 0.043, 0.739 ± 0.016, 0.760 ± 0.010, 0.768 ± 0.018, 0.802 ± 0.013, 0.782 ± 0.039, and 0.753 ± 0.014, respectively. The models trained with the images expanded by 20 pixels obtained the best performance. CONCLUSIONS: The peritumoral region of NPC contains information related to prognosis, and incorporating this region could improve the performance of deep learning models for prognosis prediction.
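The pixel-wise expansion of the segmented region described above amounts to growing a binary mask outward; the minimal sketch below (my own illustration, not the study's code) uses binary dilation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def expand_mask(mask, pixels):
    """Grow a binary tumor mask outward by `pixels` using iterative dilation."""
    return binary_dilation(mask, iterations=pixels)

mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True                 # stand-in for an automatically segmented tumor
expanded = expand_mask(mask, 20)          # tumor plus a 20-pixel peritumoral margin
peritumoral_only = expanded & ~mask       # the ring of peritumoral tissue alone
print(mask.sum(), expanded.sum(), peritumoral_only.sum())
```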
Affiliation(s)
- Song Li
  - Department of Otorhinolaryngology, The First Affiliated Hospital, Nanjing Medical University, Nanjing, 210029, China
  - Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Xia Wan
  - Department of Otolaryngology-Head & Neck Surgery, Ezhou Central Hospital, No. 9 Wenxing Road, Ezhou, 436000, P.R. China
- Yu-Qin Deng
  - Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Hong-Li Hua
  - Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Sheng-Lan Li
  - Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Xi-Xiang Chen
  - Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Man-Li Zeng
  - Department of Otolaryngology-Head & Neck Surgery, Ezhou Central Hospital, No. 9 Wenxing Road, Ezhou, 436000, P.R. China
- Yunfei Zha
  - Department of Radiology, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
- Ze-Zhang Tao
  - Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan, Hubei, 430060, P.R. China
17. Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348; DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
  - Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
  - Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
  - Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
  - Peng Cheng Laboratory, Shenzhen, 518066, China
  - Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
18. Gu B, Meng M, Bi L, Kim J, Feng DD, Song S. Prediction of 5-year progression-free survival in advanced nasopharyngeal carcinoma with pretreatment PET/CT using multi-modality deep learning-based radiomics. Front Oncol 2022; 12:899351. [PMID: 35965589; PMCID: PMC9372795; DOI: 10.3389/fonc.2022.899351]
Abstract
Objective: Deep learning-based radiomics (DLR) has achieved great success in medical image analysis and has been considered a replacement for conventional radiomics that relies on handcrafted features. In this study, we aimed to explore the capability of DLR for the prediction of 5-year progression-free survival (PFS) in advanced nasopharyngeal carcinoma (NPC) using pretreatment PET/CT images. Methods: A total of 257 patients (170/87 patients in internal/external cohorts) with advanced NPC (TNM stage III or IVa) were enrolled. We developed an end-to-end multi-modality DLR model, in which a 3D convolutional neural network was optimized to extract deep features from pretreatment PET/CT images and predict the probability of 5-year PFS. The TNM stage, as a high-level clinical feature, could be integrated into our DLR model to further improve the prognostic performance. For a comparison between conventional radiomics and DLR, 1,456 handcrafted features were extracted, and optimal conventional radiomics methods were selected from 54 cross-combinations of six feature selection methods and nine classification methods. In addition, risk group stratification was performed with the clinical signature, conventional radiomics signature, and DLR signature. Results: Our multi-modality DLR model using both PET and CT achieved higher prognostic performance (area under the receiver operating characteristic curve (AUC) = 0.842 ± 0.034 and 0.823 ± 0.012 for the internal and external cohorts) than the optimal conventional radiomics method (AUC = 0.796 ± 0.033 and 0.782 ± 0.012). Furthermore, the multi-modality DLR model outperformed single-modality DLR models using only PET (AUC = 0.818 ± 0.029 and 0.796 ± 0.009) or only CT (AUC = 0.657 ± 0.055 and 0.645 ± 0.021). For risk group stratification, the conventional radiomics signature and DLR signature enabled significant differences between the high- and low-risk patient groups in both the internal and external cohorts (p < 0.001), while the clinical signature failed in the external cohort (p = 0.177). Conclusion: Our study identified potential prognostic tools for survival prediction in advanced NPC, suggesting that DLR could provide complementary value to the current TNM staging.
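The conventional-radiomics baseline described above crosses feature-selection methods with classifiers and keeps the best combination; the hedged sketch below shows that pattern with a small illustrative subset of methods (and far fewer synthetic features than the paper's 1,456), using scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a handcrafted radiomics feature table (170 patients).
X, y = make_classification(n_samples=170, n_features=300, n_informative=20, random_state=0)

selectors = {"anova": SelectKBest(f_classif, k=20),
             "mutual_info": SelectKBest(mutual_info_classif, k=20)}
classifiers = {"logreg": LogisticRegression(max_iter=1000),
               "rf": RandomForestClassifier(n_estimators=200, random_state=0)}

# Cross every selector with every classifier and score each combination by AUC.
for s_name, selector in selectors.items():
    for c_name, clf in classifiers.items():
        pipe = Pipeline([("select", selector), ("clf", clf)])
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{s_name} + {c_name}: AUC = {auc:.3f}")
```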
Collapse
Affiliation(s)
- Bingxin Gu
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Center for Biomedical Imaging, Fudan University, Shanghai, China
- Shanghai Engineering Research Center of Molecular Imaging Probes, Shanghai, China
- Key Laboratory of Nuclear Physics and Ion-beam Application Ministry of Education (MOE), Fudan University, Shanghai, China
| | - Mingyuan Meng
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
| | - Lei Bi
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
| | - Jinman Kim
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
| | - David Dagan Feng
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
| | - Shaoli Song
- Department of Nuclear Medicine, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Center for Biomedical Imaging, Fudan University, Shanghai, China
- Shanghai Engineering Research Center of Molecular Imaging Probes, Shanghai, China
- Key Laboratory of Nuclear Physics and Ion-beam Application Ministry of Education (MOE), Fudan University, Shanghai, China
- Department of Nuclear Medicine, Shanghai Proton and Heavy Ion Center, Shanghai, China
| |
Collapse
|
19
|
Pei W, Wang C, Liao H, Chen X, Wei Y, Huang X, Liang X, Bao H, Su D, Jin G. MRI-based random survival Forest model improves prediction of progression-free survival to induction chemotherapy plus concurrent Chemoradiotherapy in Locoregionally Advanced nasopharyngeal carcinoma. BMC Cancer 2022; 22:739. [PMID: 35794590 PMCID: PMC9261049 DOI: 10.1186/s12885-022-09832-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 06/27/2022] [Indexed: 12/08/2022] Open
Abstract
Background The present study aimed to explore the application value of the random survival forest (RSF) model and the Cox model in predicting progression-free survival (PFS) among patients with locoregionally advanced nasopharyngeal carcinoma (LANPC) after induction chemotherapy plus concurrent chemoradiotherapy (IC + CCRT). Methods Eligible LANPC patients who underwent magnetic resonance imaging (MRI) scans before treatment were subjected to radiomics feature extraction. Radiomics and clinical features of patients in the training cohort were subjected to RSF analysis to predict PFS and were tested in the testing cohort. The performance of an RSF model with clinical and radiologic predictors was assessed with the area under the receiver operating characteristic (ROC) curve (AUC) and the DeLong test and compared with Cox models based on clinical and radiologic parameters. Further, the Kaplan-Meier method was used for risk stratification of patients. Results A total of 294 LANPC patients (206 in the training cohort; 88 in the testing cohort) were enrolled and underwent MRI scans before treatment. The AUC values of the clinical Cox model, radiomics Cox model, clinical + radiomics Cox model, and clinical + radiomics RSF model were 0.545, 0.648, 0.648, and 0.899 (training cohort) and 0.566, 0.736, 0.730, and 0.861 (testing cohort) for 3-year PFS, and 0.556, 0.604, 0.611, and 0.897 (training cohort) and 0.591, 0.661, 0.676, and 0.847 (testing cohort) for 5-year PFS, respectively. The DeLong test showed statistically significant differences between the RSF model and the three Cox models, with the RSF model markedly improving prediction performance (P < 0.001). Additionally, the PFS of the high-risk group was lower than that of the low-risk group in the RSF model (P < 0.001), whereas the groups were comparable in the Cox models (P > 0.05). Conclusion The RSF model may be a potential tool for prognostic prediction and risk stratification of LANPC patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-022-09832-6.
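As a rough illustration of the comparison above, a random survival forest and a Cox proportional hazards model can be fitted on combined clinical and radiomics features with scikit-survival. This is a minimal sketch on assumed, synthetic data, not the study's pipeline, and it scores the models with Harrell's C-index rather than the time-dependent AUC reported in the paper.

```python
# Minimal sketch (synthetic data, not the study's code): random survival forest vs Cox model
# on combined clinical + radiomics features, scored with Harrell's concordance index.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(294, 12))                      # clinical + radiomics features (synthetic)
time = rng.exponential(60.0, size=294)              # PFS time in months (synthetic)
event = rng.integers(0, 2, size=294).astype(bool)   # progression observed?
y = Surv.from_arrays(event=event, time=time)

# 206 / 88 split, mirroring the training/testing cohort sizes
X_tr, X_te, y_tr, y_te = X[:206], X[206:], y[:206], y[206:]

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=10, random_state=0).fit(X_tr, y_tr)
cox = CoxPHSurvivalAnalysis().fit(X_tr, y_tr)

# .score() returns Harrell's C-index on the held-out cohort
print("RSF C-index:", rsf.score(X_te, y_te))
print("Cox C-index:", cox.score(X_te, y_te))
```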
Collapse
|
20
|
Meng M, Gu B, Bi L, Song S, Feng DD, Kim J. DeepMTS: Deep Multi-task Learning for Survival Prediction in Patients With Advanced Nasopharyngeal Carcinoma Using Pretreatment PET/CT. IEEE J Biomed Health Inform 2022; 26:4497-4507. [PMID: 35696469 DOI: 10.1109/jbhi.2022.3181791] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Nasopharyngeal Carcinoma (NPC) is a malignant epithelial cancer arising from the nasopharynx. Survival prediction is a major concern for NPC patients, as it provides early prognostic information to plan treatments. Recently, deep survival models based on deep learning have demonstrated the potential to outperform traditional radiomics-based survival prediction models. Deep survival models usually use image patches covering the whole target regions (e.g., nasopharynx for NPC) or containing only segmented tumor regions as the input. However, the models using the whole target regions will also include non-relevant background information, while the models using segmented tumor regions will disregard potentially prognostic information outside the primary tumors (e.g., local lymph node metastasis and adjacent tissue invasion). In this study, we propose a 3D end-to-end Deep Multi-Task Survival model (DeepMTS) for joint survival prediction and tumor segmentation in advanced NPC from pretreatment PET/CT. Our novelty is the introduction of a hard-sharing segmentation backbone to guide the extraction of local features related to the primary tumors, which reduces the interference from non-relevant background information. In addition, we also introduce a cascaded survival network to capture the prognostic information outside the primary tumors and further leverage the global tumor information (e.g., tumor size, shape, and locations) derived from the segmentation backbone. Our experiments with two clinical datasets demonstrate that our DeepMTS can consistently outperform traditional radiomics-based survival prediction models and existing deep survival models.
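To make the hard-sharing multi-task idea concrete, here is a minimal, hypothetical sketch (not the published DeepMTS architecture): a shared 3D encoder feeds both a segmentation decoder and a survival head, so segmentation supervision guides the features used for risk prediction. All layer sizes and names are illustrative assumptions.

```python
# Minimal sketch (not the published DeepMTS code): a shared encoder with a segmentation
# decoder and a survival head, trained jointly so segmentation guides feature learning.
import torch
import torch.nn as nn

class MultiTaskSegSurv(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                       # shared ("hard-sharing") backbone
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.decoder = nn.Sequential(                       # segmentation branch
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(),
            nn.Conv3d(8, 1, 1),                             # primary-tumor logits
        )
        self.survival = nn.Sequential(                      # survival branch on shared features
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1),
        )

    def forward(self, petct):                               # petct: (B, 2, D, H, W)
        feats = self.encoder(petct)
        return self.decoder(feats), self.survival(feats)    # (segmentation logits, risk score)

model = MultiTaskSegSurv()
seg, risk = model(torch.randn(1, 2, 64, 64, 64))
print(seg.shape, risk.shape)                                # (1, 1, 64, 64, 64) and (1, 1)
# In training, a Dice/cross-entropy segmentation loss would be combined with a survival
# loss (e.g., Cox partial likelihood) applied to the risk score.
```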
Collapse
|
21
|
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 04/20/2022] [Accepted: 06/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images with multiple feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for data training between different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, so that a single generation network can establish the non-linear mapping relationship between multiple image domains. The discriminator used a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images in both shallow and deep aspects. The accuracy of the pseudo-medical images was verified in terms of anatomy and dosimetry. RESULTS In the three synthesis directions, namely, CBCT → CT, CBCT → MRI, and MRI → CT, there were significant differences (p < 0.05) in the three-fold cross-validation results on the PSNR and SSIM metrics between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, for TGAN, the MAE metric results in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI metric results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the dose uncertainty measurements in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (P < 0.05), and the differences were statistically significant. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained by the new model was 94.94% (0.73%), and the numerical results were better than those of the three other comparison models. CONCLUSIONS The pseudo-medical images acquired with TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
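The anatomical-accuracy metrics reported above (MAE, PSNR, SSIM) are straightforward to compute; the following is a small illustrative sketch on a synthetic image pair using NumPy and scikit-image, not the study's evaluation code.

```python
# Minimal sketch (illustrative, synthetic data): MAE, PSNR, and SSIM between a "real" image
# and a noisy "pseudo" image, as used to assess pseudo-CT anatomical accuracy.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real_ct = rng.uniform(0, 1, size=(128, 128))
pseudo_ct = np.clip(real_ct + rng.normal(0, 0.05, size=(128, 128)), 0, 1)

mae = np.mean(np.abs(real_ct - pseudo_ct))
psnr = peak_signal_noise_ratio(real_ct, pseudo_ct, data_range=1.0)
ssim = structural_similarity(real_ct, pseudo_ct, data_range=1.0)
print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```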
Collapse
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
| | - Qianyi Xi
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
| | - Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China.
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
| |
Collapse
|
22
|
Wang W, Wang F, Chen Q, Ouyang S, Iwamoto Y, Han X, Lin L, Hu H, Tong R, Chen YW. Phase Attention Model for Prediction of Early Recurrence of Hepatocellular Carcinoma With Multi-Phase CT Images and Clinical Data. Front Radiol 2022; 2:856460. [PMID: 37492657 PMCID: PMC10365106 DOI: 10.3389/fradi.2022.856460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Accepted: 02/24/2022] [Indexed: 07/27/2023]
Abstract
Hepatocellular carcinoma (HCC) is a primary liver cancer with a high mortality rate. It is one of the most common malignancies worldwide, especially in Asia, Africa, and southern Europe. Although surgical resection is an effective treatment, patients with HCC are at risk of recurrence after surgery. Preoperative early recurrence prediction for patients with liver cancer can help physicians develop treatment plans and guide patients in postoperative follow-up. However, conventional methods based only on clinical data ignore the imaging information of patients. Several studies have used radiomic models for early recurrence prediction in HCC patients with good results, and the medical images of patients have been shown to be effective in predicting the recurrence of HCC. In recent years, deep learning models have demonstrated the potential to outperform radiomics-based models. In this paper, we propose a prediction model based on deep learning that contains intra-phase attention and inter-phase attention. Intra-phase attention focuses on important information across channels and space within the same phase, whereas inter-phase attention focuses on important information between different phases. We also propose a fusion model to combine the image features with clinical data. Our experimental results show that our fusion model outperforms models that use clinical data only or CT images only. Our model achieved a prediction accuracy of 81.2%, and the area under the curve was 0.869.
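As a rough, hypothetical sketch of the attention-based fusion described above (not the authors' model), the code below re-weights channels within each CT phase (intra-phase attention, simplified here to channel attention only), weights the phases against each other (inter-phase attention), and concatenates the pooled image feature with clinical data for classification. All dimensions and names are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions): intra-phase channel attention, inter-phase
# attention across multi-phase CT features, and late fusion with clinical data.
import torch
import torch.nn as nn

class PhaseAttentionFusion(nn.Module):
    def __init__(self, channels=32, n_clinical=8):
        super().__init__()
        self.backbone = nn.Sequential(              # shared per-phase feature extractor
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.intra = nn.Sequential(                 # channel (intra-phase) attention
            nn.Linear(channels, channels), nn.Sigmoid(),
        )
        self.inter = nn.Linear(channels, 1)         # phase-level (inter-phase) attention
        self.classifier = nn.Sequential(
            nn.Linear(channels + n_clinical, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, phases, clinical):            # phases: (B, P, 1, H, W)
        B, P = phases.shape[:2]
        feats = self.backbone(phases.flatten(0, 1)).flatten(1).view(B, P, -1)
        feats = feats * self.intra(feats)            # intra-phase channel re-weighting
        weights = torch.softmax(self.inter(feats), dim=1)   # one weight per phase
        fused = (weights * feats).sum(dim=1)         # inter-phase weighted sum
        return self.classifier(torch.cat([fused, clinical], dim=1))

model = PhaseAttentionFusion()
out = model(torch.randn(4, 3, 1, 64, 64), torch.randn(4, 8))
print(out.shape)   # torch.Size([4, 1]) — predicted early-recurrence probability
```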
Collapse
Affiliation(s)
- Weibin Wang
- Graduate School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
| | - Fang Wang
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China
| | - Qingqing Chen
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China
| | - Shuyi Ouyang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Yutaro Iwamoto
- Graduate School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
| | - Xianhua Han
- Graduate School of Information Science and Engineering, Yamaguchi University, Yamaguchi-shi, Japan
| | - Lanfen Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Hongjie Hu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University, Hangzhou, China
| | - Ruofeng Tong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Zhejiang Lab, Research Center for Healthcare Data Science, Hangzhou, China
| | - Yen-Wei Chen
- Graduate School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Zhejiang Lab, Research Center for Healthcare Data Science, Hangzhou, China
| |
Collapse
|
23
|
Bao D, Liu Z, Geng Y, Li L, Xu H, Zhang Y, Hu L, Zhao X, Zhao Y, Luo D. Baseline MRI-based radiomics model assisted predicting disease progression in nasopharyngeal carcinoma patients with complete response after treatment. Cancer Imaging 2022; 22:10. [PMID: 35090572 PMCID: PMC8800208 DOI: 10.1186/s40644-022-00448-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 12/31/2021] [Indexed: 12/04/2022] Open
Abstract
Background Accurate pretreatment prediction of disease progression in nasopharyngeal carcinoma is key to intensifying therapeutic strategies for high-risk individuals. Our aim was to evaluate the value of baseline MRI-based radiomics machine-learning models in predicting disease progression in nasopharyngeal carcinoma patients who achieved complete response after treatment. Methods In this retrospective study, 171 patients with pathologically confirmed nasopharyngeal carcinoma were included. Using a hold-out validation scheme (7:3), relevant radiomic features were selected with the least absolute shrinkage and selection operator (LASSO) method based on baseline T2-weighted fat-suppression and contrast-enhanced T1-weighted images in the training cohort. After Pearson's correlation analysis of the selected radiomic features, multivariate logistic regression analysis was applied to select radiomic features and clinical characteristics. A logistic regression model and a support vector machine classifier were then used to build the predictive models. The predictive accuracy of each model was evaluated by ROC analysis, with sensitivity, specificity and AUC calculated in the validation cohort. Results A prediction model using logistic regression analysis comprising 4 radiomics features (HGLZE_T2H, HGLZE_T1, LDLGLE_T1, and GLNU_T1) and 5 clinical features (histology, T stage, N stage, smoking history, and age) showed the best performance, with an AUC of 0.75 in the training cohort (95% CI: 0.66–0.83) and 0.77 in the validation cohort (95% CI: 0.64–0.90). The nine independent impact factors were entered into a nomogram. The calibration curves for the probability of 3-year disease progression showed good agreement. Decision curve analysis indicated satisfactory clinical utility for this prediction model. Conclusions A radiomics model derived from pretreatment MRI showed good performance for predicting disease progression in nasopharyngeal carcinoma and may help to improve clinical decision making. Supplementary Information The online version contains supplementary material available at 10.1186/s40644-022-00448-4.
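The selection-then-classification workflow above can be illustrated with scikit-learn; the following is a minimal sketch on synthetic data (feature counts, labels, and the LASSO-on-binary-label shortcut are illustrative assumptions, not the study's exact pipeline).

```python
# Minimal sketch (synthetic data, not the study's code): LASSO-based radiomic feature
# selection followed by a logistic-regression model that combines the retained radiomic
# features with clinical variables, evaluated by AUC on a held-out 7:3 split.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(171, 100))          # e.g., T1/T2 texture features (synthetic)
clinical = rng.normal(size=(171, 5))             # e.g., histology, T/N stage, smoking, age
progression = rng.integers(0, 2, size=171)       # 3-year disease-progression label (synthetic)

X_tr, X_te, c_tr, c_te, y_tr, y_te = train_test_split(
    radiomics, clinical, progression, test_size=0.3, random_state=0, stratify=progression)

# LASSO selects a sparse subset of radiomic features
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)

# Combined radiomics + clinical logistic regression model
clf = LogisticRegression(max_iter=1000).fit(np.hstack([X_tr[:, keep], c_tr]), y_tr)
prob = clf.predict_proba(np.hstack([X_te[:, keep], c_te]))[:, 1]
print("validation AUC:", roc_auc_score(y_te, prob))
```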
Collapse
|
24
|
Ng WT, But B, Choi HCW, de Bree R, Lee AWM, Lee VHF, López F, Mäkitie AA, Rodrigo JP, Saba NF, Tsang RKY, Ferlito A. Application of Artificial Intelligence for Nasopharyngeal Carcinoma Management - A Systematic Review. Cancer Manag Res 2022; 14:339-366. [PMID: 35115832 PMCID: PMC8801370 DOI: 10.2147/cmar.s341583] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Accepted: 12/25/2021] [Indexed: 12/15/2022] Open
Abstract
INTRODUCTION Nasopharyngeal carcinoma (NPC) is endemic to Eastern and South-Eastern Asia, and, in 2020, 77% of global cases were diagnosed in these regions. Apart from its distinct epidemiology, its natural behavior, treatment, and prognosis differ from those of other head and neck cancers. With the growing use of artificial intelligence (AI), especially deep learning (DL), in head and neck cancer care, we sought to explore the unique clinical applications and implementation directions of AI in the management of NPC. METHODS A search protocol was performed to collect publications using AI, machine learning (ML) and DL in NPC management from PubMed, Scopus and Embase. The articles were filtered using inclusion and exclusion criteria, and the quality of the papers was assessed. Data were extracted from the finalized articles. RESULTS A total of 78 articles were reviewed after removing duplicates and papers that did not meet the inclusion and exclusion criteria. After quality assessment, 60 papers were included in the current study. There were four main types of applications: auto-contouring, diagnosis, prognosis, and miscellaneous applications (especially radiotherapy planning). Different forms of convolutional neural networks (CNNs) accounted for the majority of the DL algorithms used, while the artificial neural network (ANN) was the most frequently implemented ML model. CONCLUSION An overall positive impact of AI implementation in the management of NPC was identified. With improving AI algorithms, we envisage that AI will soon be available as a routine application in clinical settings.
Collapse
Affiliation(s)
- Wai Tong Ng
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Barton But
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Horace C W Choi
- Department of Public Health, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Remco de Bree
- Department of Head and Neck Surgical Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Anne W M Lee
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Victor H F Lee
- Clinical Oncology Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen, People’s Republic of China
- Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, China
| | - Fernando López
- Department of Otolaryngology, Hospital Universitario Central de Asturias (HUCA), Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo, Oviedo, 33011, Spain
- Spanish Biomedical Research Network Centre in Oncology, CIBERONC, Madrid, 28029, Spain
| | - Antti A Mäkitie
- Department of Otorhinolaryngology - Head and Neck Surgery, HUS Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Division of Ear, Nose and Throat Diseases, Department of Clinical Sciences, Intervention and Technology, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden
| | - Juan P Rodrigo
- Department of Otolaryngology, Hospital Universitario Central de Asturias (HUCA), Instituto de Investigación Sanitaria del Principado de Asturias (ISPA), Instituto Universitario de Oncología del Principado de Asturias (IUOPA), University of Oviedo, Oviedo, 33011, Spain
- Spanish Biomedical Research Network Centre in Oncology, CIBERONC, Madrid, 28029, Spain
| | - Nabil F Saba
- Department of Hematology and Medical Oncology, Emory University School of Medicine, Atlanta, GA, USA
| | - Raymond K Y Tsang
- Division of Otorhinolaryngology, Department of Surgery, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong, People's Republic of China
| | - Alfio Ferlito
- Coordinator of the International Head and Neck Scientific Group, Padua, Italy
| |
Collapse
|
25
|
Li S, Deng YQ, Zhu ZL, Hua HL, Tao ZZ. A Comprehensive Review on Radiomics and Deep Learning for Nasopharyngeal Carcinoma Imaging. Diagnostics (Basel) 2021; 11:1523. [PMID: 34573865 PMCID: PMC8465998 DOI: 10.3390/diagnostics11091523] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 08/10/2021] [Accepted: 08/19/2021] [Indexed: 12/23/2022] Open
Abstract
Nasopharyngeal carcinoma (NPC) is one of the most common malignant tumours of the head and neck, and improving the efficiency of its diagnosis and treatment strategies is an important goal. With the development of the combination of artificial intelligence (AI) technology and medical imaging in recent years, an increasing number of studies have analysed NPC images using AI tools, especially radiomics and artificial neural network methods. In this review, we present a comprehensive overview of NPC imaging research based on radiomics and deep learning. These studies point to promising prospects for the diagnosis and treatment of NPC. The deficiencies of the current studies and the potential of radiomics and deep learning for NPC imaging are discussed. We conclude that future research should establish a large-scale labelled dataset of NPC images and that studies focused on screening for NPC using AI are needed.
Collapse
Affiliation(s)
- Song Li
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Yu-Qin Deng
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Zhi-Ling Zhu
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital Affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China;
| | - Hong-Li Hua
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| | - Ze-Zhang Tao
- Department of Otolaryngology-Head and Neck Surgery, Renmin Hospital of Wuhan University, 238 Jie-Fang Road, Wuhan 430060, China; (S.L.); (Y.-Q.D.); (H.-L.H.)
| |
Collapse
|
26
|
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766 DOI: 10.1111/1754-9485.13261] [Citation(s) in RCA: 174] [Impact Index Per Article: 58.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Accepted: 05/23/2021] [Indexed: 12/21/2022]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data that are used to train the model and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so, to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature in which data augmentation was utilised on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
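For context, the "basic" category above covers simple spatial and intensity transforms; the snippet below is an illustrative sketch (not taken from the review) that applies a random flip, a small rotation, and intensity scaling to a synthetic 3D volume with NumPy and SciPy.

```python
# Minimal sketch (illustrative): basic on-the-fly augmentations for a 3D CT/MRI volume —
# random left-right flip, small in-plane rotation, and mild intensity scaling.
import numpy as np
from scipy.ndimage import rotate

def augment(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple random spatial and intensity transforms to a 3D image (z, y, x)."""
    if rng.random() < 0.5:                                   # random left-right flip
        volume = np.flip(volume, axis=2)
    angle = rng.uniform(-10, 10)                             # small in-plane rotation (degrees)
    volume = rotate(volume, angle, axes=(1, 2), reshape=False, order=1, mode="nearest")
    scale = rng.uniform(0.9, 1.1)                            # mild intensity scaling
    return volume * scale

rng = np.random.default_rng(0)
ct = rng.normal(size=(32, 64, 64))                            # synthetic volume
augmented = augment(ct, rng)
print(augmented.shape)                                        # (32, 64, 64)
```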
Collapse
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
| | - Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
| | - Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
| | - Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
| | - Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
| | - Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
| |
Collapse
|