1
Li L, Yu J, Li Y, Wei J, Fan R, Wu D, Ye Y. Multi-sequence generative adversarial network: better generation for enhanced magnetic resonance imaging images. Front Comput Neurosci 2024; 18:1365238. [PMID: 38841427 PMCID: PMC11151883 DOI: 10.3389/fncom.2024.1365238]
Abstract
Introduction: MRI is one of the most commonly used diagnostic methods in clinical practice, especially for brain diseases. MRI offers many sequences, but T1CE images can be obtained only with contrast agents. Many patients (such as cancer patients) must undergo registration of multiple MRI sequences for diagnosis, especially the contrast-enhanced sequence. However, for some patients, such as pregnant women and children, contrast agents are difficult to use, and their many adverse reactions can pose a significant risk. With the continuing development of deep learning, generative adversarial networks have made it possible to extract features from one type of image in order to generate another.
Methods: We propose a generative adversarial network with multimodal inputs and end-to-end decoding, based on the pix2pix model. We assessed the generated images with four evaluation metrics: NMSE, RMSE, SSIM, and PSNR.
Results: Statistical comparison of our proposed model with pix2pix found significant differences between the two: our model achieved higher SSIM and PSNR and lower NMSE and RMSE. We also found that T1W and T2W images as inputs performed better than other combinations, providing new ideas for subsequent work on generating contrast-enhanced magnetic resonance sequence images. With our model, contrast-enhanced magnetic resonance images can be generated from non-enhanced sequences.
Discussion: This is significant because it could greatly reduce the use of contrast agents, protecting populations such as pregnant women and children for whom they are contraindicated. Contrast agents are also relatively expensive, so this generation method may bring substantial economic benefits.
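For reference, the four image-similarity metrics named in this abstract can be computed as follows. This is a minimal NumPy sketch of the standard formulas, not the authors' implementation, and it uses a simplified single-window SSIM:

```python
import numpy as np

def nmse(ref, gen):
    """Normalized mean squared error (lower is better)."""
    return np.sum((ref - gen) ** 2) / np.sum(ref ** 2)

def rmse(ref, gen):
    """Root mean squared error (lower is better)."""
    return np.sqrt(np.mean((ref - gen) ** 2))

def psnr(ref, gen, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - gen) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, gen, data_range=1.0):
    """Single-window SSIM over the whole image; library versions
    slide a local window over the image and average the results."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), gen.mean()
    var_x, var_y = ref.var(), gen.var()
    cov = ((ref - mu_x) * (gen - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Library implementations (e.g., scikit-image's `structural_similarity`) compute SSIM over a sliding window, so their values differ slightly from this global variant.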
Affiliation(s)
- Leizi Li
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Jingchun Yu
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Yijin Li
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Jinbo Wei
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Ruifang Fan
- Guangzhou Key Laboratory of Subtropical Biodiversity and Biomonitoring and Guangdong Provincial Engineering Technology Research Center for Drug and Food Biological Resources Processing and Comprehensive Utilization, School of Life Sciences, South China Normal University, Guangzhou, China
- Dieen Wu
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Yufeng Ye
- South China Normal University-Panyu Central Hospital Joint Laboratory of Basic and Translational Medical Research, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
2
Tang L, Zhang Z, Yang J, Feng Y, Sun S, Liu B, Ma J, Liu J, Shao H. A New Automated Prognostic Prediction Method Based on Multi-Sequence Magnetic Resonance Imaging for Hepatic Resection of Colorectal Cancer Liver Metastases. IEEE J Biomed Health Inform 2024; 28:1528-1539. [PMID: 38446655 DOI: 10.1109/jbhi.2024.3350247]
Abstract
Colorectal cancer is a prevalent and life-threatening disease, and colorectal cancer liver metastasis (CRLM) exhibits the highest mortality rate. Currently, surgery stands as the most effective curative option for eligible patients. However, owing to the insufficient performance of traditional methods and the lack of multi-modality MRI feature complementarity in existing deep learning methods, prognosis after surgical resection of CRLM has not been fully explored. This paper proposes a new method, the multi-modal guided complementary network (MGCNet), which employs multi-sequence MRI to predict 1-year recurrence and recurrence-free survival in patients after CRLM resection. In light of the complexity and redundancy of features in the liver region, we designed a multi-modal guided local feature fusion module that uses tumor features to guide the dynamic fusion of prognostically relevant local features within the liver. To address the loss of spatial information during multi-sequence MRI fusion, the cross-modal complementary external attention module adds an external mask branch to establish inter-layer correlation. The model achieved an accuracy (ACC) of 0.79, an area under the curve (AUC) of 0.84, a C-index of 0.73, and a hazard ratio (HR) of 4.0, a significant improvement over state-of-the-art methods. Additionally, MGCNet exhibits good interpretability.
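The AUC reported above is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal pure-Python sketch of this pairwise formulation (illustrative only, not MGCNet code):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative pairs where the positive case receives the
    higher score (ties count as 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(|pos|·|neg|) form is fine for illustration; production code sorts the scores once instead.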
3
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153 DOI: 10.1615/critrevoncog.2023050852]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response, in the setting of often confusing imaging findings, provides a unique niche in which CNNs can help assess glioma treatment response. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to resolve this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, it discusses the future direction of MRI-based CNNs in neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.
Affiliation(s)
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
4
Pan I, Huang RY. Artificial intelligence in neuroimaging of brain tumors: reality or still promise? Curr Opin Neurol 2023; 36:549-556. [PMID: 37973024 DOI: 10.1097/wco.0000000000001213]
Abstract
PURPOSE OF REVIEW To provide an updated overview of artificial intelligence (AI) applications in neuro-oncologic imaging and discuss current barriers to wider clinical adoption. RECENT FINDINGS A wide variety of AI applications in neuro-oncologic imaging have been developed and researched, spanning tasks from pretreatment brain tumor classification and segmentation, preoperative planning, radiogenomics, prognostication and survival prediction, and posttreatment surveillance to differentiating between pseudoprogression and true disease progression. While earlier studies were largely based on data from a single institution, more recent studies have demonstrated that these algorithms also perform well on external data from other institutions. Nevertheless, most of these algorithms have yet to see widespread clinical adoption, given the lack of prospective studies demonstrating their efficacy and the logistical difficulties involved in clinical implementation. SUMMARY While there has been significant progress in AI and neuro-oncologic imaging, clinical utility remains to be demonstrated. The next wave of progress in this area will be driven by prospective studies that measure outcomes relevant to clinical practice and go beyond the retrospective studies that primarily aim to demonstrate high performance.
Affiliation(s)
- Ian Pan
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School
5
Zhang S, Yin L, Ma L, Sun H. Artificial Intelligence Applications in Glioma With 1p/19q Co-Deletion: A Systematic Review. J Magn Reson Imaging 2023; 58:1338-1352. [PMID: 37083159 DOI: 10.1002/jmri.28737]
Abstract
As an important genomic marker for oligodendrogliomas, early determination of 1p/19q co-deletion status is critical for guiding therapy and predicting prognosis in patients with glioma. The purpose of this study was to systematically review the literature on magnetic resonance imaging (MRI) with artificial intelligence (AI) methods for predicting 1p/19q co-deletion status in glioma. PubMed, Scopus, Embase, and IEEE Xplore were searched in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Methodological quality was assessed according to the Quality Assessment of Diagnostic Accuracy Studies-2. Finally, 28 studies were included in the quantitative analysis. Twenty-four studies reported diagnostic test accuracy with an area under the ROC curve of 0.71-0.98; the remaining four studies, with no available AUC, reported an accuracy of 0.75-0.89. The included studies varied widely in imaging sequences, input features, and modeling methods. This review highlights that integrating MRI with AI technology is a potential tool for determining 1p/19q status preoperatively and noninvasively, which may aid clinical decision-making. However, the reliability and feasibility of this approach still need to be further validated and improved in real clinical settings. EVIDENCE LEVEL: 2. TECHNICAL EFFICACY: 2.
Affiliation(s)
- Simin Zhang
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Lijuan Yin
- Department of Pathology, West China Hospital of Sichuan University, Chengdu, China
- Lu Ma
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
6
Liu Y, Wu M. Deep learning in precision medicine and focus on glioma. Bioeng Transl Med 2023; 8:e10553. [PMID: 37693051 PMCID: PMC10486341 DOI: 10.1002/btm2.10553]
Abstract
Deep learning (DL) has been successfully applied to a range of tasks in different fields. In medicine, DL methods have also been used to improve the efficiency of disease diagnosis. In this review, we first summarize the history of artificial intelligence models, describe the subtypes of machine learning and the different DL networks, and then explore their applications in different fields of precision medicine, such as cardiology, gastroenterology, ophthalmology, dermatology, and oncology. By mining more information and extracting multilevel features from medical data, DL helps doctors assess diseases automatically and monitor patients' physical health. In gliomas, research on the application prospects of DL has mainly involved magnetic resonance imaging and subsequently pathological slides. However, multi-omics data, such as whole-exome sequencing, RNA sequencing, proteomics, and epigenomics, have not yet been covered. In general, the quality and quantity of DL datasets still need further improvement, and richer multi-omics characteristics will bring more comprehensive and accurate diagnosis in precision medicine and glioma.
Affiliation(s)
- Yihao Liu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China
- Minghua Wu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China
7
Long ZC, Ding XC, Zhang XB, Sun PP, Hao FR, Li ZR, Hu M. The Efficacy of Pretreatment 18F-FDG PET-CT-Based Deep Learning Network Structure to Predict Survival in Nasopharyngeal Carcinoma. Clin Med Insights Oncol 2023; 17:11795549231171793. [PMID: 37251551 PMCID: PMC10214083 DOI: 10.1177/11795549231171793]
Abstract
Background: Previous studies have shown that 5-year survival rates of patients with nasopharyngeal carcinoma (NPC) are still not ideal despite great improvements in NPC treatment. To achieve individualized treatment of NPC, we have been looking for novel models to predict the prognosis of patients with NPC. The objective of this study was to use a novel deep learning network structural model to predict the prognosis of patients with NPC and to compare it with a traditional PET-CT model combining metabolic parameters and clinical factors. Methods: A total of 173 patients admitted to 2 institutions between July 2014 and April 2020 were included in this retrospective study; each received a PET-CT scan before treatment. The least absolute shrinkage and selection operator (LASSO) was employed to select features associated with overall survival (OS), including SUVpeak-P, T3, age, stage II, MTV-P, N1, stage III, and pathological type. We constructed 2 survival prediction models: an improved optimized adaptive multimodal task model (a 3D Coordinate Attention Convolutional Autoencoder with an uncertainty-based jointly Optimizing Cox Model, CACA-UOCM for short) and a clinical model. The predictive power of these models was assessed using the Harrell concordance index (C-index), and overall survival of patients with NPC was compared by Kaplan-Meier and log-rank tests. Results: The CACA-UOCM model could estimate OS (C-index: 0.779 for training, 0.774 for validation, and 0.819 for testing) and divide patients into low- and high-mortality-risk groups, which were significantly associated with OS (P < .001). In contrast, the C-index of the model based only on clinical variables was 0.42. Conclusions: The deep learning network model based on 18F-FDG PET/CT can serve as a reliable and powerful predictive tool for NPC and inform individualized treatment strategies.
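The Harrell concordance index used to compare these models can be sketched as follows: among comparable patient pairs, it is the fraction where the patient with the higher predicted risk fails first. This is an illustrative O(n²) implementation with hypothetical inputs, not the study's code:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's C-index. A pair (i, j) is comparable when i's
    follow-up time is shorter and i's event was observed (events[i]
    == 1, i.e., not censored). Ties in risk count as 1/2; 0.5 is
    chance-level discrimination."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable
```

Survival libraries such as lifelines provide an equivalent (and faster) `concordance_index`.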
Affiliation(s)
- Zi-Chan Long
- Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xing-Chen Ding
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xian-Bin Zhang
- Department of General Surgery and Integrated Chinese and Western Medicine, Institute of Precision Diagnosis and Treatment of Gastrointestinal Tumors, Carson International Cancer Center, Shenzhen University General Hospital, Shenzhen University, Shenzhen, China
- Peng-Peng Sun
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Fu-Rong Hao
- Department of Radiation Oncology, Weifang People's Hospital, Weifang, China
- Man Hu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
8
Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91:110-123. [PMID: 36907387 DOI: 10.1016/j.semcancer.2023.03.006]
Abstract
Glioma represents the dominant primary intracranial malignancy in the central nervous system. Artificial intelligence, which mainly includes machine learning and deep learning computational approaches, presents a unique opportunity to enhance the clinical management of glioma by improving tumor segmentation, diagnosis, differentiation, grading, treatment, prediction of clinical outcomes (prognosis and recurrence), molecular features, clinical classification, characterization of the tumor microenvironment, and drug discovery. A growing body of recent studies applies artificial intelligence-based models to disparate data sources of glioma, covering imaging modalities, digital pathology, and high-throughput multi-omics data (especially emerging single-cell RNA sequencing and spatial transcriptomics). While these early findings are promising, future studies are required to normalize artificial intelligence-based models to improve the generalizability and interpretability of the results. Despite these outstanding issues, targeted clinical application of artificial intelligence approaches in glioma will facilitate the development of precision medicine in this field. If these challenges can be overcome, artificial intelligence has the potential to profoundly change the way patients with, or at risk of, glioma are provided with more rational care.
Affiliation(s)
- Jiefeng Luo
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Mika Pan
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Ke Mo
- Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Yingwei Mao
- Department of Biology, Pennsylvania State University, University Park, PA 16802, USA
- Donghua Zou
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
9
Gheisari M, Ebrahimzadeh F, Rahimi M, Moazzamigodarzi M, Liu Y, Dutta Pramanik PK, Heravi MA, Mehbodniya A, Ghaderzadeh M, Feylizadeh MR, Kosari S. Deep learning: Applications, architectures, models, tools, and frameworks: A comprehensive survey. CAAI Trans Intell Technol 2023. [DOI: 10.1049/cit2.12180]
Affiliation(s)
- Mehdi Gheisari
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Department of Cognitive Computing, Institute of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Department of Computer Science, Islamic Azad University, Tehran, Iran
- Mohamadtaghi Rahimi
- Department of Mathematics and Statistics, Iran University of Science and Technology, Tehran, Iran
- Yang Liu
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, China
- Peng Cheng Laboratory, Shenzhen, China
- Abolfazl Mehbodniya
- Department of Electronics and Communications Engineering, Kuwait College of Science and Technology, Doha District, Kuwait
- Mustafa Ghaderzadeh
- Department of Artificial Intelligence, Smart University of Medical Sciences, Tehran, Iran
- Saeed Kosari
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China
10
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits: improved performance, enhanced generalizability, and reduced overall computational cost. This review focuses on advanced applications of MTDL for medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has flourished and demonstrated outstanding performance in many tasks, performance gaps remain in others, and accordingly we discuss the open challenges and prospective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by a cascaded MTDL model indicate that further research efforts are in high demand to escalate the performance of current models.
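The Dice score cited for the stroke-lesion challenge is the standard overlap metric for segmentation masks: twice the intersection over the sum of the mask sizes. A minimal NumPy sketch (illustrative, not from the review):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 by convention when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

A Dice of 0.51 thus means the predicted and reference lesion masks share only about half their combined area, which is why the review flags this task as an open challenge.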
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China
11
Yan J, Sun Q, Tan X, Liang C, Bai H, Duan W, Mu T, Guo Y, Qiu Y, Wang W, Yao Q, Pei D, Zhao Y, Liu D, Duan J, Chen S, Sun C, Wang W, Liu Z, Hong X, Wang X, Guo Y, Xu Y, Liu X, Cheng J, Li ZC, Zhang Z. Image-based deep learning identifies glioblastoma risk groups with genomic and transcriptomic heterogeneity: a multi-center study. Eur Radiol 2023; 33:904-914. [PMID: 36001125 DOI: 10.1007/s00330-022-09066-x]
Abstract
OBJECTIVES To develop and validate a deep learning imaging signature (DLIS) for risk stratification in patients with glioblastoma multiforme (GBM), and to investigate the biological pathways and genetic alterations underlying the DLIS. METHODS The DLIS was developed from multi-parametric MRI on a training set (n = 600) and validated on an internal validation set (n = 164), an external test set 1 (n = 100), an external test set 2 (n = 161), and a public TCIA set (n = 88). A co-profiling framework based on a radiogenomics analysis dataset (n = 127) using multiscale high-dimensional data, including imaging, transcriptome, and genome, was established to uncover the biological pathways and genetic alterations underpinning the DLIS. RESULTS The DLIS was associated with survival (log-rank p < 0.001) and was an independent predictor (p < 0.001). An integrated nomogram incorporating the DLIS achieved higher C-indices than the clinicomolecular nomogram (net reclassification improvement 0.39, p < 0.001). The DLIS significantly correlated with core pathways of GBM (the apoptosis- and cell cycle-related P53 and RB pathways, and the cell proliferation-related RTK pathway), as well as key genetic alterations (CDKN2A deletion). The prognostic value of DLIS-correlated genes was externally confirmed on TCGA/CGGA sets (p < 0.01). CONCLUSIONS Our study offers a biologically interpretable deep learning predictor of survival outcomes in patients with GBM, which is crucial for better understanding GBM prognosis and guiding individualized treatment. KEY POINTS • The MRI-based deep learning imaging signature (DLIS) stratifies GBM into risk groups with distinct molecular characteristics. • The DLIS is associated with the P53, RB, and RTK pathways and CDKN2A deletion. • The prognostic value of DLIS-correlated pathway genes is externally demonstrated.
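The log-rank comparison above rests on Kaplan-Meier survival curves, which are estimated by the product-limit method. A minimal sketch assuming distinct follow-up times, with hypothetical inputs (illustrative, not the study's code):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates. events[i] is 1 for an
    observed event, 0 for censoring. Assumes distinct follow-up times
    for simplicity; returns (event_times, survival_probabilities)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv = len(times), 1.0
    event_times, survival = [], []
    for i in order:
        if events[i] == 1:
            surv *= (at_risk - 1) / at_risk  # step down at each event
            event_times.append(times[i])
            survival.append(surv)
        at_risk -= 1  # events and censored subjects both leave the risk set
    return event_times, survival
```

Censored subjects shrink the risk set without stepping the curve down, which is what distinguishes this estimator from a naive survival fraction.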
Collapse
Affiliation(s)
- Jing Yan
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Jian she Dong Road 1, Zhengzhou, 450052, Henan province, China
| | - Qiuchang Sun
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.,University of Chinese Academy of Sciences, Beijing, China
| | - Xiangliang Tan
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
| | - Chaofeng Liang
- Department of Neurosurgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510630, China
| | - Hongmin Bai
- Department of Neurosurgery, Guangzhou General Hospital of Guangzhou Military Command, Guangzhou, 510010, China
| | - Wenchao Duan
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Tianhao Mu
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; HaploX Biotechnology, Shenzhen, Guangdong, China
- Yang Guo
- Department of Neurosurgery, Henan Provincial Hospital, Zhengzhou, 450052, Henan Province, China
- Yuning Qiu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Weiwei Wang
- Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Qiaoli Yao
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Dongling Pei
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yuanshen Zhao
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Danni Liu
- HaploX Biotechnology, Shenzhen, Guangdong, China
- Jingxian Duan
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shifu Chen
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; HaploX Biotechnology, Shenzhen, Guangdong, China
- Chen Sun
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Wenqing Wang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Zhen Liu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Xuanke Hong
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Xiangxiang Wang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yu Guo
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Yikai Xu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xianzhi Liu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Jingliang Cheng
- Department of MRI, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
- Zhi-Cheng Li
- Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; University of Chinese Academy of Sciences, Beijing, China; Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
- Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jianshe Dong Road 1, Zhengzhou, 450052, Henan Province, China
12
Multiview Deep Forest for Overall Survival Prediction in Cancer. Comput Math Methods Med 2023; 2023:7931321. [PMID: 36714327] [PMCID: PMC9876666] [DOI: 10.1155/2023/7931321]
Abstract
Predicting overall survival (OS) is crucial for planning cancer treatment. Many machine learning methods have been applied to OS prediction, but handling multiview data and avoiding overfitting remain challenging. To overcome these problems, we propose a multiview deep forest (MVDF) in this paper. MVDF learns the features of each view and fuses them using ensemble learning and multiple kernel learning. A gradient boost forest based on information bottleneck theory is then proposed to reduce redundant information and avoid overfitting. In addition, a pruning strategy for the cascaded forest is used to limit the impact of outlier data. Comprehensive experiments were carried out on a data set from West China Hospital of Sichuan University and two public data sets. The results demonstrate that our method outperforms the compared methods in predicting overall survival.
13
Yan T, Yan Z, Liu L, Zhang X, Chen G, Xu F, Li Y, Zhang L, Peng M, Wang L, Li D, Zhao D. Survival prediction for patients with glioblastoma multiforme using a Cox proportional hazards denoising autoencoder network. Front Comput Neurosci 2023; 16:916511. [PMID: 36704230] [PMCID: PMC9871481] [DOI: 10.3389/fncom.2022.916511]
Abstract
Objectives This study aimed to establish and validate a prognostic model based on magnetic resonance imaging and clinical features to predict the survival time of patients with glioblastoma multiforme (GBM). Methods A convolutional denoising autoencoder (DAE) network combined with the loss function of the Cox proportional hazards regression model was used to extract features for survival prediction. In addition, the Kaplan-Meier curve, Schoenfeld residual analysis, the time-dependent receiver operating characteristic curve, a nomogram, and a calibration curve were used to assess survival prediction ability. Results The concordance index (C-index) of the survival prediction model, which combines the DAE and the Cox proportional hazards regression model, reached 0.78 in the training set, 0.75 in the validation set, and 0.74 in the test set. Patients were divided into high- and low-risk groups based on the median prognostic index (PI). Kaplan-Meier survival analysis (p < 2e-16 in the training set, p = 3e-04 in the validation set, and p = 0.007 in the test set) showed that the survival probabilities of the groups differed significantly and that the PI produced by the network was influential in predicting survival probability. In the Schoenfeld residual check of the PI, the fitted curve of the scatter plot was roughly parallel to the x-axis and the test p-value was 0.11, indicating that the effect of the PI was independent of time and the proportional hazards assumption was satisfied. The areas under the curve were 0.843, 0.871, 0.903, and 0.941 in the training set; 0.687, 0.895, 1.000, and 0.967 in the validation set; and 0.757, 0.852, 0.683, and 0.898 in the test set.
Conclusion The survival prediction model, which combines the DAE and the Cox proportional hazards regression model, can effectively predict the prognosis of patients with GBM.
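The Cox loss referenced in this abstract is the negative partial log-likelihood of the proportional hazards model, applied to the network's scalar risk output. The sketch below is a generic, minimal illustration of that loss (our own version with made-up toy data, not the authors' implementation):

```python
import numpy as np

def neg_cox_partial_log_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow form, no ties handled).

    risk  : (n,) predicted log-risk scores from the network
    time  : (n,) follow-up times
    event : (n,) 1 if death observed, 0 if censored
    """
    order = np.argsort(-time)                 # sort by descending follow-up time
    risk, event = risk[order], event[order]
    # running log-sum-exp gives log of the risk set {j : t_j >= t_i} for each i
    log_risk_set = np.logaddexp.accumulate(risk)
    # only observed events contribute: risk_i - log(sum over risk set of exp(risk_j))
    ll = np.sum((risk - log_risk_set) * event)
    return -ll / max(event.sum(), 1)

# toy check: risks concordant with survival order give a lower loss
t = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
e = np.array([1, 1, 1, 1, 1])
good = neg_cox_partial_log_likelihood(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), t, e)
bad = neg_cox_partial_log_likelihood(np.array([2.0, 1.0, 0.0, -1.0, -2.0]), t, e)
assert good < bad
```

In the paper's setup this quantity would replace the usual classification loss during DAE training, so gradients push the encoder toward features whose linear score orders patients by hazard.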
Affiliation(s)
- Ting Yan
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Zhenpeng Yan
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Lili Liu
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Xiaoyu Zhang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Guohui Chen
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Feng Xu
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Ying Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Lijuan Zhang
- Shanxi Provincial People's Hospital, Taiyuan, China
- Meilan Peng
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Lu Wang
- Key Laboratory of Cellular Physiology of the Ministry of Education, Department of Pathology, Shanxi Medical University, Taiyuan, Shanxi, China
- Dandan Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China; Correspondence: Dandan Li
- Dong Zhao
- Department of Stomatology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China; Correspondence: Dong Zhao
14
Liang S, Dong X, Yang K, Chu Z, Tang F, Ye F, Chen B, Guan J, Zhang Y. A multi-perspective information aggregation network for automated T-staging detection of nasopharyngeal carcinoma. Phys Med Biol 2022; 67. [PMID: 36541557] [DOI: 10.1088/1361-6560/aca516]
Abstract
Accurate T-staging is important when planning personalized radiotherapy. However, T-staging via manual slice-by-slice inspection is time-consuming because tumor sizes and shapes are heterogeneous, and junior physicians find such inspection challenging. With inspiration from oncological diagnostics, we developed a multi-perspective aggregation network that incorporates various diagnosis-oriented knowledge and allows automated nasopharyngeal carcinoma T-staging detection (TSD Net). Specifically, TSD Net has a multi-branch architecture that can capture tumor size and shape information (basic knowledge), strongly correlated contextual features, and associations between the tumor and surrounding tissues. We define the association between the tumor and surrounding tissues by a signed distance map, which embeds points and tumor contours in a higher-dimensional space, yielding valuable information about the locations of tissue associations. TSD Net finally outputs a T1-T4 stage prediction by aggregating data from the three branches. We evaluated TSD Net on a T1-weighted contrast-enhanced magnetic resonance imaging database of 320 patients in a three-fold cross-validation manner. The results show that the proposed method achieves a mean area under the curve (AUC) as high as 87.95%. We also compared our method with traditional classifiers and a deep learning-based method; TSD Net is efficient and accurate and outperforms the other methods.
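A signed distance map of the kind described here can be built from a binary tumor mask with two Euclidean distance transforms. The sketch below is a generic illustration of the idea (using SciPy, with a sign convention and names of our choosing, not the authors' exact construction):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance to the tumor boundary.

    mask : boolean array, True inside the tumor.
    Convention here: positive outside the tumor, negative inside,
    with the zero-crossing at the contour.
    """
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance to tumor, for outside voxels
    inside = distance_transform_edt(mask)    # distance to background, for inside voxels
    return outside - inside

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                 # a 3x3 "tumor"
sdm = signed_distance_map(mask)
assert sdm[3, 3] < 0 < sdm[0, 0]      # negative inside, positive outside
```

Fed to a network alongside the image, such a map tells every voxel how far, and on which side of, the tumor boundary it lies, which is the "association with surrounding tissues" cue the abstract refers to.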
Affiliation(s)
- Shujun Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Xiuyu Dong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Kaifan Yang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Zhiqin Chu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Fan Tang
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Feng Ye
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Bei Chen
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Jian Guan
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
15
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Wardak Z, Timmerman R, Dan T, Lu W, Gu X. Ensemble learning for glioma patients overall survival prediction using pre-operative MRIs. Phys Med Biol 2022; 67. [PMID: 36384039] [PMCID: PMC9990877] [DOI: 10.1088/1361-6560/aca375]
Abstract
Objective: Gliomas are the most common primary brain tumors. Approximately 70% of glioma patients are diagnosed with glioblastoma, which has an average overall survival (OS) of only ∼16 months. Early survival prediction is essential for treatment decision-making in glioma patients. Here we propose an ensemble learning approach to predict the post-operative OS of glioma patients using only pre-operative MRIs. Approach: Our dataset was from the Medical Image Computing and Computer Assisted Intervention Brain Tumor Segmentation challenge 2020, which consists of multimodal pre-operative MRI scans of 235 glioma patients with survival days recorded. The backbone of our approach is a Siamese network consisting of twinned ResNet-based feature extractors followed by a 3-layer classifier. During training, the feature extractors explore intra- and inter-class traits by minimizing a contrastive loss over randomly paired 2D pre-operative MRIs, and the classifier uses the extracted features to generate labels with a cost defined by cross-entropy loss. During testing, the extracted features also define distances between the test sample and a reference set composed of the training data, generating an additional predictor via K-NN classification. The final label is the ensemble of the classifications from the Siamese model and the K-NN model. Main results: Our approach classifies glioma patients into 3 OS classes: long-survivors (>15 months), mid-survivors (10-15 months), and short-survivors (<10 months). Performance is assessed by the accuracy (ACC) and the area under the curve (AUC) of the 3-class classification. The final result achieved an ACC of 65.22% and an AUC of 0.81. Significance: Our Siamese-network-based ensemble learning approach demonstrated promising ability to mine discriminative features with minimal manual processing and generalization requirements. This prediction strategy can potentially be applied to assist timely clinical decision-making.
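The pairwise training signal described above is the classic contrastive loss: same-class pairs are pulled together, different-class pairs are pushed at least a margin apart. A minimal sketch of that loss (a generic Hadsell-style form with a margin and names we chose, not the authors' code):

```python
import numpy as np

def contrastive_loss(f1, f2, same_class, margin=1.0):
    """Contrastive loss on one pair of embeddings.

    f1, f2     : (d,) feature vectors from the twinned extractors
    same_class : 1 if the pair shares an OS class, else 0
    Same-class pairs are penalized by squared distance; different-class
    pairs are penalized only when closer than `margin`.
    """
    d = np.linalg.norm(f1 - f2)
    return same_class * d**2 + (1 - same_class) * max(margin - d, 0.0) ** 2

# identical same-class embeddings: no penalty
assert contrastive_loss(np.zeros(3), np.zeros(3), 1) == 0.0
# different-class pair already beyond the margin: no penalty either
assert contrastive_loss(np.zeros(3), np.array([2.0, 0.0, 0.0]), 0) == 0.0
```

Because the loss shapes the embedding distances directly, the same distances can later be reused for the K-NN predictor that the paper ensembles with the Siamese classifier.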
Affiliation(s)
- Zi Yang
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Radiation Oncology, Stanford University, Palo Alto, CA 94305, USA
16
Yang L, Du D, Zheng T, Liu L, Wang Z, Du J, Yi H, Cui Y, Liu D, Fang Y. Deep learning and radiomics to predict the mitotic index of gastrointestinal stromal tumors based on multiparametric MRI. Front Oncol 2022; 12:948557. [PMID: 36505814] [PMCID: PMC9727176] [DOI: 10.3389/fonc.2022.948557]
Abstract
Introduction Preoperative evaluation of the mitotic index (MI) of gastrointestinal stromal tumors (GISTs) is the basis of individualized treatment, but the accuracy of conventional preoperative imaging is limited. The aim of this study was to develop a predictive model based on multiparametric MRI for preoperative MI prediction. Methods A total of 112 patients pathologically diagnosed with GIST were enrolled. The dataset was subdivided into development (n = 81) and test (n = 31) sets based on the time of diagnosis. Using T2-weighted imaging (T2WI) and the apparent diffusion coefficient (ADC) map, a convolutional neural network (CNN)-based classifier was developed for MI prediction; it takes a hybrid approach based on 2D tumor images and radiomics features from the 3D tumor shape. The trained model was tested on an internal test set and then comprehensively compared with a conventional ResNet, a shape-radiomics classifier, and an age-plus-diameter classifier. Results The hybrid model showed good MI prediction at the image level; the area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), and accuracy in the test set were 0.947 (95% confidence interval [CI]: 0.927-0.968), 0.964 (95% CI: 0.930-0.978), and 90.8% (95% CI: 88.0-93.0), respectively. Averaging the probabilities from multiple samples per patient also gave good patient-level performance, with AUROC, AUPRC, and accuracy of 0.930 (95% CI: 0.828-1.000), 0.941 (95% CI: 0.792-1.000), and 93.6% (95% CI: 79.3-98.2) in the test set, respectively. Discussion The deep learning-based hybrid model has the potential to be a good tool for the preoperative, non-invasive prediction of MI in GIST patients.
Affiliation(s)
- Linsha Yang
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Dan Du
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Tao Zheng
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Lanxiang Liu
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Zhanqiu Wang
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Juan Du
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Huiling Yi
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Yujie Cui
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China
- Defeng Liu
- Medical Imaging Center, The First Hospital of Qinhuangdao, Qinhuangdao, China; Correspondence: Defeng Liu
- Yuan Fang
- Medical Imaging Center, Chongqing Yubei District People's Hospital, Chongqing, China; Correspondence: Yuan Fang
17
di Noia C, Grist JT, Riemer F, Lyasheva M, Fabozzi M, Castelli M, Lodi R, Tonon C, Rundo L, Zaccagna F. Predicting Survival in Patients with Brain Tumors: Current State-of-the-Art of AI Methods Applied to MRI. Diagnostics (Basel) 2022; 12:2125. [PMID: 36140526] [PMCID: PMC9497964] [DOI: 10.3390/diagnostics12092125]
Abstract
Given growing clinical needs, Artificial Intelligence (AI) techniques have in recent years increasingly been used to define the best approaches for survival assessment and prediction in patients with brain tumors. Advances in computational resources and the collection of (mainly public) databases have promoted this rapid development. This narrative review aimed to survey current applications of AI in predicting survival in patients with brain tumors, with a focus on Magnetic Resonance Imaging (MRI). An extensive search was performed on PubMed and Google Scholar using a Boolean query based on MeSH terms, restricted to the period between 2012 and 2022. Fifty studies were selected, mainly based on Machine Learning (ML), Deep Learning (DL), radiomics-based methods, and methods that exploit traditional imaging techniques for survival assessment. We focused on two distinct tasks: the first is classification of subjects into survival classes (short- and long-term, or short-, mid-, and long-term) to stratify patients into distinct groups; the second is quantification, in days or months, of the individual survival interval. Our survey found excellent state-of-the-art methods for the first task, with accuracy up to ∼98%. The second task appears to be the more challenging, but state-of-the-art techniques showed promising results, albeit with limitations, with a C-index up to ∼0.91. In conclusion, the available computational methods perform differently depending on the task, and the choice of the best one is not clear-cut and depends on many aspects. Unequivocally, features derived from quantitative imaging have been shown to be advantageous for AI applications, including survival prediction. This evidence motivates further research into AI-powered methods for survival prediction in patients with brain tumors, in particular using the wealth of information provided by quantitative MRI techniques.
Affiliation(s)
- Christian di Noia
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- James T. Grist
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford OX1 3PT, UK
- Department of Radiology, Oxford University Hospitals NHS Foundation Trust, Oxford OX3 9DU, UK
- Oxford Centre for Clinical Magnetic Research Imaging, University of Oxford, Oxford OX3 9DU, UK
- Institute of Cancer and Genomic Sciences, University of Birmingham, Birmingham B15 2SY, UK
- Frank Riemer
- Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology, Haukeland University Hospital, N-5021 Bergen, Norway
- Maria Lyasheva
- Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Miriana Fabozzi
- Centro Medico Polispecialistico (CMO), 80058 Torre Annunziata, Italy
- Mauro Castelli
- NOVA Information Management School (NOVA IMS), Universidade NOVA de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum—University of Bologna, 40125 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, 40139 Bologna, Italy
- Correspondence: Tel.: +39-0514969951
18
Zhan B, Zhou L, Li Z, Wu X, Pu Y, Zhou J, Wang Y, Shen D. D2FE-GAN: Decoupled dual feature extraction based GAN for MRI image synthesis. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109362]
19
Ghareeb WM, Draz E, Madbouly K, Hussein AH, Faisal M, Elkashef W, Emile MH, Edelhamre M, Kim SH, Emile SH. Deep Neural Network for the Prediction of KRAS Genotype in Rectal Cancer. J Am Coll Surg 2022; 235:482-493. [PMID: 35972169] [DOI: 10.1097/xcs.0000000000000277]
Abstract
BACKGROUND KRAS mutation can alter the treatment plan after resection of colorectal cancer. Despite its importance, the KRAS status of many patients remains unchecked because of high cost and limited resources. This study developed a deep neural network (DNN) to predict the KRAS genotype from hematoxylin and eosin (H&E)-stained histopathological images. STUDY DESIGN Three DNNs were created (KRAS_Mob, KRAS_Shuff, and KRAS_Ince) using the structural backbones of the MobileNet, ShuffleNet, and Inception networks, respectively. The Cancer Genome Atlas was screened to extract 49,684 image tiles, which were used for deep learning and internal validation. An independent cohort of 43,032 image tiles was used for external validation. Performance was compared with that of humans, and a virtual cost-saving analysis was done. RESULTS The KRAS_Mob network (area under the receiver operating characteristic curve [AUC] 0.8, 95% CI 0.71 to 0.89) was the best-performing model for predicting the KRAS genotype, followed by the KRAS_Shuff (AUC 0.73, 95% CI 0.62 to 0.84) and KRAS_Ince (AUC 0.71, 95% CI 0.6 to 0.82) networks. Combining the KRAS_Mob and KRAS_Shuff networks as a double-prediction approach improved performance further. The accuracy of the KRAS_Mob network surpassed that of two independent pathologists (AUC 0.79 [95% CI 0.64 to 0.93] vs 0.51 [95% CI 0.34 to 0.69] and 0.51 [95% CI 0.34 to 0.69]; p < 0.001 for all comparisons). CONCLUSION The DNN has the potential to predict the KRAS genotype directly from H&E-stained histopathological slide images. As an algorithmic screening method to prioritize patients for laboratory confirmation, such a model could reduce the number of patients screened, yielding significant test-related time and economic savings.
Affiliation(s)
- Waleed M Ghareeb
- From the Gastrointestinal Surgery Unit (Ghareeb, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Laboratory of Applied Artificial Intelligence in Medical Disciplines (Ghareeb, Draz, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Eman Draz
- Laboratory of Applied Artificial Intelligence in Medical Disciplines (Ghareeb, Draz, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Department of Surgery, and Department of Human Anatomy and Embryology (Draz), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Key Laboratory of Stem Cell Engineering and Regenerative Medicine, Department of Human Anatomy and Histoembryology, Fujian Medical University, 350122, Fujian Province, Fuzhou City, P.R. China (Draz)
- Khaled Madbouly
- Colorectal Surgery Unit, Alexandria University, Faculty of Medicine, Alexandria, Egypt (Madbouly)
- Ahmed H Hussein
- From the Gastrointestinal Surgery Unit (Ghareeb, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Laboratory of Applied Artificial Intelligence in Medical Disciplines (Ghareeb, Draz, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Mohammed Faisal
- Surgical Oncology Unit (Faisal), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- General Surgery Department, Sahlgrenska University Hospital, Gothenburg, Sweden (Faisal)
- Wagdi Elkashef
- Department of Pathology, Faculty of Medicine (Elkashef, M Hany Emile), Mansoura University, Mansoura, Egypt
- Mona Hany Emile
- Department of Pathology, Faculty of Medicine (Elkashef, M Hany Emile), Mansoura University, Mansoura, Egypt
- Marcus Edelhamre
- Department of Surgery, Helsingborg Hospital, University of Lund, 25187 Helsingborg, Sweden (Edelhamre)
- Seon Hahn Kim
- From the Gastrointestinal Surgery Unit (Ghareeb, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Laboratory of Applied Artificial Intelligence in Medical Disciplines (Ghareeb, Draz, Hussein), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Surgical Oncology Unit (Faisal), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Department of Surgery, and Department of Human Anatomy and Embryology (Draz), Faculty of Medicine, Suez Canal University Hospitals, Ismailia, Egypt
- Key Laboratory of Stem Cell Engineering and Regenerative Medicine, Department of Human Anatomy and Histoembryology, Fujian Medical University, 350122, Fujian Province, Fuzhou City, P.R. China (Draz)
- Colorectal Surgery Unit, Alexandria University, Faculty of Medicine, Alexandria, Egypt (Madbouly)
- General Surgery Department, Sahlgrenska University Hospital, Gothenburg, Sweden (Faisal)
- Department of Pathology, Faculty of Medicine (Elkashef, M Hany Emile), Mansoura University, Mansoura, Egypt
- Colorectal Surgery Unit, General Surgery Department (S Hany Emile), Mansoura University, Mansoura, Egypt
- Department of Surgery, Helsingborg Hospital, University of Lund, 25187 Helsingborg, Sweden (Edelhamre)
- Sameh Hany Emile
- Colorectal Surgery Unit, General Surgery Department (S Hany Emile), Mansoura University, Mansoura, Egypt
20
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348] [DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting deep learning, have been extensively employed for mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. Increasing computational power and fast-improving AI algorithms have empowered numerous computer-based systems that apply mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, wide clinical application of these systems is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to give new researchers in the field, as well as radiologists, an overview of the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
Collapse
|
21. Li ZC, Yan J, Zhang S, Liang C, Lv X, Zou Y, Zhang H, Liang D, Zhang Z, Chen Y. Glioma survival prediction from whole-brain MRI without tumor segmentation using deep attention network: a multicenter study. Eur Radiol 2022; 32:5719-5729. [PMID: 35278123] [DOI: 10.1007/s00330-022-08640-7]
Abstract
OBJECTIVES To develop and validate a deep learning model for predicting overall survival from whole-brain MRI without tumor segmentation in patients with diffuse gliomas. METHODS In this multicenter retrospective study, two deep learning models were built for survival prediction from MRI: a DeepRisk model built from whole-brain MRI, and an original ResNet model built from expert-segmented tumor images. Both models were developed using a training dataset (n = 935) and an internal tuning dataset (n = 156) and tested on two external test datasets (n = 194 and 150) and a TCIA dataset (n = 121). C-index, integrated Brier score (IBS), prediction error curves, and calibration curves were used to assess model performance. RESULTS In total, 1556 patients were enrolled (age, 49.0 ± 13.1 years; 830 male). The DeepRisk score was an independent predictor and could stratify patients in each test dataset into three risk subgroups. The IBS and C-index for DeepRisk were 0.14 and 0.83 in external test dataset 1, 0.15 and 0.80 in external test dataset 2, and 0.16 and 0.77 in the TCIA dataset, respectively, which were comparable with those for the original ResNet. The AUCs at 6, 12, 24, 26, and 48 months for DeepRisk ranged between 0.77 and 0.94. Combining the DeepRisk score with clinicomolecular factors resulted in a nomogram with better calibration and classification accuracy (net reclassification improvement 0.69, p < 0.001) than the clinical nomogram. CONCLUSIONS DeepRisk, which obviates the need for tumor segmentation, can predict glioma survival from whole-brain MRI and offers incremental prognostic value. KEY POINTS • DeepRisk can predict overall survival directly from whole-brain MRI without tumor segmentation. • DeepRisk achieves accuracy in survival prediction comparable to a deep learning model built using expert-segmented tumor images. • DeepRisk has independent and incremental prognostic value over existing clinical parameters and IDH mutation status.
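The C-index reported above can be computed from paired comparisons of predicted risk against observed, possibly censored survival times. Below is a minimal sketch of Harrell's concordance index; the toy cohort and variable names are illustrative assumptions, not data from the study.

```python
import numpy as np

def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the patient with the shorter
    observed time actually experienced the event. The pair counts as
    concordant when that shorter-lived patient also has the higher
    predicted risk; ties in risk contribute 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: shorter survival paired with higher risk score.
times = np.array([5.0, 10.0, 12.0, 3.0])
events = np.array([1, 1, 0, 1])        # 0 = censored observation
scores = np.array([0.8, 0.4, 0.1, 0.9])
c = harrell_c_index(times, events, scores)
```

With every comparable pair ranked correctly, the toy cohort yields a C-index of 1.0; a random risk score would hover around 0.5.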
Affiliation(s)
- Zhi-Cheng Li: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Jing Yan: Department of MRI, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Shenghai Zhang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chaofeng Liang: Department of Neurosurgery, The 3rd Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Xiaofei Lv: Department of Medical Imaging, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Yan Zou: Department of Radiology, The 3rd Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Huailing Zhang: School of Information Engineering, Guangdong Medical University, Dongguan, China
- Dong Liang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; National Innovation Center for Advanced Medical Devices, Shenzhen, China
- Zhenyu Zhang: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, 1 Jianshe Dong Road, Zhengzhou, 450052, Henan, China
- Yinsheng Chen: Department of Neurosurgery/Neuro-oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng East Road, Guangzhou, 510060, China
22. Zhang L, Zhong L, Li C, Zhang W, Hu C, Dong D, Liu Z, Zhou J, Tian J. Knowledge-guided multi-task attention network for survival risk prediction using multi-center computed tomography images. Neural Netw 2022; 152:394-406. [DOI: 10.1016/j.neunet.2022.04.027]
23. Tang Z, Cao H, Xu Y, Yang Q, Wang J, Zhang H. Overall survival time prediction for glioblastoma using multimodal deep KNN. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6e25]
Abstract
Glioblastoma (GBM) is a severe malignant brain tumor with a poor prognosis, and overall survival (OS) time prediction is of great clinical value for customized treatment. Recently, many deep learning (DL) based methods have been proposed, and most of them build deep networks that directly map pre-operative images of patients to the OS time. However, such end-to-end prediction is sensitive to data inconsistency and noise. In this paper, inspired by the fact that clinicians usually evaluate patient prognosis according to previously encountered similar cases, we propose a novel multimodal deep KNN based OS time prediction method. Specifically, instead of end-to-end prediction, for each input patient, our method first searches for the K nearest patients with known OS time in a learned metric space, and the final OS time of the input patient is jointly determined by those K nearest patients, which is robust to data inconsistency and noise. Moreover, to take advantage of multiple imaging modalities, a new inter-modality loss is introduced to encourage learning complementary features from different modalities. An in-house single-center dataset containing multimodal MR brain images of 78 GBM patients is used to evaluate our method. In addition, to demonstrate that our method is not limited to GBM, a public multi-center dataset (BRATS2019) containing 211 patients with low- and high-grade gliomas is also used in our experiment. Benefiting from the deep KNN and the inter-modality loss, our method outperforms all methods under evaluation on both datasets. To the best of our knowledge, this is the first work to predict the OS time of GBM patients with a KNN strategy under the DL framework.
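The neighbor-based prediction step described in this abstract can be illustrated in a few lines. The sketch below assumes the network has already produced an embedding for each patient; the 2-D features, OS times, and inverse-distance weighting are illustrative choices, not details taken from the paper.

```python
import numpy as np

def knn_os_prediction(query_feat, train_feats, train_os, k=3):
    """Predict OS time as a distance-weighted average over the K
    nearest training patients in the (learned) embedding space.

    In the paper the embedding comes from a trained deep network;
    here the feature vectors are given directly for illustration.
    """
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; epsilon guards against zero distance.
    w = 1.0 / (dists[nearest] + 1e-8)
    return float(np.sum(w * train_os[nearest]) / np.sum(w))

# Toy example: 5 training patients with 2-D embeddings, OS in months.
train_feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                        [5.0, 5.0], [6.0, 5.0]])
train_os = np.array([12.0, 14.0, 13.0, 40.0, 44.0])
query = np.array([0.05, 0.05])
pred = knn_os_prediction(query, train_feats, train_os, k=3)
```

Because the query lies in the short-survival cluster, the prediction is pulled toward its three near neighbors (roughly 13 months) and is unaffected by the distant long-survival patients, which is the robustness property the abstract argues for.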
24. Jian A, Liu S, Di Ieva A. Artificial Intelligence for Survival Prediction in Brain Tumors on Neuroimaging. Neurosurgery 2022; 91:8-26. [PMID: 35348129] [DOI: 10.1227/neu.0000000000001938]
Abstract
Survival prediction for patients affected by brain tumors provides essential information to guide surgical planning, adjuvant treatment selection, and patient counseling. Current reliance on clinical factors, such as the Karnofsky Performance Status Scale, and on simplistic radiological characteristics is, however, inadequate for survival prediction in tumors such as glioma, which demonstrate molecular and clinical heterogeneity with variable survival outcomes. Advances in artificial intelligence have afforded powerful tools to capture a large number of hidden high-dimensional imaging features that reflect abundant information about tumor structure and physiology. Here, we provide an overview of the current literature applying computational analysis tools such as radiomics and machine learning methods to the pipeline of image preprocessing, tumor segmentation, feature extraction, and classifier construction to establish survival prediction models based on neuroimaging. We also discuss challenges relating to the development and evaluation of such models and explore ethical issues surrounding the future use of machine learning predictions.
Affiliation(s)
- Anne Jian: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; Royal Melbourne Hospital, Melbourne, Australia
- Sidong Liu: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia; Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Antonio Di Ieva: Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
25. Ai L, Bai W, Li M. TDABNet: Three-directional attention block network for the determination of IDH status in low- and high-grade gliomas from MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103574]
26. Cho HH, Kim CK, Park H. Overview of radiomics in prostate imaging and future directions. Br J Radiol 2022; 95:20210539. [PMID: 34797688] [PMCID: PMC8978251] [DOI: 10.1259/bjr.20210539]
Abstract
Recent advancements in imaging technology and analysis methods have led to an analytic framework known as radiomics. This framework extracts comprehensive high-dimensional features from imaging data and performs data mining to build analytical models for improved decision support. Its features span many categories, including texture and shape; thus, it can provide abundant information for precision medicine. Many studies of prostate radiomics have shown promising results in the assessment of pathological features, prediction of treatment response, and stratification of risk groups. Herein, we aimed to provide a general overview of radiomics procedures, discuss technical issues, explain various clinical applications, and suggest future research directions, especially for prostate imaging.
Affiliation(s)
- Hwan-Ho Cho: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea
- Chan Kyo Kim: Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hyunjin Park: Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
27. Ertl-Wagner B, Khalvati F. The data behind the image: Deep learning and its potential impact in neuro-oncological imaging. Neuro Oncol 2022; 24:300-301. [PMID: 34695189] [PMCID: PMC8804883] [DOI: 10.1093/neuonc/noab249]
Affiliation(s)
- Birgit Ertl-Wagner: Division of Neuroradiology, Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, Ontario, Canada; Neurosciences and Mental Health, SickKids Research Institute, Toronto, Ontario, Canada; Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
- Farzad Khalvati: Division of Neuroradiology, Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto, Ontario, Canada; Neurosciences and Mental Health, SickKids Research Institute, Toronto, Ontario, Canada; Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada
28. Huang J, Shlobin NA, DeCuypere M, Lam SK. Deep Learning for Outcome Prediction in Neurosurgery: A Systematic Review of Design, Reporting, and Reproducibility. Neurosurgery 2022; 90:16-38. [PMID: 34982868] [DOI: 10.1227/neu.0000000000001736]
Abstract
Deep learning (DL) is a powerful machine learning technique that has increasingly been used to predict surgical outcomes. However, the large quantity of data required and the lack of model interpretability represent substantial barriers to the validity and reproducibility of DL models. The objective of this study was to systematically review the characteristics of DL studies involving neurosurgical outcome prediction and to assess their bias and reporting quality. A literature search using the PubMed, Scopus, and Embase databases identified 1949 records, of which 35 studies were included. Of these, 32 (91%) developed and validated a DL model while 3 (9%) validated a pre-existing model. The most commonly represented subspecialty areas were oncology (16 of 35, 46%), spine (8 of 35, 23%), and vascular (6 of 35, 17%). Risk of bias was low in 18 studies (51%), unclear in 5 (14%), and high in 12 (34%), most commonly because of data quality deficiencies. Adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) reporting standards was low, with a median of 12 TRIPOD items (39%) per study not reported. Model transparency was severely limited because code was provided in only 3 studies (9%) and final models in 2 (6%). With the exception of public databases, no study data sets were readily available. No studies described DL models as ready for clinical use. The use of DL for neurosurgical outcome prediction remains nascent. Lack of appropriate data sets poses a major concern for bias. Although studies have demonstrated promising results, greater transparency in model development and reporting is needed to facilitate reproducibility and validation.
Affiliation(s)
- Jonathan Huang: Ann and Robert H. Lurie Children's Hospital, Division of Pediatric Neurosurgery, Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA
29. Cho HH, Lee HY, Kim E, Lee G, Kim J, Kwon J, Park H. Radiomics-guided deep neural networks stratify lung adenocarcinoma prognosis from CT scans. Commun Biol 2021; 4:1286. [PMID: 34773070] [PMCID: PMC8590002] [DOI: 10.1038/s42003-021-02814-7]
Abstract
Deep learning (DL) is a breakthrough technology for medical imaging, but it comes with high sample-size requirements and interpretability issues. Using pretrained DL models through a radiomics-guided approach, we propose a methodology for stratifying the prognosis of lung adenocarcinomas based on pretreatment CT. Our approach allows us to apply DL with smaller sample-size requirements and enhanced interpretability. Baseline radiomics and DL models for the prognosis of lung adenocarcinomas were developed and tested using a local cohort (n = 617). The DL models were further tested in an external validation cohort (n = 70). The local cohort was divided into training and test cohorts. A radiomics risk score (RRS) was developed using Cox-LASSO. Three pretrained DL networks derived from natural images were used to extract the DL features. The features were further guided using radiomics by retaining those DL features whose correlations with the radiomics features were high and whose Bonferroni-corrected p-values were low. The retained DL features were subjected to Cox-LASSO when constructing DL risk scores (DRS). The risk groups stratified by the RRS and DRS showed a significant difference in the training, testing, and validation cohorts. The DL features were interpreted using existing radiomics features, and the texture features explained the DL features well.
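The retention criterion described above (keep a DL feature if it correlates strongly with at least one radiomics feature under a Bonferroni-corrected significance test) can be sketched as follows. The thresholds and the synthetic feature matrices are illustrative assumptions, not the paper's actual values.

```python
import numpy as np
from scipy.stats import pearsonr

def radiomics_guided_filter(dl_feats, rad_feats, r_thresh=0.5, alpha=0.05):
    """Return indices of DL feature columns to retain.

    A column is kept when it correlates with at least one radiomics
    column with |r| >= r_thresh and a Bonferroni-corrected p-value
    below alpha (i.e. raw p * number_of_tests < alpha).
    """
    n_dl, n_rad = dl_feats.shape[1], rad_feats.shape[1]
    n_tests = n_dl * n_rad  # Bonferroni correction factor
    keep = []
    for i in range(n_dl):
        for j in range(n_rad):
            r, p = pearsonr(dl_feats[:, i], rad_feats[:, j])
            if abs(r) >= r_thresh and p * n_tests < alpha:
                keep.append(i)
                break  # one strong, significant link suffices
    return keep

# Synthetic demo: DL feature 0 tracks a radiomics feature; feature 1 is noise.
rng = np.random.default_rng(0)
rad = rng.normal(size=(60, 2))
dl = np.column_stack([rad[:, 0] + 0.05 * rng.normal(size=60),
                      rng.normal(size=60)])
kept = radiomics_guided_filter(dl, rad)
```

The surviving columns are those the paper would then feed into Cox-LASSO to build the DL risk score; filtering first keeps the retained features interpretable in radiomics terms.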
Affiliation(s)
- Hwan-Ho Cho: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Ho Yun Lee: Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Eunjin Kim: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Geewon Lee: Department of Radiology and Medical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, Republic of Korea
- Jonghoon Kim: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Junmo Kwon: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Republic of Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea
- Hyunjin Park: Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
30. Fu Y, Xue P, Li N, Zhao P, Xu Z, Ji H, Zhang Z, Cui W, Dong E. Fusion of 3D lung CT and serum biomarkers for diagnosis of multiple pathological types on pulmonary nodules. Comput Methods Programs Biomed 2021; 210:106381. [PMID: 34496322] [DOI: 10.1016/j.cmpb.2021.106381]
Abstract
BACKGROUND AND OBJECTIVE Current research on pulmonary nodules has mainly focused on the binary classification of benign versus malignant nodules. However, in clinical applications, it is not enough to judge only whether pulmonary nodules are benign or malignant. In this paper, we proposed a fusion model based on the Lung Information Dataset Containing 3D CT Images and Serum Biomarkers (LIDCCISB) we constructed to accurately diagnose pulmonary nodule types among squamous cell carcinoma, adenocarcinoma, inflammation, and other benign diseases. METHODS Using the single-modal information of lung 3D CT images and the single-modal information of Lung Tumor Biomarkers (LTBs) in LIDCCISB, a Multi-resolution 3D Multi-classification deep learning model (Mr-Mc) and a Multi-Layer Perceptron machine learning model (MLP) were constructed for diagnosing multiple pathological types of pulmonary nodules, respectively. To comprehensively use the dual-modal information of CT images and LTBs, we used transfer learning to fuse Mr-Mc and MLP and constructed a multimodal information fusion model that could classify multiple pathological types of benign and malignant pulmonary nodules. RESULTS Experiments showed that the constructed Mr-Mc model achieved an average accuracy of 0.805 and the MLP model an average accuracy of 0.887. The fusion model was verified on a dataset containing 64 samples and achieved an average accuracy of 0.906. CONCLUSIONS This is the first study to simultaneously use CT images and LTBs to diagnose multiple pathological types of benign and malignant pulmonary nodules, and the experiments showed that the approach is well suited to practical clinical applications.
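A minimal late-fusion head of the kind this abstract describes: image-derived features and serum biomarker values are concatenated and passed through a small MLP to yield probabilities over the four pathological classes. All dimensions, weights, and biomarker names below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_fusion_forward(img_feat, biomarkers, w1, b1, w2, b2):
    """Late-fusion classifier: concatenate image features with serum
    biomarker values, then apply a one-hidden-layer MLP with softmax.
    The weights here are random stand-ins for trained parameters.
    """
    x = np.concatenate([img_feat, biomarkers])
    h = np.maximum(0.0, w1 @ x + b1)      # ReLU hidden layer
    logits = w2 @ h + b2
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()                # probabilities over 4 classes

img_feat = rng.normal(size=16)    # e.g. pooled features from a 3D CNN
biomarkers = rng.normal(size=4)   # e.g. four normalized serum markers
w1, b1 = rng.normal(size=(8, 20)) * 0.1, np.zeros(8)
w2, b2 = rng.normal(size=(4, 8)) * 0.1, np.zeros(4)
probs = mlp_fusion_forward(img_feat, biomarkers, w1, b1, w2, b2)
```

Concatenation-then-MLP is the simplest fusion design; the paper's transfer-learning fusion of Mr-Mc and MLP is more elaborate, but the shape of the data flow (two modalities in, one class distribution out) is the same.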
Affiliation(s)
- Yu Fu: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Peng Xue: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Ning Li: Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
- Peng Zhao: Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
- Zhuodong Xu: Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
- Huizhong Ji: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Zhili Zhang: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Wentao Cui: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
- Enqing Dong: School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
31. Computerized Tomography Image Feature under Convolutional Neural Network Algorithm Evaluated for Therapeutic Effect of Clarithromycin Combined with Salmeterol/Fluticasone on Chronic Obstructive Pulmonary Disease. J Healthc Eng 2021; 2021:8563181. [PMID: 34381586] [PMCID: PMC8352704] [DOI: 10.1155/2021/8563181]
Abstract
This study explored the use of a convolutional neural network (CNN) for the classification and recognition of computerized tomography (CT) images of chronic obstructive pulmonary disease (COPD) and the therapeutic effect of clarithromycin combined with salmeterol/fluticasone. First, the clinical data of COPD patients treated in hospital from September 2018 to December 2020 were collected, along with CT and X-ray images. CT-CNN and X-ray-CNN single-modal models were constructed based on the LeNet-5 model. A randomized fusion algorithm was introduced to construct a fused CNN model for the diagnosis of COPD patients, and the recognition performance of the model was verified. Subsequently, three-dimensional reconstruction of the patients' bronchi was performed using the classified CT images, and changes in quantitative CT parameters in COPD patients were compared and analyzed. Finally, COPD patients were treated with salmeterol/fluticasone alone (COPD-C) or combined with clarithromycin (COPD-T). In addition, differences in patients' lung function indexes, blood gas indexes, St. George's Respiratory Questionnaire (SGRQ) scores, and the number of acute exacerbations (AECOPD) before and after treatment were evaluated. The results showed that the randomized fusion model, across different iteration counts and batch sizes, always had the highest recognition rate, sensitivity, and specificity compared with the two single-modal CNN models, but it also had a longer training time. After CT images were used to quantitatively evaluate changes in the patients' bronchi, it was found that the area of the upper and lower lung lobes of the affected side and the ratio of the wall area to the bronchial area changed significantly in COPD patients. The lung function, blood gas indexes, and SGRQ scores of COPD-T patients were significantly improved compared with the COPD-C group (P < 0.05), but there was no considerable difference in AECOPD (P > 0.05). In summary, the randomized fusion-based CNN model can improve the recognition rate of COPD, and salmeterol/fluticasone combined with clarithromycin therapy can significantly improve the clinical treatment effect in COPD patients.
32. DeepPrognosis: Preoperative prediction of pancreatic cancer survival and surgical margin via comprehensive understanding of dynamic contrast-enhanced CT imaging and tumor-vascular contact parsing. Med Image Anal 2021; 73:102150. [PMID: 34303891] [DOI: 10.1016/j.media.2021.102150]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers and carries a dismal prognosis, with a five-year survival rate of about 10%. Surgery remains the best option for a potential cure in patients who are evaluated as eligible for initial resection of PDAC. However, outcomes vary significantly even among resected patients of the same cancer stage who received similar treatments. Accurate quantitative preoperative prediction for primary resectable PDACs is thus highly desired for personalized cancer treatment. Nevertheless, very few automated methods yet fully exploit contrast-enhanced computed tomography (CE-CT) imaging for PDAC prognosis assessment, even though CE-CT plays a critical role in PDAC staging and resectability evaluation. In this work, we propose a novel deep neural network model for the survival prediction of primary resectable PDAC patients, named the 3D Contrast-Enhanced Convolutional Long Short-Term Memory network (CE-ConvLSTM), which can derive tumor attenuation signatures or patterns from patient CE-CT imaging studies. Tumor-vascular relationships, which might indicate resection margin status, have also been shown to hold strong relationships with the overall survival of PDAC patients. To capture such relationships, we propose a self-learning approach for automated pancreas and peripancreatic anatomy segmentation that does not require any annotations on our PDAC datasets. We then employ a multi-task convolutional neural network (CNN) to accomplish both survival outcome and margin prediction, where the network benefits from learning resection-margin-related image features to improve survival prediction. Our presented framework improves overall survival prediction performance compared with existing state-of-the-art survival analysis approaches. The new staging biomarker integrating both the proposed risk signature and margin prediction adds evident value when combined with the current clinical staging system.
33. Weakly supervised deep learning for determining the prognostic value of 18F-FDG PET/CT in extranodal natural killer/T cell lymphoma, nasal type. Eur J Nucl Med Mol Imaging 2021; 48:3151-3161. [PMID: 33611614] [PMCID: PMC7896833] [DOI: 10.1007/s00259-021-05232-3]
Abstract
Purpose To develop a weakly supervised deep learning (WSDL) method that could utilize incomplete/missing survival data to predict the prognosis of extranodal natural killer/T cell lymphoma, nasal type (ENKTL) based on pretreatment 18F-FDG PET/CT results. Methods One hundred and sixty-seven patients with ENKTL who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. Eighty-four patients were followed up for at least 2 years (training set = 64, test set = 20). A WSDL method was developed to enable the integration of the remaining 83 patients with incomplete/missing follow-up information into the training set. To test generalization, these data were derived from three types of scanners. A prediction similarity index (PSI) was derived from the deep learning features of the images. Its discriminative ability was calculated and compared with that of a conventional deep learning (CDL) method. Univariate and multivariate analyses helped explore the significance of the PSI and clinical features. Results The PSI achieved area under the curve scores of 0.9858 and 0.9946 (training set) and 0.8750 and 0.7344 (test set) in the prediction of progression-free survival (PFS) with the WSDL and CDL methods, respectively. A PSI threshold of 1.0 could significantly differentiate prognosis. In the test set, WSDL and CDL achieved prediction sensitivity, specificity, and accuracy of 87.50% and 62.50%, 83.33% and 83.33%, and 85.00% and 75.00%, respectively. Multivariate analysis confirmed the PSI to be an independent significant predictor of PFS with both methods. Conclusion The WSDL-based framework was more effective for extracting 18F-FDG PET/CT features and predicting the prognosis of ENKTL than the CDL method.
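The test-set operating characteristics quoted above follow directly from a confusion matrix. The sketch below reconstructs a 20-patient split consistent with the reported WSDL numbers (sensitivity 87.50%, specificity 83.33%, accuracy 85.00%); the exact label assignment is an illustrative assumption.

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

# 8 progression events (7 caught, 1 missed) and
# 12 non-events (10 correct, 2 false alarms) give the reported figures.
y_true = [1] * 8 + [0] * 12
y_pred = [1] * 7 + [0] + [0] * 10 + [1] * 2
sens, spec, acc = binary_metrics(y_true, y_pred)
# sens = 7/8 = 0.875, spec = 10/12 ≈ 0.8333, acc = 17/20 = 0.85
```

Working backward from reported percentages to integer counts like this is a quick sanity check on any published confusion-matrix-derived metric.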
34. Ratnam NM, Frederico SC, Gonzalez JA, Gilbert MR. Clinical correlates for immune checkpoint therapy: significance for CNS malignancies. Neurooncol Adv 2021; 3:vdaa161. [PMID: 33506203] [PMCID: PMC7813206] [DOI: 10.1093/noajnl/vdaa161]
Abstract
Immune checkpoint inhibitors (ICIs) have revolutionized the field of cancer immunotherapy. Most commonly, inhibitors of PD-1 and CTLA4 are used, having received approval for the treatment of many cancers such as melanoma, non-small-cell lung carcinoma, and leukemia. In contrast, to date, clinical studies conducted in patients with CNS malignancies have not demonstrated promising results. However, patients with CNS malignancies have several underlying factors, such as treatment with supportive medications like corticosteroids and cancer therapies including radiation and chemotherapy, that may negatively impact response to ICIs. Although many clinical trials have been conducted with ICIs, measures that reproducibly and reliably indicate that treatment has evoked an effective immune response have not been fully developed. In this article, we review the history of ICI therapy and the correlative biology that has been performed in the clinical trials testing these therapies in different cancers. Our aim is to help provide an overview of the assays that may be used to gauge immunologic response. This may be particularly germane for CNS tumors, where there is currently a great need for predictive biomarkers that would allow for the selection of patients with the highest likelihood of responding.
Affiliation(s)
- Nivedita M Ratnam
- Neuro-Oncology Branch, CCR, NCI, National Institutes of Health, Bethesda, Maryland, USA
- Stephen C Frederico
- Neuro-Oncology Branch, CCR, NCI, National Institutes of Health, Bethesda, Maryland, USA
- Javier A Gonzalez
- Neuro-Oncology Branch, CCR, NCI, National Institutes of Health, Bethesda, Maryland, USA
- Mark R Gilbert
- Neuro-Oncology Branch, CCR, NCI, National Institutes of Health, Bethesda, Maryland, USA
35
Chen W, Wang X, Duan H, Zhang X, Dong T, Nie S. [Application of deep learning in cancer prognosis prediction models]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2020; 37:918-929. [PMID: 33140618 PMCID: PMC10320539 DOI: 10.7507/1001-5515.201909066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Indexed: 11/03/2022]
Abstract
In recent years, deep learning has provided a new approach to cancer prognosis analysis. This paper systematically reviews the latest research on deep learning models for cancer prognosis and analyzes the strengths and weaknesses of the relevant methods, providing a basis for further research. First, the construction approach and performance evaluation indices of deep learning cancer prognosis models are clarified. Second, the basic network structures are introduced, and the data types, data volumes, and specific network architectures, together with their merits and demerits, are discussed. Then, the mainstream methods for building deep learning cancer prognosis models are verified and the experimental results are analyzed. Finally, the challenges and future research directions in this field are summarized. Compared with previous models, deep learning cancer prognosis models can better improve prognosis prediction for cancer patients. Future work should continue to explore deep learning for cancer recurrence rates, treatment planning, and drug efficacy evaluation, and fully exploit its value and potential in cancer prognosis modeling, so as to establish efficient and accurate prognosis models and realize the goal of precision medicine.
Affiliation(s)
- Wen Chen
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Xu Wang
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Huihong Duan
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Xiaobing Zhang
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Ting Dong
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Shengdong Nie
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
36
Lu Y, Patel M, Natarajan K, Ughratdar I, Sanghera P, Jena R, Watts C, Sawlani V. Machine learning-based radiomic, clinical and semantic feature analysis for predicting overall survival and MGMT promoter methylation status in patients with glioblastoma. Magn Reson Imaging 2020; 74:161-170. [PMID: 32980505 DOI: 10.1016/j.mri.2020.09.017] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 08/27/2020] [Accepted: 09/08/2020] [Indexed: 12/25/2022]
Abstract
INTRODUCTION Survival varies in patients with glioblastoma due to intratumoral heterogeneity, and radiomic/imaging biomarkers have the potential to demonstrate this heterogeneity. The objective was to combine radiomic, semantic and clinical features to improve prediction of overall survival (OS) and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status from pre-operative MRI in patients with glioblastoma. METHODS A retrospective study of 181 MRI studies (mean age 58 ± 13 years, mean OS 497 ± 354 days) was performed in patients with histopathology-proven glioblastoma. Tumour mass, contrast enhancement and necrosis were segmented from volumetric contrast-enhanced T1-weighted imaging (CE-T1WI). A total of 333 radiomic features were extracted, and 16 Visually Accessible Rembrandt Images (VASARI) features were evaluated by two experienced neuroradiologists. Top radiomic, VASARI and clinical features were used to build machine learning models to predict MGMT status, and all features including MGMT status were used to build Cox proportional hazards regression (Cox) and random survival forest (RSF) models for OS prediction. RESULTS The optimal cut-off value for the MGMT promoter methylation index was 12.75%; 42 radiomic features exhibited significant differences between the high- and low-methylation groups. However, the accuracy of models combining radiomic, VASARI and clinical features for MGMT status prediction varied between 45% and 67%. For OS prediction, the RSF model based on clinical, VASARI and CE radiomic features achieved the best performance, with an average iAUC of 96.2 ± 1.7 and C-index of 90.0 ± 0.3. CONCLUSIONS VASARI features in combination with clinical and radiomic features from the enhancing tumour show promise for predicting OS with high accuracy in patients with glioblastoma from pre-operative volumetric CE-T1WI.
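The C-index used to score the Cox and RSF models above measures how well predicted risk orders patients by outcome. A minimal sketch of Harrell's concordance index on hypothetical data (simplified: a pair is treated as comparable only when the earlier time corresponds to an observed event; library implementations handle further edge cases):

```python
def harrell_c_index(times, events, risk_scores):
    """Fraction of comparable patient pairs that the risk score orders
    correctly (higher risk -> earlier observed event); ties count 0.5."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # earlier member of the pair must be an observed event
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

# Hypothetical cohort: months to event/censoring, event flags, model risks.
times  = [5, 10, 12, 20, 30]
events = [1, 1, 0, 1, 0]            # 1 = event observed, 0 = censored
risks  = [0.9, 0.7, 0.6, 0.4, 0.1]  # higher risk predicted for earlier events
c = harrell_c_index(times, events, risks)
# -> 1.0 (risk ordering is perfectly concordant with outcomes)
```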
Affiliation(s)
- Yiping Lu
- Neuroradiology, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK; Radiology, Huashan Hospital, Fudan University, Wulumuqi Middle Road, Shanghai, China
- Markand Patel
- Neuroradiology, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK; University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Kal Natarajan
- Medical Physics, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Ismail Ughratdar
- Neurosurgery, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Paul Sanghera
- Clinical Oncology, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Raj Jena
- Oncology, Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Hills Road, Cambridge CB2 0QQ, UK
- Colin Watts
- University of Birmingham, Edgbaston, Birmingham B15 2TT, UK; Neurosurgery, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK
- Vijay Sawlani
- Neuroradiology, Queen Elizabeth Hospital Birmingham, University Hospitals Birmingham NHS Foundation Trust, Mindelsohn Way, Edgbaston, Birmingham B15 2TH, UK; University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
37
Yoon HG, Cheon W, Jeong SW, Kim HS, Kim K, Nam H, Han Y, Lim DH. Multi-Parametric Deep Learning Model for Prediction of Overall Survival after Postoperative Concurrent Chemoradiotherapy in Glioblastoma Patients. Cancers (Basel) 2020; 12:2284. [PMID: 32823939 PMCID: PMC7465791 DOI: 10.3390/cancers12082284] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 08/07/2020] [Accepted: 08/10/2020] [Indexed: 12/24/2022] Open
Abstract
This study aimed to investigate the performance of a deep learning-based survival-prediction model that predicts the overall survival (OS) time of glioblastoma patients who have received surgery followed by concurrent chemoradiotherapy (CCRT). The medical records of glioblastoma patients who had received surgery and CCRT between January 2011 and December 2017 were retrospectively reviewed. Based on the inclusion criteria, 118 patients were selected and semi-randomly allocated to training and test datasets (3:1 ratio). A convolutional neural network–based deep learning model was trained with magnetic resonance imaging (MRI) data and clinical profiles to predict OS. The MRI data comprised four pulse sequences (22 slices each), and for each pulse sequence a physician selected nine images based on the slice showing the largest extent of glioblastoma. The clinical profiles consisted of personal, genetic, and treatment factors. The concordance index (C-index) and integrated area under the curve (iAUC) of the time-dependent AUC curves were calculated to evaluate the performance of the survival-prediction models. The model that incorporated clinical and radiomic features showed a higher C-index (0.768 (95% confidence interval (CI): 0.759, 0.776)) and iAUC (0.790 (95% CI: 0.783, 0.797)) than the model using clinical features alone (C-index = 0.693 (95% CI: 0.685, 0.701); iAUC = 0.723 (95% CI: 0.716, 0.731)) and the model using radiomic features alone (C-index = 0.590 (95% CI: 0.579, 0.600); iAUC = 0.614 (95% CI: 0.607, 0.621)). These improvements to the C-indexes and iAUCs were validated using 1000-fold bootstrapping; all were statistically significant (p < 0.001). This study suggests the synergistic benefit of using both clinical and radiomic parameters, and indicates the potential of multi-parametric deep learning models for the survival prediction of glioblastoma patients.
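The bootstrap validation mentioned above can be sketched as a paired resampling test. The paper bootstraps C-index and iAUC differences; for brevity this illustrative sketch compares per-patient accuracy of two hypothetical models instead (the data and 0.8/0.6 accuracies are invented, not taken from the study):

```python
import random

def bootstrap_superiority_p(correct_a, correct_b, n_boot=1000, seed=0):
    """Paired bootstrap: resample patients with replacement and estimate
    how often model A's accuracy fails to exceed model B's."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(correct_a)
    wins = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # one bootstrap resample
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        if acc_a > acc_b:
            wins += 1
    return 1.0 - wins / n_boot  # small value -> A reliably beats B

# Hypothetical per-patient correctness flags (1 = correct) for two models
# on the same 30 patients: A adds 6 correct cases on top of B's 18.
correct_a = [1] * 24 + [0] * 6
correct_b = [1] * 18 + [1] * 0 + [0] * 12
p = bootstrap_superiority_p(correct_a, correct_b)
```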
Affiliation(s)
- Han Gyul Yoon
- Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Wonjoong Cheon
- Samsung Advanced Institute for Health Science & Technology (SAIHST), Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Proton Therapy Center, National Cancer Center, Goyang 10408, Korea
- Sang Woon Jeong
- Samsung Advanced Institute for Health Science & Technology (SAIHST), Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Hye Seung Kim
- Statistics and Data Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Korea
- Kyunga Kim
- Statistics and Data Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Korea
- Heerim Nam
- Department of Radiation Oncology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul 03181, Korea
- Youngyih Han
- Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Samsung Advanced Institute for Health Science & Technology (SAIHST), Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Correspondence: (Y.H.); (D.H.L.); Tel.: +82-2-3410-2612 (D.H.L.)
- Do Hoon Lim
- Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea
- Correspondence: (Y.H.); (D.H.L.); Tel.: +82-2-3410-2612 (D.H.L.)