1. Yu X, Zhou J, Wu Y, Bai Y, Meng N, Wu Q, Jin S, Liu H, Li P, Wang M. Assessment of MGMT promoter methylation status in glioblastoma using deep learning features from multi-sequence MRI of intratumoral and peritumoral regions. Cancer Imaging 2024; 24:172. PMID: 39716317. DOI: 10.1186/s40644-024-00817-1.
Abstract
OBJECTIVE: This study aims to evaluate the effectiveness of deep learning features derived from multi-sequence magnetic resonance imaging (MRI) in determining the O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status among glioblastoma patients.
METHODS: Clinical, pathological, and MRI data of 356 glioblastoma patients (251 methylated, 105 unmethylated) were retrospectively examined from the public dataset The Cancer Imaging Archive. Each patient underwent preoperative multi-sequence brain MRI scans, which included T1-weighted imaging (T1WI) and contrast-enhanced T1-weighted imaging (CE-T1WI). Regions of interest (ROIs) were delineated to identify the necrotic tumor core (NCR), enhancing tumor (ET), and peritumoral edema (PED). The ET and NCR regions were categorized as intratumoral ROIs, whereas the PED region was categorized as peritumoral ROIs. Predictive models were developed using the Transformer algorithm based on intratumoral, peritumoral, and combined MRI features. The area under the receiver operating characteristic curve (AUC) was employed to assess predictive performance.
RESULTS: The ROI-based models of intratumoral and peritumoral regions, utilizing deep learning algorithms on multi-sequence MRI, were capable of predicting MGMT promoter methylation status in glioblastoma patients. The combined model of intratumoral and peritumoral regions exhibited superior diagnostic performance relative to individual models, achieving an AUC of 0.923 (95% confidence interval [CI]: 0.890-0.948) in stratified cross-validation, with sensitivity and specificity of 86.45% and 87.62%, respectively.
CONCLUSION: The deep learning model based on MRI data can effectively distinguish between glioblastoma patients with and without MGMT promoter methylation.
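The Transformer-based region modelling described above can be pictured with a minimal sketch like the following (an illustration only, not the authors' implementation): pre-extracted deep features from each ROI/sequence combination are treated as tokens, passed through a small Transformer encoder, and a [CLS] output predicts MGMT methylation. The feature dimensions, the six-token layout, and the `ROITransformerClassifier` name are assumptions.

```python
# Illustrative sketch only (not the authors' code): a small Transformer classifier
# that treats pre-extracted deep features from each ROI/sequence combination as one
# token and predicts MGMT promoter methylation status. All dimensions are assumed.
import torch
import torch.nn as nn

class ROITransformerClassifier(nn.Module):
    def __init__(self, feat_dim=256, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)              # per-token projection
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # learnable [CLS] token
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                     # binary logit: methylated or not

    def forward(self, x):                                     # x: (batch, n_tokens, feat_dim)
        tokens = self.proj(x)
        cls = self.cls.expand(x.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])                           # classify from the [CLS] output

# Example: 6 tokens = {NCR, ET, PED} regions x {T1WI, CE-T1WI} sequences (assumed layout).
model = ROITransformerClassifier()
features = torch.randn(4, 6, 256)                             # dummy pre-extracted features
print(model(features).shape)                                  # torch.Size([4, 1])
```

In a study like the one above, stratified cross-validation and the AUC would then be computed on the model's predicted probabilities.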
Affiliation(s)
- Xuan Yu
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Jing Zhou
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Yaping Wu
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Biomedical Research Institute, Henan Academy of Sciences, Zhengzhou, China
- Key Laboratory of Science and Engineering for the Multi-modal Prevention and Control of Major Chronic Diseases, Ministry of Industry and Information Technology, Zhengzhou, China
- Yan Bai
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Biomedical Research Institute, Henan Academy of Sciences, Zhengzhou, China
- Key Laboratory of Science and Engineering for the Multi-modal Prevention and Control of Major Chronic Diseases, Ministry of Industry and Information Technology, Zhengzhou, China
- Nan Meng
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Qingxia Wu
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Shuting Jin
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Huanhuan Liu
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Panlong Li
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Meiyun Wang
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, 7 Weiwu Road, Zhengzhou, 450000, PR China
- Biomedical Research Institute, Henan Academy of Sciences, Zhengzhou, China
2. Gao M, Cheng J, Qiu A, Zhao D, Wang J, Liu J. Magnetic resonance imaging (MRI)-based intratumoral and peritumoral radiomics for prognosis prediction in glioma patients. Clin Radiol 2024; 79:e1383-e1393. PMID: 39218720. DOI: 10.1016/j.crad.2024.08.005.
Abstract
AIM: The purpose of this study was to identify robust radiological features from intratumoral and peritumoral regions, evaluate MRI protocols and machine learning methods for overall survival stratification of glioma patients, and explore the relationship between radiological features and the tumour microenvironment.
MATERIAL AND METHODS: A retrospective analysis was conducted on 163 glioma patients, divided into a training set (n=113) and a testing set (n=50). For each patient, 2135 features were extracted from clinical MRI. Feature selection was performed using the Minimum Redundancy Maximum Relevance (mRMR) method and the Random Forest (RF) algorithm. Prognostic factors were assessed using the Cox proportional hazards model. Four machine learning models (RF, Logistic Regression, Support Vector Machine, and XGBoost) were trained on clinical and radiological features from tumour and peritumoral regions. Model evaluations on the testing set used receiver operating characteristic curves.
RESULTS: Among the 163 patients, 96 had an overall survival (OS) of less than three years postsurgery, while 67 had an OS of more than three years. Univariate Cox regression in the validation set indicated that age (p=0.003) and tumour grade (p<0.001) were positively associated with the risk of death within three years postsurgery. The final predictive model incorporated 13 radiological and 7 clinical features. The RF model, combining intratumoral and peritumoral radiomics, achieved the best predictive performance (AUC = 0.91; ACC = 0.86), outperforming single-region models.
CONCLUSION: Combined intratumoral and peritumoral radiomics can improve survival prediction and have potential as a practical imaging biomarker to guide clinical decision-making.
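As a rough illustration of this kind of radiomics pipeline (not the authors' code), the sketch below approximates mRMR with a univariate mutual-information filter, trains a random forest on synthetic "radiomic" features, and reports a test AUC; the 163/50 split and the 2135-feature count simply mirror the numbers quoted above.

```python
# Rough illustration of such a pipeline (not the authors' code): mRMR is approximated
# here by a univariate mutual-information filter, and the features/labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(163, 2135))            # hypothetical radiomic + clinical features
y = rng.integers(0, 2, size=163)            # 1 = OS >= 3 years, 0 = OS < 3 years

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=50, stratify=y, random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),                 # stand-in for mRMR selection
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```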
Affiliation(s)
- M Gao
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- J Cheng
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China; Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang, China
- A Qiu
- Department of Biomedical Engineering, The Johns Hopkins University, MD, USA; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- D Zhao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- J Wang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- J Liu
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China; Department of Radiology Quality Control Center, Changsha, China
3. Lin J, Su CQ, Tang WT, Xia ZW, Lu SS, Hong XN. Radiomic features on multiparametric MRI for differentiating pseudoprogression from recurrence in high-grade gliomas. Acta Radiol 2024; 65:1390-1400. PMID: 39380365. DOI: 10.1177/02841851241283781.
Abstract
BACKGROUND: Distinguishing between tumor recurrence and pseudoprogression (PsP) in high-grade glioma postoperatively is challenging. This study aims to enhance this differentiation using a combination of intratumoral and peritumoral radiomics.
PURPOSE: To assess the effectiveness of intratumoral and peritumoral radiomics in improving the differentiation between high-grade glioma recurrence and pseudoprogression after surgery.
MATERIAL AND METHODS: A total of 109 cases were randomly divided into training and validation sets, with 1316 features extracted from intratumoral and peritumoral volumes of interest (VOIs) on conventional magnetic resonance imaging (MRI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using the mRMR algorithm, resulting in intratumoral (100 features), peritumoral (100 features), and combined (200 features) subsets. Optimal features were then selected using PCC and RFE algorithms and modeled using LR, SVM, and LDA classifiers. Diagnostic performance was compared using area under the receiver operating characteristic curve (AUC), evaluated in the validation set. A nomogram was established using radscores from intratumoral, peritumoral, and combined models.
RESULTS: The combined model, utilizing 14 optimal features (8 peritumoral, 6 intratumoral) and LR as the best classifier, outperformed the single intratumoral and peritumoral models. In the training set, the AUC values for the combined model, intratumoral model, and peritumoral model were 0.938, 0.921, and 0.847, respectively; in the validation set, the AUC values were 0.841, 0.755, and 0.705. The nomogram model demonstrated AUCs of 0.960 (training set) and 0.850 (validation set).
CONCLUSION: The combination of intratumoral and peritumoral radiomics is effective in distinguishing high-grade glioma recurrence from pseudoprogression after surgery.
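A minimal sketch of the PCC + RFE + LR step on a combined intratumoral/peritumoral feature matrix might look as follows; the data are synthetic, the 0.9 correlation cutoff is an assumption, and only the 14-feature RFE target is taken from the abstract.

```python
# Minimal sketch (not the authors' implementation): Pearson-correlation filtering
# followed by RFE with logistic regression on combined intratumoral + peritumoral
# features. The data are synthetic and the 0.9 correlation cutoff is an assumption.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X_intra = rng.normal(size=(109, 100))       # hypothetical intratumoral feature subset
X_peri = rng.normal(size=(109, 100))        # hypothetical peritumoral feature subset
y = rng.integers(0, 2, size=109)            # 1 = recurrence, 0 = pseudoprogression
X = np.hstack([X_intra, X_peri])            # "combined" feature set

# PCC step: drop one feature of each highly correlated pair (|r| > 0.9).
corr = np.corrcoef(X, rowvar=False)
keep = [i for i in range(X.shape[1])
        if not any(abs(corr[i, j]) > 0.9 for j in range(i))]
X = X[:, keep]

# RFE down to 14 features, then fit the logistic-regression classifier.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=14)
X_sel = selector.fit_transform(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_sel, y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X_sel)[:, 1]))
```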
Affiliation(s)
- Jie Lin
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
- Chun-Qiu Su
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
- Wen-Tian Tang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
- Zhi-Wei Xia
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
- Shan-Shan Lu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
- Xun-Ning Hong
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, PR China
4. Cao S, Hu Z, Xie X, Wang Y, Yu J, Yang B, Shi Z, Wu G. Integrated diagnosis of glioma based on magnetic resonance images with incomplete ground truth labels. Comput Biol Med 2024; 180:108968. PMID: 39106670. DOI: 10.1016/j.compbiomed.2024.108968.
Abstract
BACKGROUND: Since the 2016 WHO guidelines, glioma diagnosis has entered an era of integrated diagnosis, combining tissue pathology and molecular pathology. The WHO has focused on promoting the application of molecular diagnosis in the classification of central nervous system tumors. Genetic information such as IDH1 and 1p/19q status provides important molecular markers, and pathological grading is also a key clinical indicator. However, obtaining genetic pathology labels is more costly than acquiring conventional MRI images, resulting in a large number of missing labels in realistic modeling.
METHOD: We propose a training strategy based on label encoding and a corresponding loss function that enables the model to effectively utilize data with missing labels. Additionally, we integrate a graph model with gene- and pathology-related clinical prior knowledge into the ResNet backbone to further improve diagnostic efficacy. Ten-fold cross-validation experiments were conducted on a large dataset of 1072 patients.
RESULTS: The classification area under the curve (AUC) values are 0.93, 0.91, and 0.90 for IDH1, 1p/19q status, and grade (LGG/HGG), respectively. When the label miss rate reached 59.3%, the method improved the AUC by 0.09, 0.10, and 0.04 for IDH1, 1p/19q, and pathological grade, respectively, compared with the same backbone without the missing-label strategy.
CONCLUSIONS: Our method effectively utilizes data with missing labels and integrates clinical prior knowledge, resulting in improved diagnostic performance for glioma genetic and pathological markers, even with high rates of missing labels.
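One simple way to realize a missing-label training strategy of the kind described above (a sketch under assumptions, not the paper's exact loss) is to encode absent genetic labels with a sentinel value and mask them out of a per-task loss:

```python
# Minimal sketch (not the paper's exact strategy): encode a missing label as -1 and
# mask it out of the per-task loss, so each sample contributes only to the tasks for
# which its ground truth exists. Tensors below are dummy placeholders.
import torch
import torch.nn.functional as F

def masked_multitask_bce(logits, labels, missing_value=-1):
    """logits, labels: (batch, n_tasks); labels use `missing_value` where unknown."""
    mask = (labels != missing_value).float()
    targets = labels.clamp(min=0).float()        # safe targets for the masked entries
    loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

# Example: 3 tasks (IDH1, 1p/19q, LGG/HGG grade); the second sample lacks 1p/19q status.
logits = torch.randn(2, 3)
labels = torch.tensor([[1.0, 0.0, 1.0],
                       [0.0, -1.0, 1.0]])
print(masked_multitask_bce(logits, labels))
```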
Affiliation(s)
- Shiwen Cao
- School of Information Science and Technology, Fudan University, Shanghai, China
- Zhaoyu Hu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Xuan Xie
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yuanyuan Wang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Bojie Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Zhifeng Shi
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Guoqing Wu
- School of Information Science and Technology, Fudan University, Shanghai, China
5. Lv Y, Liu J, Tian X, Yang P, Pan Y. CFINet: Cross-Modality MRI Feature Interaction Network for Pseudoprogression Prediction of Glioblastoma. J Comput Biol 2024. PMID: 38975725. DOI: 10.1089/cmb.2024.0518.
Abstract
Pseudoprogression (PSP) is a treatment-related reaction in glioblastoma, and misdiagnosis can lead to unnecessary intervention. Magnetic resonance imaging (MRI) provides cross-modality images for PSP prediction studies. However, how to effectively use the complementary information between cross-modality MRI sequences to improve PSP prediction remains a challenging task. To address this challenge, we propose a cross-modality feature interaction network for PSP prediction. First, we propose a triple-branch multi-scale module to extract low-order feature representations and a skip-connection multi-scale module to extract high-order feature representations. Then, a cross-modality interaction module based on an attention mechanism is designed to let the complementary information between cross-modality MRI fully interact. Finally, the high-order cross-modality interaction information is fed into a multi-layer perceptron to perform the PSP prediction task. We evaluate the proposed network on a private dataset of 52 subjects from Hunan Cancer Hospital and validate it on a private dataset of 30 subjects from Xiangya Hospital. The accuracy of our proposed network on these datasets is 0.954 and 0.929, respectively, which is better than that of most typical convolutional neural network and interaction methods.
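The cross-modality interaction idea can be sketched with standard multi-head attention, as below; this is not CFINet itself, and the token counts and dimensions are placeholders.

```python
# Rough sketch, not CFINet itself: each modality's feature tokens attend to the other
# modality's tokens via multi-head attention, and the interacted features feed an MLP.
# Token counts and dimensions are placeholders.
import torch
import torch.nn as nn

class CrossModalityInteraction(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn_ab = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ba = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feat_a, feat_b):                    # (batch, tokens, dim) per modality
        a2b, _ = self.attn_ab(feat_a, feat_b, feat_b)     # modality A queries modality B
        b2a, _ = self.attn_ba(feat_b, feat_a, feat_a)     # modality B queries modality A
        fused = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
        return self.mlp(fused)                            # PSP logit

model = CrossModalityInteraction()
t1, t2 = torch.randn(2, 16, 128), torch.randn(2, 16, 128)    # dummy modality features
print(model(t1, t2).shape)                                    # torch.Size([2, 1])
```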
Affiliation(s)
- Ya Lv
- Xinjiang Engineering Research Center of Big Data and Intelligent Software, School of Software, Xinjiang University, Wulumuqi, China
- Jin Liu
- Xinjiang Engineering Research Center of Big Data and Intelligent Software, School of Software, Xinjiang University, Wulumuqi, China
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Xu Tian
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- Pei Yang
- Radiation Oncology Department, Hunan Cancer Hospital, Changsha, China
- Yi Pan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
6. Tan R, Sui C, Wang C, Zhu T. MRI-based intratumoral and peritumoral radiomics for preoperative prediction of glioma grade: a multicenter study. Front Oncol 2024; 14:1401977. PMID: 38803534. PMCID: PMC11128562. DOI: 10.3389/fonc.2024.1401977.
Abstract
Background: Accurate preoperative prediction of glioma grade is crucial for developing individualized treatment decisions and assessing prognosis. In this study, we aimed to establish and evaluate integrated models that incorporate intratumoral and peritumoral features from conventional MRI together with clinical characteristics for the prediction of glioma grade.
Methods: A total of 213 glioma patients from two centers were included in the retrospective analysis; 132 patients formed the training cohort and internal validation set, and the remaining 81 patients formed the independent external testing cohort. A total of 7728 features were extracted from MRI sequences and various volumes of interest (VOIs). After feature selection, 30 radiomic models were established, based on five machine learning classifiers, different MRI sequences, and four combinations of predictive feature sources: features from the intratumoral region only, features from the peritumoral edema region only, features from the fused region covering both the intratumoral and peritumoral edema regions (VOI-fusion), and intratumoral features combined with peritumoral edema features (feature-fusion). The optimal model was selected from these candidates, and a nomogram based on the clinical parameter and the optimal radiomic model was constructed for predicting glioma grade in clinical practice.
Results: The intratumoral radiomic models based on contrast-enhanced T1-weighted and T2-FLAIR sequences outperformed those based on a single MRI sequence. Moreover, the internal validation and independent external test showed that the XGBoost classifier incorporating features extracted from VOI-fusion had superior predictive efficiency in differentiating between low-grade gliomas (LGG) and high-grade gliomas (HGG), with an AUC of 0.805 in the external test. The radiomic models of VOI-fusion yielded higher prediction efficiency than those of feature-fusion. Additionally, the developed nomogram achieved an AUC of 0.825 in the testing cohort.
Conclusion: This study systematically investigated the effect of intratumoral and peritumoral radiomics for predicting glioma grade with conventional MRI. The optimal model was the XGBoost classifier coupled with the radiomic model based on VOI-fusion. The radiomic models that depended on VOI-fusion outperformed those that depended on feature-fusion, suggesting that peritumoral features should be rationally utilized in radiomic studies.
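The difference between VOI-fusion and feature-fusion can be made concrete with a small sketch; `extract_features` here is a hypothetical placeholder for a radiomics extractor, not a real library call, and the arrays are synthetic.

```python
# Conceptual sketch of the two fusion strategies; `extract_features` is a hypothetical
# stand-in for a radiomics extractor (one feature vector per image/mask pair), not a
# real library call, and the arrays are synthetic.
import numpy as np

def extract_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: summary statistics of the masked voxels."""
    voxels = image[mask > 0]
    return np.array([voxels.mean(), voxels.std(), voxels.min(), voxels.max()])

image = np.random.rand(64, 64, 64)
intratumoral = np.zeros_like(image); intratumoral[20:30, 20:30, 20:30] = 1
peritumoral = np.zeros_like(image); peritumoral[18:34, 18:34, 18:34] = 1
peritumoral[intratumoral > 0] = 0                 # edema shell around the tumour core

# VOI-fusion: merge the masks first, then extract one feature set.
voi_fusion = extract_features(image, np.logical_or(intratumoral, peritumoral))

# Feature-fusion: extract per region, then concatenate the feature vectors.
feature_fusion = np.concatenate([extract_features(image, intratumoral),
                                 extract_features(image, peritumoral)])
print(voi_fusion.shape, feature_fusion.shape)     # (4,) (8,)
```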
Affiliation(s)
- Rui Tan
- Department of Neurosurgery, Tianjin Medical University General Hospital, Tianjin, China
- Chunxiao Sui
- Department of Molecular Imaging and Nuclear Medicine, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Chao Wang
- Department of Neurosurgery, Qilu Hospital of Shandong University Dezhou Hospital (Dezhou People's Hospital), Shandong, China
- Tao Zhu
- Department of Neurosurgery, Tianjin Medical University General Hospital, Tianjin, China
7. Guo R, Tian X, Lin H, McKenna S, Li HD, Guo F, Liu J. Graph-Based Fusion of Imaging, Genetic and Clinical Data for Degenerative Disease Diagnosis. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2024; 21:57-68. PMID: 37991907. DOI: 10.1109/tcbb.2023.3335369.
Abstract
Graph learning methods have achieved noteworthy performance in disease diagnosis due to their ability to represent unstructured information such as inter-subject relationships. While it has been shown that imaging, genetic and clinical data are crucial for degenerative disease diagnosis, existing methods rarely consider how best to use their relationships. How best to utilize information from imaging, genetic and clinical data remains a challenging problem. This study proposes a novel graph-based fusion (GBF) approach to meet this challenge. To extract effective imaging-genetic features, we propose an imaging-genetic fusion module which uses an attention mechanism to obtain modality-specific and joint representations within and between imaging and genetic data. Then, considering the effectiveness of clinical information for diagnosing degenerative diseases, we propose a multi-graph fusion module to further fuse imaging-genetic and clinical features, which adopts a learnable graph construction strategy and a graph ensemble method. Experimental results on two benchmarks for degenerative disease diagnosis (Alzheimer's Disease Neuroimaging Initiative and Parkinson's Progression Markers Initiative) demonstrate its effectiveness compared to state-of-the-art graph-based methods. Our findings should help guide further development of graph-based models for dealing with imaging, genetic and clinical data.
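A highly simplified sketch of graph-based fusion (not the published GBF model) is shown below: a subject graph is built from feature similarity, and one normalized graph-convolution step propagates the fused imaging-genetic-clinical features before classification; all sizes and the similarity threshold are assumptions.

```python
# Highly simplified sketch of the idea (not the published GBF model): build a subject
# graph from feature similarity and propagate fused imaging-genetic-clinical features
# with one normalized graph-convolution step in plain PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_graph(features, threshold=0.5):
    """Adjacency from cosine similarity between subjects (learnable variants exist)."""
    f = F.normalize(features, dim=1)
    adj = (f @ f.t() > threshold).float()
    adj.fill_diagonal_(1.0)                       # self-loops
    return adj / adj.sum(dim=1, keepdim=True)     # row-normalized adjacency

n_subjects, dim = 100, 64
fused = torch.randn(n_subjects, dim)              # dummy fused multi-source features
classifier = nn.Linear(dim, 2)                    # two diagnostic classes

adj = build_graph(fused)
logits = classifier(adj @ fused)                  # one graph-convolution layer
print(logits.shape)                               # torch.Size([100, 2])
```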
8. Automated Collateral Scoring on CT Angiography of Patients with Acute Ischemic Stroke Using Hybrid CNN and Transformer Network. Biomedicines 2023; 11:243. PMID: 36830780. PMCID: PMC9953344. DOI: 10.3390/biomedicines11020243.
Abstract
Collateral scoring plays an important role in the diagnosis and treatment decisions of acute ischemic stroke (AIS). Most existing automated methods rely on vessel prominence and amount after vessel segmentation. The purpose of this study was to design a vessel-segmentation-free method for automating collateral scoring on CT angiography (CTA). We first processed the original CTA via maximum intensity projection (MIP) and middle cerebral artery (MCA) region segmentation. The obtained MIP images were fed into our proposed hybrid CNN and Transformer model (MPViT) to automatically determine the collateral scores. We collected 154 CTA scans of patients with AIS for evaluation using five-fold cross-validation. Results show that the proposed MPViT achieved an intraclass correlation coefficient of 0.767 (95% CI: 0.68-0.83) and a kappa of 0.6184 (95% CI: 0.4954-0.7414) for three-point collateral score classification. For dichotomized classification (good vs. non-good and poor vs. non-poor), it also achieved strong performance.
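The MIP preprocessing step is straightforward to sketch; the volume, slab range, and shapes below are arbitrary placeholders.

```python
# Small sketch of the maximum-intensity-projection (MIP) preprocessing step mentioned
# above; the CTA volume and slab range are arbitrary placeholders.
import numpy as np

cta = np.random.rand(220, 512, 512)       # hypothetical CTA volume: (slices, H, W)
slab = cta[80:160]                        # hypothetical slab covering the MCA region
mip = slab.max(axis=0)                    # axial MIP: maximum intensity along the slice axis
print(mip.shape)                          # (512, 512)
```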
9. Cheng J, Zhao W, Liu J, Xie X, Wu S, Liu L, Yue H, Li J, Wang J, Liu J. Automated Diagnosis of COVID-19 Using Deep Supervised Autoencoder With Multi-View Features From CT Images. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2022; 19:2723-2736. PMID: 34351863. PMCID: PMC9647725. DOI: 10.1109/tcbb.2021.3102584.
Abstract
Accurate and rapid diagnosis of coronavirus disease 2019 (COVID-19) from chest CT scans is of great importance and urgency during the worldwide outbreak. However, radiologists have to distinguish COVID-19 pneumonia from other pneumonia in a large number of CT scans, which is tedious and inefficient. There is therefore an urgent clinical need for an efficient and accurate diagnostic tool to help radiologists fulfill this difficult task. In this study, we proposed a deep supervised autoencoder (DSAE) framework to automatically identify COVID-19 using multi-view features extracted from CT images. To fully explore features characterizing CT images from different frequency domains, the DSAE was designed to learn the latent representation by multi-task learning, both encoding valuable information from the different frequency features and constructing a compact class structure for separability. To achieve this, we designed a multi-task loss function consisting of a supervised loss and a reconstruction loss. Our proposed method was evaluated on a newly collected dataset of 787 subjects including COVID-19 pneumonia patients, other pneumonia patients, and normal subjects without abnormal CT findings. Extensive experimental results demonstrated that our proposed method achieved encouraging diagnostic performance and may have potential clinical application for the diagnosis of COVID-19.
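A toy version of the supervised-autoencoder objective (reconstruction loss plus a supervised loss on the latent code) could look like this; the network sizes and the three-class setup are assumptions, not the paper's architecture.

```python
# Toy sketch of a supervised-autoencoder objective (reconstruction loss plus a
# supervised loss on the latent code); sizes and the three-class setup are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupervisedAutoencoder(nn.Module):
    def __init__(self, in_dim=512, latent=64, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, in_dim))
        self.classifier = nn.Linear(latent, n_classes)   # e.g., COVID-19 / other pneumonia / normal

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = SupervisedAutoencoder()
x = torch.randn(8, 512)                   # dummy multi-view CT features
y = torch.randint(0, 3, (8,))
recon, logits = model(x)
loss = F.mse_loss(recon, x) + F.cross_entropy(logits, y)   # multi-task loss
print(float(loss))
```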
10. A Survey of Radiomics in Precision Diagnosis and Treatment of Adult Gliomas. J Clin Med 2022; 11:3802. PMID: 35807084. PMCID: PMC9267404. DOI: 10.3390/jcm11133802.
Abstract
Glioma is the most common primary malignant tumor of the adult central nervous system (CNS) and mostly shows invasive growth. In most cases, complete surgical resection is difficult, and the recurrence rate and mortality of patients are high. With the continuous development of molecular genetics and molecular biology technology, a growing number of molecular biomarkers have been shown to have important guiding significance in the individualized diagnosis, treatment, and prognosis evaluation of glioma. With the 2021 update of the World Health Organization (WHO) classification of tumors of the CNS, the diagnosis and treatment of glioma has truly entered the era of precision medicine. Because radiomics can non-invasively and accurately distinguish glioma from other intracranial tumors and predict glioma grade, genotype, treatment response, and prognosis, providing a scientific basis for the clinical application of individualized diagnosis and treatment of glioma, it has become a research hotspot in the field of precision medicine. This paper reviews recent research on radiomics of adult gliomas and summarizes its progress in differential diagnosis, preoperative grading and genotyping, treatment and efficacy evaluation, and survival prediction.
11. Cheng J, Liu J, Kuang H, Wang J. A Fully Automated Multimodal MRI-Based Multi-Task Learning for Glioma Segmentation and IDH Genotyping. IEEE Transactions on Medical Imaging 2022; 41:1520-1532. PMID: 35020590. DOI: 10.1109/tmi.2022.3142321.
Abstract
The accurate prediction of isocitrate dehydrogenase (IDH) mutation and glioma segmentation are important tasks for computer-aided diagnosis using preoperative multimodal magnetic resonance imaging (MRI). The two tasks remain challenging due to the significant inter-tumor and intra-tumor heterogeneity. The existing methods to address them are mostly single-task approaches that do not consider the correlation between the two tasks. In addition, the acquisition of IDH genetic labels is costly, resulting in a limited number of IDH mutation data for modeling. To comprehensively address these problems, we propose a fully automated multimodal MRI-based multi-task learning framework for simultaneous glioma segmentation and IDH genotyping. Specifically, the task correlation and heterogeneity are tackled with a hybrid CNN-Transformer encoder that combines a convolutional neural network and a transformer to extract shared spatial and global information, which is then passed to a decoder for glioma segmentation and a multi-scale classifier for IDH genotyping. Then, a multi-task learning loss is designed to balance the two tasks by combining the segmentation and classification loss functions with uncertain weights. Finally, an uncertainty-aware pseudo-label selection is proposed to generate IDH pseudo-labels from larger unlabeled data, improving the accuracy of IDH genotyping through semi-supervised learning. We evaluate our method on a multi-institutional public dataset. Experimental results show that our proposed multi-task network achieves promising performance and outperforms the single-task learning counterparts and other existing state-of-the-art methods. With the introduction of unlabeled data, the semi-supervised multi-task learning framework further improves the performance of glioma segmentation and IDH genotyping. The source codes of our framework are publicly available at https://github.com/miacsu/MTTU-Net.git.
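Balancing two task losses "with uncertain weights" is commonly done via homoscedastic-uncertainty weighting; the sketch below shows that generic formulation and may differ in detail from the paper's loss.

```python
# Generic homoscedastic-uncertainty weighting of two task losses; the paper's exact
# formulation may differ, and the loss values below are dummies.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.log_var_seg = nn.Parameter(torch.zeros(()))   # learnable log-variance per task
        self.log_var_cls = nn.Parameter(torch.zeros(()))

    def forward(self, seg_loss, cls_loss):
        return (torch.exp(-self.log_var_seg) * seg_loss + self.log_var_seg
                + torch.exp(-self.log_var_cls) * cls_loss + self.log_var_cls)

criterion = UncertaintyWeightedLoss()
print(float(criterion(torch.tensor(0.8), torch.tensor(0.5))))   # dummy task losses
```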
12. A Semi-Unsupervised Segmentation Methodology Based on Texture Recognition for Radiomics: A Preliminary Study on Brain Tumours. Electronics 2022. DOI: 10.3390/electronics11101573.
Abstract
Because of the intrinsic anatomic complexity of brain structures, brain tumors have a high mortality and disability rate, and early diagnosis is essential to limit the damage. Biopsy, the commonly used diagnostic gold standard, is invasive and, due to intratumoral heterogeneity, may lead to an incorrect result. Moreover, some tumors are not resectable if located in critical eloquent areas. On the other hand, medical imaging procedures can evaluate the entire tumor in a non-invasive and reproducible way. Radiomics is an emerging diagnostic technique based on quantitative medical image analysis, which makes use of data provided by non-invasive diagnostic techniques such as X-ray, computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET). Radiomics techniques require the comprehensive analysis of huge numbers of medical images to extract a large number of useful phenotypic features (usually called radiomic biomarkers). The goal is to explore the associations between tumor features, diagnosis, and patients' prognoses in order to choose the best treatments and maximize the patient's survival rate. Current radiomics techniques are not standardized in terms of segmentation, feature extraction, and feature selection; moreover, the decision on suitable therapies still requires the supervision of an expert doctor. In this paper, we propose a semi-automatic methodology aimed at helping the identification and segmentation of malignant tissues by combining binary texture recognition, a growing-area algorithm, and machine learning techniques. In particular, the proposed method not only helps to better identify pathologic tissues but also permits fast analysis of the huge amount of data, in DICOM format, provided by non-invasive diagnostic techniques. A preliminary experimental assessment has been conducted on a real MRI database of brain tumors, and the method has been compared with the segmentation tools of the software 3D Slicer. The obtained results are quite promising and demonstrate the potential of the proposed semi-unsupervised segmentation methodology.
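The growing-area step can be illustrated with a basic 2D region-growing routine; this sketch omits the texture-recognition and machine-learning parts of the proposed pipeline and uses synthetic data.

```python
# Basic 2D region-growing routine in the spirit of the growing-area step; the texture
# recognition and machine-learning parts of the pipeline are omitted, data are synthetic.
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.5):
    """Grow a region from `seed`, adding 4-connected pixels within `tol` of the seed value."""
    h, w = img.shape
    ref = img[seed]
    grown = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if grown[y, x] or abs(img[y, x] - ref) > tol:
            continue
        grown[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                queue.append((ny, nx))
    return grown

img = np.random.rand(128, 128)
img[40:60, 40:60] += 2.0                  # bright synthetic "lesion"
mask = region_grow(img, seed=(50, 50))
print(mask.sum(), "pixels segmented")
```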
13. Yue H, Liu J, Li J, Kuang H, Lang J, Cheng J, Peng L, Han Y, Bai H, Wang Y, Wang Q, Wang J. MLDRL: Multi-loss disentangled representation learning for predicting esophageal cancer response to neoadjuvant chemoradiotherapy using longitudinal CT images. Med Image Anal 2022; 79:102423. DOI: 10.1016/j.media.2022.102423.
14. Cheng J, Gao M, Liu J, Yue H, Kuang H, Liu J, Wang J. Multimodal Disentangled Variational Autoencoder with Game Theoretic Interpretability for Glioma grading. IEEE J Biomed Health Inform 2021; 26:673-684. PMID: 34236971. DOI: 10.1109/jbhi.2021.3095476.
Abstract
Effective fusion of multimodal magnetic resonance imaging (MRI) is of great significance for boosting the accuracy of glioma grading thanks to the complementary information provided by different imaging modalities. However, how to extract the common and distinctive information from MRI to achieve complementarity is still an open problem in information fusion research. In this study, we propose a deep neural network model termed multimodal disentangled variational autoencoder (MMD-VAE) for glioma grading based on radiomics features extracted from preoperative multimodal MRI images. Specifically, the radiomics features are quantitatively extracted from the region of interest for each modality. Then, the latent representations of a variational autoencoder for these features are disentangled into common and distinctive representations to obtain the shared and complementary information among modalities. Afterward, a cross-modality reconstruction loss and a common-distinctive loss are designed to ensure the effectiveness of the disentangled representations. Finally, the disentangled common and distinctive representations are fused to predict the glioma grades, and SHapley Additive exPlanations (SHAP) is adopted to quantitatively interpret and analyze the contribution of the important features to grading. Experimental results on two benchmark datasets demonstrate that the proposed MMD-VAE model achieves encouraging predictive performance (AUC: 0.9939) on a public dataset and good generalization performance (AUC: 0.9611) on a cross-institutional private dataset. These quantitative results and interpretations may help radiologists understand gliomas better and make better treatment decisions for improving clinical outcomes.
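The common/distinctive disentanglement constraint can be sketched with simple cosine-based penalties, as below; this is only an illustration of the idea, not the MMD-VAE losses themselves, and the latent codes are dummies.

```python
# Sketch of the common/distinctive idea with simple cosine-based penalties (an
# illustration only, not the MMD-VAE losses): pull the two modalities' common codes
# together and decorrelate each distinctive code from its common code.
import torch
import torch.nn.functional as F

def common_distinctive_loss(common_a, common_b, distinct_a, distinct_b):
    align = 1 - F.cosine_similarity(common_a, common_b, dim=-1).mean()
    sep = (F.cosine_similarity(common_a, distinct_a, dim=-1).abs().mean()
           + F.cosine_similarity(common_b, distinct_b, dim=-1).abs().mean())
    return align + sep

c1, c2, d1, d2 = (torch.randn(8, 32) for _ in range(4))   # dummy latent codes
print(float(common_distinctive_loss(c1, c2, d1, d2)))
```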
15. Improving Loanword Identification in Low-Resource Language with Data Augmentation and Multiple Feature Fusion. Computational Intelligence and Neuroscience 2021; 2021:9975078. PMID: 33927756. PMCID: PMC8049817. DOI: 10.1155/2021/9975078.
Abstract
Loanword identification has been studied in recent years to alleviate data sparseness in several natural language processing (NLP) tasks, such as machine translation and cross-lingual information retrieval. However, recent studies on this topic usually focus on high-resource languages (such as Chinese, English, and Russian); for low-resource languages such as Uyghur and Mongolian, owing to limited resources and a lack of annotated data, loanword identification tends to achieve lower performance. To overcome this problem, we first propose a lexical constraint-based data augmentation method to generate training data for low-resource-language loanword identification; then, a loanword identification model based on a log-linear RNN is introduced to improve the performance of low-resource loanword identification by incorporating features such as word-level embeddings, character-level embeddings, pronunciation similarity, and part-of-speech (POS) into one model. Experimental results on loanword identification in Uyghur (in this study, we mainly focus on Arabic, Chinese, Russian, and Turkish loanwords in Uyghur) show that our proposed method achieves the best performance compared with several strong baseline systems.
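A toy sketch of the multiple-feature-fusion idea (not the paper's log-linear RNN) is shown below: word-level and character-level representations plus extra scalar features are concatenated before a binary loanword classifier; all vocabulary sizes, dimensions, and the two extra features are assumptions.

```python
# Toy sketch of multiple-feature fusion for loanword classification (not the paper's
# log-linear RNN): word-level and character-level representations plus extra scalar
# features are concatenated before the classifier. All sizes are assumptions.
import torch
import torch.nn as nn

class LoanwordTagger(nn.Module):
    def __init__(self, vocab=5000, chars=60, w_dim=64, c_dim=16, extra=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, w_dim)
        self.char_emb = nn.Embedding(chars, c_dim)
        self.char_rnn = nn.GRU(c_dim, c_dim, batch_first=True)
        self.out = nn.Linear(w_dim + c_dim + extra, 2)    # loanword vs. native word

    def forward(self, word_ids, char_ids, extra_feats):
        w = self.word_emb(word_ids)                       # (batch, w_dim)
        _, h = self.char_rnn(self.char_emb(char_ids))     # h: (1, batch, c_dim)
        return self.out(torch.cat([w, h.squeeze(0), extra_feats], dim=-1))

model = LoanwordTagger()
logits = model(torch.randint(0, 5000, (4,)),              # word ids
               torch.randint(0, 60, (4, 12)),             # character ids per word
               torch.rand(4, 2))                          # e.g., pronunciation similarity, POS flag
print(logits.shape)                                       # torch.Size([4, 2])
```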