1
Yimit Y, Yasin P, Tuersun A, Wang J, Wang X, Huang C, Abudoubari S, Chen X, Ibrahim I, Nijiati P, Wang Y, Zou X, Nijiati M. Multiparametric MRI-Based Interpretable Radiomics Machine Learning Model Differentiates Medulloblastoma and Ependymoma in Children: A Two-Center Study. Acad Radiol 2024:S1076-6332(24)00131-4. [PMID: 38508934] [DOI: 10.1016/j.acra.2024.02.040] [Received: 02/08/2024] [Revised: 02/23/2024] [Accepted: 02/24/2024] [Indexed: 03/22/2024]
Abstract
RATIONALE AND OBJECTIVES Medulloblastoma (MB) and ependymoma (EM) in children share similarities in age group, tumor location, and clinical presentation, and distinguishing between them on clinical grounds is challenging. This study aims to explore the effectiveness of radiomics and machine learning on multiparametric magnetic resonance imaging (MRI) for differentiating MB from EM, and to validate the diagnostic ability with an external set.
MATERIALS AND METHODS Axial T2-weighted image (T2WI) and contrast-enhanced T1-weighted image (CE-T1WI) MRI sequences of 135 patients from two centers were collected as train/test sets. The volume of interest (VOI) was manually delineated by an experienced neuroradiologist under the supervision of a senior neuroradiologist. Feature selection analysis and the least absolute shrinkage and selection operator (LASSO) algorithm identified valuable features, and Shapley additive explanations (SHAP) evaluated their significance. Five machine-learning classifiers (extreme gradient boosting [XGBoost], Bernoulli naive Bayes [BernoulliNB], logistic regression [LR], support vector machine [SVM], and linear support vector machine [LinearSVC]) were built based on T2WI (T2 model), CE-T1WI (T1 model), and T1 + T2WI (T1 + T2 model). A human expert diagnosis was performed and corrected by senior radiologists. External validation was performed at Sun Yat-Sen University Cancer Center.
RESULTS Thirty-one valuable features were extracted from T2WI and CE-T1WI. XGBoost demonstrated the highest performance, with an area under the curve (AUC) of 0.92 on the test set and an AUC of 0.80 on external validation. For the T1 model, XGBoost achieved the highest AUC of 0.85 on the test set and the highest accuracy of 0.71 on the external validation set. For the T2 model, XGBoost achieved the highest AUC of 0.86 on the test set and the highest accuracy of 0.82 on the external validation set. The human expert diagnosis had an AUC of 0.66 on the test set and 0.69 on the external validation set. The integrated T1 + T2 model achieved the best performance, with an AUC of 0.92 on the test set and 0.80 on the external validation set. Overall, XGBoost consistently outperformed the other classifiers across the different models.
CONCLUSION The combination of radiomics and machine learning on multiparametric MRI effectively distinguishes MB from EM in children, surpassing human expert diagnosis on the training and testing sets.
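The area under the curve (AUC) figures quoted above are equivalent to the Mann-Whitney U statistic over classifier scores: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal pure-Python sketch with toy scores (illustrative only; not the study's data or pipeline):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative score pairs where the positive case
    scores higher; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy scores from a hypothetical MB-vs-EM classifier
# (MB treated as the positive class).
mb_scores = [0.9, 0.8, 0.7, 0.6]
em_scores = [0.5, 0.4, 0.7, 0.2]
print(auc(mb_scores, em_scores))
```

A perfect ranking gives 1.0, a random one about 0.5, which is why the reported 0.92/0.80 values indicate strong but imperfect discrimination.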
Affiliation(s)
- Yasen Yimit
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000; Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000
- Parhat Yasin
- Department of Spine Surgery, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China, 830054
- Abudouresuli Tuersun
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000; Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000
- Jingru Wang
- Department of Research Collaboration, R&D center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, PR China, 100080
- Xiaohong Wang
- Department of Radiology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, 510630
- Chencui Huang
- Department of Research Collaboration, R&D center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, PR China, 100080
- Saimaitikari Abudoubari
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000; Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000
- Xingzhi Chen
- Department of Research Collaboration, R&D center, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, PR China, 100080
- Irshat Ibrahim
- Department of General Surgery, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000
- Pahatijiang Nijiati
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000; Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000
- Yunling Wang
- Department of Imaging Center, First Affiliated Hospital of Xinjiang Medical University, Urumqi, China, 830054
- Xiaoguang Zou
- Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000; Clinical Medical Research Center, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000
- Mayidili Nijiati
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, Xinjiang, China, 844000; Xinjiang Key Laboratory of Artificial Intelligence assisted Imaging Diagnosis, Kashi (Kashgar), China, 844000.
2
Nijiati M, Guo L, Abulizi A, Fan S, Wubuli A, Tuersun A, Nijiati P, Xia L, Hong K, Zou X. Deep learning and radiomics of longitudinal CT scans for early prediction of tuberculosis treatment outcomes. Eur J Radiol 2023; 169:111180. [PMID: 37949023] [DOI: 10.1016/j.ejrad.2023.111180] [Received: 07/25/2023] [Revised: 10/21/2023] [Accepted: 10/29/2023] [Indexed: 11/12/2023]
Abstract
BACKGROUND To predict tuberculosis (TB) treatment outcomes at an early stage, prevent poor outcomes of drug-resistant tuberculosis (DR-TB), and interrupt transmission.
METHODS An internal cohort for model development consisted of 204 bacteriologically confirmed TB patients who completed anti-tuberculosis treatment, each with one pretreatment and two follow-up CT images (612 scans). Three radiomics feature-based models (RM) with multiple classifiers (bagging, random forest, and gradient boosting) and two deep-learning-based models (a supervised deep-learning model, SDLM, and a weakly supervised deep-learning model, WSDLM) were developed independently. Prediction scores of the best-performing RM and deep-learning models were fused to create new fusion models under different fusion strategies. An additional independent validation was conducted on an external cohort comprising 80 patients (160 scans).
RESULTS For the RM scheme, 16 optimal radiomics features were selected from the longitudinal scans. The AUCs of RM for bagging, random forest, and gradient boosting were 0.789, 0.773, and 0.764 in the internal cohort and 0.840, 0.834, and 0.816 in the external cohort, respectively. For the deep-learning scheme, the AUCs of SDLM and WSDLM were 0.767 and 0.661 in the internal cohort, and 0.823 and 0.651 in the external cohort. The fusion models yielded AUCs from 0.767 to 0.802 in the internal cohort and from 0.831 to 0.857 in the external cohort.
CONCLUSIONS Fusing radiomics features with a deep-learning model may have the potential to predict DR-TB treatment failure at an early stage, which could help prevent poor TB treatment outcomes.
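Score-level (late) fusion as described here combines per-patient prediction scores from two trained models. A minimal sketch of one plausible strategy, a weighted average (illustrative only; the paper's actual fusion strategies are not detailed in the abstract, and the scores below are toy values):

```python
def fuse(radiomics_scores, dl_scores, weight=0.5):
    """Late (score-level) fusion: per-patient weighted average of
    prediction scores from a radiomics model and a deep-learning
    model. `weight` is the radiomics model's contribution."""
    assert len(radiomics_scores) == len(dl_scores)
    return [weight * r + (1.0 - weight) * d
            for r, d in zip(radiomics_scores, dl_scores)]

rm_scores = [0.80, 0.30, 0.65]   # toy radiomics-model scores
dl_scores = [0.70, 0.40, 0.55]   # toy deep-learning-model scores
print(fuse(rm_scores, dl_scores))               # equal weighting
print(fuse(rm_scores, dl_scores, weight=0.7))   # favour radiomics
```

Sweeping `weight` (or trying other combiners such as max or a meta-classifier) is one way such fusion strategies are typically compared.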
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, China
- Lin Guo
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Shiyu Fan
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, China
- Abulikemu Wubuli
- Department of Radiology, Yecheng County People's Hospital, China
- Abudouresuli Tuersun
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, China
- Pahatijiang Nijiati
- Department of Radiology, The First People's Hospital of Kashi (Kashgar) Prefecture, China
- Li Xia
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Kunlei Hong
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Xiaoguang Zou
- Clinical Medical Research Center, The First People's Hospital of Kashi (Kashgar) Prefecture, China.
3
Nijiati M, Guo L, Tuersun A, Damola M, Abulizi A, Dong J, Xia L, Hong K, Zou X. Deep learning on longitudinal CT scans: automated prediction of treatment outcomes in hospitalized tuberculosis patients. iScience 2023; 26:108326. [PMID: 37965132] [PMCID: PMC10641748] [DOI: 10.1016/j.isci.2023.108326] [Received: 05/14/2023] [Revised: 08/17/2023] [Accepted: 10/20/2023] [Indexed: 11/16/2023]
Abstract
Three deep learning (DL)-based prediction models (PMs) using longitudinal CT images were developed to predict tuberculosis (TB) treatment outcomes. The internal dataset consists of 493 bacteriologically confirmed TB patients who completed anti-tuberculosis treatment with three CT scans each: one pretreatment scan and two follow-up scans. PM1 was trained using only pretreatment CT scans, and PM2 and PM3 were developed by adding follow-up scans. Independent testing was performed on an external dataset comprising 86 TB patients. The area under the curve for classifying success versus drug-resistant (DR)-TB improved with the addition of follow-up scans on both the internal (0.609 vs. 0.625 vs. 0.815) and external (0.627 vs. 0.705 vs. 0.735) datasets. Accuracy and F1-score also showed an increasing tendency in the external test. Regular follow-up CT scans can aid treatment outcome prediction, and special attention should be given to the early intensive phase of treatment to identify high-risk DR-TB patients.
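The F1-score reported alongside accuracy balances precision and recall for binary outcome labels, which matters when success and DR-TB cases are imbalanced. A minimal sketch, assuming label 1 denotes DR-TB/poor outcome (toy labels, not the study data):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels
    (here 1 = DR-TB / poor outcome, 0 = treatment success)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy predictions for five patients.
print(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```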
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Lin Guo
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Abudouresuli Tuersun
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Maihemitijiang Damola
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Jiake Dong
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Li Xia
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Kunlei Hong
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
- Xiaoguang Zou
- Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China
4
Nijiati M, Tuersun A, Zhang Y, Yuan Q, Gong P, Abulizi A, Tuoheti A, Abulaiti A, Zou X. A symmetric prior knowledge based deep learning model for intracerebral hemorrhage lesion segmentation. Front Physiol 2022; 13:977427. [PMID: 36505076] [PMCID: PMC9727183] [DOI: 10.3389/fphys.2022.977427] [Received: 07/05/2022] [Accepted: 11/11/2022] [Indexed: 11/24/2022]
Abstract
Background: Accurate localization and classification of intracerebral hemorrhage (ICH) lesions are of great significance for the treatment and prognosis of patients with ICH. The purpose of this study is to develop a symmetric prior knowledge-based deep learning model to segment ICH lesions in computed tomography (CT).
Methods: A novel symmetric Transformer network (Sym-TransNet) is designed to segment ICH lesions in CT images. A cohort of 1,157 patients diagnosed with ICH is established to train (n = 857), validate (n = 100), and test (n = 200) the Sym-TransNet. A healthy cohort of 200 subjects is added, establishing a test set with balanced positive and negative cases (n = 400), to further evaluate the accuracy, sensitivity, and specificity of ICH diagnosis. Segmentation results are obtained after data pre-processing and Sym-TransNet. The DICE coefficient is used to evaluate the similarity between the segmentation results and the gold-standard segmentation. Furthermore, several recent deep learning methods are reproduced for comparison with Sym-TransNet, and statistical analysis is performed to establish the significance of the proposed method. Ablation experiments are conducted to show that each component of Sym-TransNet effectively improves the DICE coefficient for ICH lesions.
Results: For the segmentation of ICH lesions, the DICE coefficient of Sym-TransNet is 0.716 ± 0.031 in the test set, which contains 200 CT images of ICH. The DICE coefficients of five subtypes of ICH, including intraparenchymal hemorrhage (IPH), intraventricular hemorrhage (IVH), extradural hemorrhage (EDH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH), are 0.784 ± 0.039, 0.680 ± 0.049, 0.359 ± 0.186, 0.534 ± 0.455, and 0.337 ± 0.044, respectively. Statistical results show that the proposed Sym-TransNet significantly improves the DICE coefficient of ICH lesions in most cases. In addition, the accuracy, sensitivity, and specificity of Sym-TransNet in diagnosing ICH across 400 CT images are 91.25%, 98.50%, and 84.00%, respectively.
Conclusion: Compared with recent mainstream deep learning methods, the proposed Sym-TransNet segments and identifies different types of lesions from CT images of ICH patients more effectively. Moreover, Sym-TransNet diagnoses ICH more stably and efficiently, giving it clinical application prospects.
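The DICE coefficient used throughout this entry is the standard overlap measure between a predicted mask and the gold standard: 2|A ∩ B| / (|A| + |B|). A minimal sketch on flattened binary masks (illustrative only; not the authors' implementation):

```python
def dice(pred, gt):
    """Dice similarity coefficient between a predicted binary mask
    and the ground-truth mask, both given as flat 0/1 sequences.
    Two empty masks are treated as a perfect match."""
    inter = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy 5-voxel masks: 2 overlapping voxels out of 3 + 3.
print(dice([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))
```

In practice the masks are 3D CT volumes flattened per case; the per-subtype scores above are means ± standard deviations of this quantity over the test cases.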
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Abudouresuli Tuersun
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Qing Yuan
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Awanisa Tuoheti
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Adili Abulaiti
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China; *Correspondence: Adili Abulaiti; Xiaoguang Zou
- Xiaoguang Zou
- Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China; *Correspondence: Adili Abulaiti; Xiaoguang Zou
5
Nijiati M, Ma J, Hu C, Tuersun A, Abulizi A, Kelimu A, Zhang D, Li G, Zou X. Artificial Intelligence Assisting the Early Detection of Active Pulmonary Tuberculosis From Chest X-Rays: A Population-Based Study. Front Mol Biosci 2022; 9:874475. [PMID: 35463963] [PMCID: PMC9023793] [DOI: 10.3389/fmolb.2022.874475] [Received: 02/12/2022] [Accepted: 03/08/2022] [Indexed: 11/13/2022]
Abstract
As a major infectious disease, tuberculosis (TB) still poses a threat to people’s health in China. As a triage test for TB, reading chest radiographs with the traditional approach suffers from high inter-radiologist and intra-radiologist variability, moderate specificity, and a waste of time and medical resources. This study therefore established a deep convolutional neural network (DCNN)-based artificial intelligence (AI) algorithm aimed at diagnosing TB on posteroanterior chest X-ray images effectively and accurately. Altogether, 5,000 patients with TB and 4,628 patients without TB were included, for a total of 9,628 chest X-ray images analyzed. Splitting the radiographs into a training set (80.4%) and a testing set (19.6%), three DCNN algorithms (ResNet, VGG, and AlexNet) were trained to classify the chest radiographs as showing pulmonary TB or not. Both diagnostic accuracy and the area under the receiver operating characteristic curve were used to evaluate the performance of the three AI diagnosis models. Reaching an accuracy of 96.73% and marking the precise TB regions on the radiographs, the ResNet-based AI outperformed the other models and showed excellent diagnostic ability across clinical subgroups in the stratification analysis. In summary, the ResNet-based AI diagnosis system provided accurate TB diagnosis and could have broad prospects for clinical application, especially in poor regions with high TB incidence.
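The stratification analysis mentioned above amounts to computing accuracy separately within each clinical subgroup. A minimal sketch of subgroup accuracy (the age-band subgroup labels and predictions below are hypothetical, for illustration only):

```python
def accuracy(y_true, y_pred):
    """Fraction of correct binary predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy within each clinical subgroup (e.g. age band, sex),
    mirroring a stratification analysis. `groups` gives one subgroup
    label per case."""
    result = {}
    for g in set(groups):
        idx = [i for i, x in enumerate(groups) if x == g]
        result[g] = accuracy([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    return result

# Toy labels: 1 = TB, 0 = no TB, split into two hypothetical age bands.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1]
groups = ["<40", "<40", "<40", ">=40", ">=40", ">=40"]
print(subgroup_accuracy(y_true, y_pred, groups))
```

Consistent per-subgroup accuracy, as reported for the ResNet model, suggests the classifier is not relying on subgroup-specific shortcuts.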
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou
- Jie Ma
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Chuling Hu
- Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Abudouresuli Tuersun
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
- Abudoureyimu Kelimu
- Department of Radiology, Kashi Area Tuberculosis Control Center, Kashi, China
- Dongyu Zhang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Guanbin Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou
- Xiaoguang Zou
- Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China
- *Correspondence: Mayidili Nijiati; Guanbin Li; Xiaoguang Zou