1
Mao Y, Jiang LP, Wang JL, Diao YH, Chen FQ, Zhang WP, Chen L, Liu ZX. Multi-feature Fusion Network on Gray Scale Ultrasonography: Effective Differentiation of Adenolymphoma and Pleomorphic Adenoma. Acad Radiol 2024:S1076-6332(24)00308-8. [PMID: 38871552] [DOI: 10.1016/j.acra.2024.05.023]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning radiomics graph network (DLRN) that integrates deep learning features extracted from gray scale ultrasonography, radiomics features, and clinical features for distinguishing parotid pleomorphic adenoma (PA) from adenolymphoma (AL). MATERIALS AND METHODS A total of 287 patients (162 in the training cohort, 70 in the internal validation cohort, and 55 in the external validation cohort) from two centers with histologically confirmed PA or AL were enrolled. Deep transfer learning features and radiomics features extracted from gray scale ultrasound images were fed to machine learning classifiers, including logistic regression (LR), support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), ExtraTrees, XGBoost, LightGBM, and multilayer perceptron (MLP), to construct deep transfer learning (DTL) models and radiomics (Rad) models, respectively. Deep learning radiomics (DLR) models were constructed by integrating the two feature sets, and DLR signatures were generated. Clinical features were then combined with the signatures to develop the DLRN model. The performance of these models was evaluated using receiver operating characteristic (ROC) curve analysis, calibration, decision curve analysis (DCA), and the Hosmer-Lemeshow test. RESULTS In the internal and external validation cohorts, respectively, compared with the Clinic (AUC=0.767 and 0.777), Rad (AUC=0.841 and 0.748), DTL (AUC=0.740 and 0.825), and DLR (AUC=0.863 and 0.859) models, the DLRN model showed the best discriminatory ability (AUC=0.908 and 0.908). CONCLUSION The DLRN model built on gray scale ultrasonography significantly improved diagnostic performance for benign salivary gland tumors, providing clinicians with a non-invasive and accurate diagnostic approach of clear clinical value. Ensembling multiple models helped alleviate overfitting on the small dataset compared with using ResNet50 alone.
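The core idea in this abstract — concatenating deep, radiomic, and clinical features before a single classifier — can be sketched in a few lines. This is a minimal illustration on synthetic data, not the paper's pipeline: the feature dimensions, the random labels, and the choice of logistic regression as the fusion classifier are all assumptions for demonstration.

```python
# Hedged sketch of DLRN-style feature fusion: deep-learning, radiomics, and
# clinical feature blocks are concatenated, then a single classifier is fit.
# All data below is synthetic; dimensions and classifier are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 287                                     # cohort size from the abstract
deep_feats = rng.normal(size=(n, 64))       # stand-in for CNN embeddings
radiomic_feats = rng.normal(size=(n, 20))   # stand-in for selected radiomics features
clinical_feats = rng.normal(size=(n, 3))    # stand-in for clinical variables
y = rng.integers(0, 2, size=n)              # 0 = PA, 1 = AL (random here, so no real signal)

# Fusion = simple column-wise concatenation before classification.
X = np.hstack([deep_feats, radiomic_feats, clinical_feats])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```

On real data the three blocks would come from a pretrained CNN, a radiomics extractor, and the clinical record; the fusion step itself is unchanged.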
Affiliation(s)
- Yi Mao
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Li-Ping Jiang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Jing-Ling Wang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Yu-Hong Diao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China.
- Fang-Qun Chen
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Wei-Ping Zhang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Li Chen
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Zhi-Xing Liu
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China; Department of Ultrasonography, GanJiang New District People's Hospital, Nanchang, China.
2
Mao Y, Jiang L, Wang JL, Chen FQ, Zhang WP, Liu ZX, Li C. Radiomic nomogram for discriminating parotid pleomorphic adenoma from parotid adenolymphoma based on grayscale ultrasonography. Front Oncol 2024; 13:1268789. [PMID: 38273852] [PMCID: PMC10808803] [DOI: 10.3389/fonc.2023.1268789]
Abstract
Objectives To differentiate parotid pleomorphic adenoma (PA) from adenolymphoma (AL) using radiomics of grayscale ultrasonography in combination with clinical features. Methods This retrospective study analyzed the clinical and radiographic characteristics of 162 cases from December 2019 to March 2023, split into a training cohort of 113 patients and a validation cohort of 49 patients. Grayscale ultrasonography was processed using ITK-SNAP software and Python to delineate regions of interest (ROIs) and extract radiomic features. Univariate analysis, Spearman's correlation, a greedy recursive elimination strategy, and least absolute shrinkage and selection operator (LASSO) regression were employed to select relevant radiomic features. Eight machine learning methods (LR, SVM, KNN, RandomForest, ExtraTrees, XGBoost, LightGBM, and MLP) were then used to build quantitative radiomic models from the selected features. A radiomic nomogram was developed via multivariate logistic regression analysis, integrating both clinical and radiomic data. The accuracy of the nomogram was assessed using receiver operating characteristic (ROC) curve analysis, calibration, decision curve analysis (DCA), and the Hosmer-Lemeshow test. Results In differentiating PA from AL, the radiomic model using SVM showed the best discriminatory ability (accuracy = 0.929 and 0.857, sensitivity = 0.946 and 0.800, specificity = 0.921 and 0.897, positive predictive value = 0.854 and 0.842, and negative predictive value = 0.972 and 0.867 in the training and validation cohorts, respectively). A nomogram incorporating the rad-signature and clinical features achieved an area under the ROC curve (AUC) of 0.983 (95% confidence interval [CI]: 0.965-1) and 0.910 (95% CI: 0.830-0.990) in the training and validation cohorts, respectively. Decision curve analysis showed that the nomogram and radiomic model outperformed the clinical-factor model in terms of clinical usefulness. Conclusion A nomogram based on grayscale ultrasonic radiomics and clinical features served as a non-invasive tool capable of differentiating PA from AL.
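The select-then-classify pattern this abstract describes — LASSO pruning a large radiomic feature set, then an SVM fit on the survivors — can be sketched as below. Synthetic data stands in for the extracted features; the candidate feature count, the planted signal, and all hyperparameters are assumptions, not values from the study.

```python
# Hedged sketch: LASSO-based feature selection followed by an SVM classifier,
# the pipeline shape described in the abstract. Data is synthetic.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n, p = 162, 100                       # 162 cases as in the abstract; 100 candidate features (assumed)
X = rng.normal(size=(n, p))
signal = X[:, :5].sum(axis=1)         # plant weak signal in the first 5 features
y = (signal + rng.normal(scale=2.0, size=n) > 0).astype(int)

X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# LASSO selection: keep only features with nonzero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:                    # degenerate case: fall back to all features
    keep = np.arange(p)
print(f"kept {keep.size} of {p} features")

# SVM on the selected features, the abstract's best-performing model family.
svm = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te[:, keep])[:, 1])
print(f"validation AUC: {auc:.3f}")
```

In practice the matrix `X` would hold PyRadiomics features per ROI, and the univariate and correlation filters mentioned in the abstract would run before the LASSO step.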
Affiliation(s)
- Yi Mao
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Li-Ping Jiang
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jing-Ling Wang
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Fang-Qun Chen
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Wei-Ping Zhang
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Zhi-Xing Liu
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Department of Ultrasound, GanJiang New District People's Hospital, Nanchang, Jiangxi, China
- Chen Li
- Department of Ultrasound, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
3
Sunnetci KM, Kaba E, Celiker FB, Alkan A. Deep Network-Based Comprehensive Parotid Gland Tumor Detection. Acad Radiol 2024; 31:157-167. [PMID: 37271636] [DOI: 10.1016/j.acra.2023.04.028]
Abstract
RATIONALE AND OBJECTIVES Salivary gland tumors constitute 2%-6% of all head and neck tumors and are most common in the parotid gland. Magnetic resonance (MR) imaging is the most sensitive imaging modality for diagnosis. Tumor type, localization, and relationship with surrounding structures are important factors for treatment, so parotid gland tumor segmentation matters. Specialists widely use manual segmentation in diagnosis and treatment, but given today's artificial intelligence-based models, automatic segmentation can replace this time-consuming manual technique. In this paper, we therefore segment parotid gland tumors (PGTs) using deep learning-based architectures. MATERIALS AND METHODS The dataset includes 102 T1-w, 102 contrast-enhanced T1-w (T1C-w), and 102 T2-w MR images. After cropping the raw images and those manually segmented by experts, we obtained the corresponding masks. After standardizing the image sizes, we split the images into approximately 80% training and 20% test sets. We then trained six models for these images using ResNet18- and Xception-based DeepLab v3+, and prepared a user-friendly graphical user interface application that includes each of these models. RESULTS The accuracy and weighted Intersection over Union values of the ResNet18-based DeepLab v3+ architecture trained on T1C-w, the most successful model in the study, are 0.96153 and 0.92601, respectively. In light of these results and the literature, the proposed system is competitive both in its use of MR images and in training the models independently for T1-w, T1C-w, and T2-w. Since PGTs are usually segmented manually in the literature, we expect our study to contribute significantly to it. CONCLUSION We prepared and presented a software application that users can easily run for automatic PGT segmentation. Beyond the expected reduction in costs and workload, we developed models with meaningful performance metrics relative to the literature.
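The two metrics this abstract reports — pixel accuracy and weighted Intersection over Union — are simple to compute once predicted and ground-truth masks exist. A minimal sketch on toy binary masks (the 8x8 arrays and the exact weighting scheme are illustrative assumptions; the study evaluates on MR slices):

```python
# Hedged sketch of the two segmentation metrics named in the abstract,
# computed on toy binary masks (0 = background, 1 = tumor).
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels where prediction matches ground truth."""
    return float((pred == gt).mean())

def weighted_iou(pred, gt, n_classes=2):
    """Per-class IoU, weighted by each class's share of ground-truth pixels."""
    total = gt.size
    score = 0.0
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union:
            score += ((gt == c).sum() / total) * (inter / union)
    return float(score)

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                     # 16-pixel "tumor"
pred = np.zeros_like(gt)
pred[2:6, 2:5] = 1                   # model finds 12 of those 16 pixels

print(pixel_accuracy(pred, gt))      # 60/64 pixels correct -> 0.9375
print(weighted_iou(pred, gt))        # 0.75*48/52 + 0.25*12/16 ≈ 0.8798
```

Weighting IoU by ground-truth class frequency, as here, is why a model can score a high weighted IoU even when the (small) tumor class is segmented less precisely than the background.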
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Osmaniye Korkut Ata University, Department of Electrical and Electronics Engineering, Osmaniye 80000, Turkey (K.M.S.); Kahramanmaraş Sütçü İmam University, Department of Electrical and Electronics Engineering, Kahramanmaraş 46050, Turkey (K.M.S., A.A.).
- Esat Kaba
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (E.K., F.B.C.)
- Fatma Beyazal Celiker
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (E.K., F.B.C.)
- Ahmet Alkan
- Kahramanmaraş Sütçü İmam University, Department of Electrical and Electronics Engineering, Kahramanmaraş 46050, Turkey (K.M.S., A.A.)
4
Yimit Y, Yasin P, Tuersun A, Abulizi A, Jia W, Wang Y, Nijiati M. Differentiation between cerebral alveolar echinococcosis and brain metastases with radiomics combined machine learning approach. Eur J Med Res 2023; 28:577. [PMID: 38071384] [PMCID: PMC10709961] [DOI: 10.1186/s40001-023-01550-4]
Abstract
BACKGROUND Cerebral alveolar echinococcosis (CAE) and brain metastases (BM) are similar in location and imaging appearance but require distinct treatments: CAE is typically treated with chemotherapy and surgery, while BM is managed with radiotherapy and targeted therapy for the primary malignancy. Accurate diagnosis is therefore crucial. PURPOSE This study evaluates the effectiveness of radiomics and machine learning techniques based on magnetic resonance imaging (MRI) in differentiating CAE from BM. METHODS We retrospectively analyzed MRI images of 130 patients (30 CAE and 100 BM) from Xinjiang Medical University First Affiliated Hospital and The First People's Hospital of Kashi Prefecture between January 2014 and December 2022. The dataset was divided into training (91 cases) and testing (39 cases) sets. Tumors were segmented in three dimensions by radiologists from contrast-enhanced T1WI images using the open-source software 3D Slicer. Features were extracted with PyRadiomics, and feature reduction was carried out using univariate analysis, correlation analysis, and least absolute shrinkage and selection operator (LASSO). Finally, we built five machine learning models (support vector machine, logistic regression, linear discriminant analysis, k-nearest neighbors classifier, and Gaussian naïve Bayes) and evaluated their performance via several metrics, including sensitivity (recall), specificity, positive predictive value (precision), negative predictive value, accuracy, and the area under the curve (AUC). RESULTS The AUCs of the support vector classifier (SVC), logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (KNN), and Gaussian naïve Bayes (NB) algorithms in the training (testing) sets are 0.99 (0.94), 1.00 (0.87), 0.98 (0.92), 0.97 (0.97), and 0.98 (0.93), respectively. Nested cross-validation demonstrated the robustness and generalizability of the models. The calibration plots and decision curve analysis additionally demonstrated the practical usefulness of these models in clinical practice, with lower bias toward different subgroups during decision-making. CONCLUSION The combination of radiomics and a machine learning approach based on contrast-enhanced T1WI images distinguishes CAE from BM well, holding promise for assisting doctors with accurate diagnosis and clinical decision-making.
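The model bake-off in this abstract — the same five classifier families compared by AUC on one feature matrix — has a compact generic form. The sketch below uses synthetic data shaped like the study's cohort (130 cases, ~30/100 class split); the feature counts, class weights, and default hyperparameters are assumptions, not the paper's settings.

```python
# Hedged sketch of a five-classifier AUC comparison (SVM, LR, LDA, KNN,
# Gaussian NB), the evaluation pattern described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in: 130 cases with roughly the abstract's 100/30 class split.
X, y = make_classification(n_samples=130, n_features=30, n_informative=8,
                           weights=[0.77, 0.23], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)  # ~91 train / 39 test

models = {
    "SVC": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

A single held-out split like this is the simplest version; the nested cross-validation the abstract mentions would wrap this loop in an outer resampling layer to estimate variance across splits.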
Affiliation(s)
- Yasen Yimit
- Medical Imaging Center, The First People's Hospital of Kashi (Kashgar) Prefecture, Kashi, 844000, People's Republic of China
- Parhat Yasin
- Department of Spine Surgery, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, China
- Abuduresuli Tuersun
- Medical Imaging Center, The First People's Hospital of Kashi (Kashgar) Prefecture, Kashi, 844000, People's Republic of China
- Abudoukeyoumujiang Abulizi
- Medical Imaging Center, The First People's Hospital of Kashi (Kashgar) Prefecture, Kashi, 844000, People's Republic of China
- Wenxiao Jia
- Medical Imaging Center, Xinjiang Medical University Affiliated First Hospital, Urumqi, 830054, People's Republic of China
- Yunling Wang
- Medical Imaging Center, Xinjiang Medical University Affiliated First Hospital, Urumqi, 830054, People's Republic of China
- Mayidili Nijiati
- Medical Imaging Center, The First People's Hospital of Kashi (Kashgar) Prefecture, Kashi, 844000, People's Republic of China.