1. Meijerink LM, Dunias ZS, Leeuwenberg AM, de Hond AAH, Jenkins DA, Martin GP, Sperrin M, Peek N, Spijker R, Hooft L, Moons KGM, van Smeden M, Schuit E. Updating methods for artificial intelligence-based clinical prediction models: a scoping review. J Clin Epidemiol 2025; 178:111636. [PMID: 39662644] [DOI: 10.1016/j.jclinepi.2024.111636]
Abstract
OBJECTIVES To give an overview of methods for updating artificial intelligence (AI)-based clinical prediction models based on new data. STUDY DESIGN AND SETTING We comprehensively searched Scopus and Embase up to August 2022 for articles that addressed developments, descriptions, or evaluations of prediction model updating methods. We specifically focused on articles in the medical domain involving AI-based prediction models that were updated based on new data, excluding regression-based updating methods as these have been extensively discussed elsewhere. We categorized and described the identified methods used to update the AI-based prediction model as well as the use cases in which they were used. RESULTS We included 78 articles. The majority of the included articles discussed updating for neural network methods (93.6%) with medical images as input data (65.4%). In many articles (51.3%) existing, pretrained models for broad tasks were updated to perform specialized clinical tasks. Other common reasons for model updating were to address changes in the data over time and cross-center differences; however, more unique use cases were also identified, such as updating a model from a broad population to a specific individual. We categorized the identified model updating methods into four categories: neural network-specific methods (described in 92.3% of the articles), ensemble-specific methods (2.5%), model-agnostic methods (9.0%), and other (1.3%). Variations of neural network-specific methods are further categorized based on the following: (1) the part of the original neural network that is kept, (2) whether and how the original neural network is extended with new parameters, and (3) to what extent the original neural network parameters are adjusted to the new data. The most frequently occurring method (n = 30) involved selecting the first layer(s) of an existing neural network, appending new, randomly initialized layers, and then optimizing the entire neural network. CONCLUSION We identified many ways to adjust or update AI-based prediction models based on new data, within a large variety of use cases. Updating methods for AI-based prediction models other than neural networks (eg, random forest) appear to be underexplored in clinical prediction research. PLAIN LANGUAGE SUMMARY AI-based prediction models are increasingly used in health care, helping clinicians with diagnosing diseases, guiding treatment decisions, and informing patients. However, these prediction models do not always work well when applied to hospitals, patient populations, or times different from those used to develop the models. Developing new models for every situation is neither practical nor desired, as it wastes resources, time, and existing knowledge. A more efficient approach is to adjust existing models to new contexts ('updating'), but there is limited guidance on how to do this for AI-based clinical prediction models. To address this, we reviewed 78 studies in detail to understand how researchers are currently updating AI-based clinical prediction models, and the types of situations in which these updating methods are used. Our findings provide a comprehensive overview of the available methods to update existing models. This is intended to serve as guidance and inspiration for researchers. Ultimately, this can lead to better reuse of existing models and improve the quality and efficiency of AI-based prediction models in health care.
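A minimal PyTorch sketch of the most frequently identified updating pattern (keep the first layers of an existing network, append new, randomly initialized layers, then optimize the entire network on the new data); the ResNet-18 backbone, the layer split, and the two-class target are illustrative assumptions, not details taken from the review.

```python
import torch
import torch.nn as nn
from torchvision import models

# Existing, pretrained model (stand-in for any previously developed network).
pretrained = models.resnet18(weights="IMAGENET1K_V1")

# 1) Keep the first layers of the original network (drop its pooling/output head).
kept_layers = nn.Sequential(*list(pretrained.children())[:-2])

# 2) Append new, randomly initialized layers for the specialized clinical task.
updated_model = nn.Sequential(
    kept_layers,
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 64),
    nn.ReLU(),
    nn.Linear(64, 2),          # hypothetical two-class clinical outcome
)

# 3) Optimize all parameters (retained and new) on the new data.
optimizer = torch.optim.Adam(updated_model.parameters(), lr=1e-4)
```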
Affiliation(s)
- Lotta M Meijerink
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
- Zoë S Dunias
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Artuur M Leeuwenberg
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Anne A H de Hond
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- David A Jenkins
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Glen P Martin
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Matthew Sperrin
- Division of Informatics, Imaging and Data Sciences, University of Manchester, Manchester, United Kingdom
- Niels Peek
- Department of Public Health and Primary Care, The Healthcare Improvement Studies Institute, University of Cambridge, Cambridge, United Kingdom
- René Spijker
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Lotty Hooft
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Ewoud Schuit
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
2. Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024; 60:2272-2289. [PMID: 38243677] [DOI: 10.1002/jmri.29247]
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA
- Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri
- Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri
- Department of Neuroscience, University of California San Diego, San Diego, California, USA
3. Mao Y, Jiang LP, Wang JL, Diao YH, Chen FQ, Zhang WP, Chen L, Liu ZX. Multi-feature Fusion Network on Gray Scale Ultrasonography: Effective Differentiation of Adenolymphoma and Pleomorphic Adenoma. Acad Radiol 2024; 31:4396-4407. [PMID: 38871552] [DOI: 10.1016/j.acra.2024.05.023]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning radiomics graph network (DLRN) that integrates deep learning features extracted from gray scale ultrasonography, radiomics features and clinical features, for distinguishing parotid pleomorphic adenoma (PA) from adenolymphoma (AL). MATERIALS AND METHODS A total of 287 patients (162 in the training cohort, 70 in the internal validation cohort and 55 in the external validation cohort) from two centers with histologically confirmed PA or AL were enrolled. Deep transfer learning features and radiomics features extracted from gray scale ultrasound images were input to machine learning classifiers, including logistic regression (LR), support vector machines (SVM), KNN, RandomForest (RF), ExtraTrees, XGBoost, LightGBM, and MLP, to construct deep transfer learning (DTL) models and Rad models, respectively. Deep learning radiomics (DLR) models were constructed by integrating the two feature sets, and DLR signatures were generated. Clinical features were further combined with the signatures to develop a DLRN model. The performance of these models was evaluated using receiver operating characteristic (ROC) curve analysis, calibration, decision curve analysis (DCA), and the Hosmer-Lemeshow test. RESULTS In the internal and external validation cohorts, compared with the Clinic (AUC=0.767 and 0.777), Rad (AUC=0.841 and 0.748), DTL (AUC=0.740 and 0.825) and DLR (AUC=0.863 and 0.859) models, the DLRN model showed the greatest discriminatory ability (AUC=0.908 and 0.908). CONCLUSION The DLRN model built on gray scale ultrasonography significantly improved the diagnostic performance for benign salivary gland tumors. It can provide clinicians with a non-invasive and accurate diagnostic approach, which holds important clinical significance and value. Ensembling multiple models also helped alleviate overfitting on the small dataset compared with using ResNet50 alone.
Affiliation(s)
- Yi Mao
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Li-Ping Jiang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Jing-Ling Wang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Yu-Hong Diao
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China.
- Fang-Qun Chen
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Wei-Ping Zhang
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Li Chen
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China.
- Zhi-Xing Liu
- Department of Ultrasonography, The First Affiliated Hospital of Nanchang University, Nanchang, China; Department of Ultrasonography, GanJiang New District Peoples Hospital, Nanchang, China.
4. Ammari S, Quillent A, Elvira V, Bidault F, Garcia GCTE, Hartl DM, Balleyguier C, Lassau N, Chouzenoux É. Using Machine Learning on MRI Radiomics to Diagnose Parotid Tumours Before Comparing Performance with Radiologists: A Pilot Study. Journal of Imaging Informatics in Medicine 2024. [PMID: 39390287] [DOI: 10.1007/s10278-024-01255-y]
Abstract
The parotid glands are the largest of the major salivary glands. They can harbour both benign and malignant tumours. Preoperative work-up relies on MR images and fine needle aspiration biopsy, but these diagnostic tools have low sensitivity and specificity, often leading to surgery for diagnostic purposes. The aim of this paper is (1) to develop a machine learning algorithm based on MR image characteristics to automatically classify parotid gland tumours and (2) to compare its results with the diagnoses of junior and senior radiologists in order to evaluate its utility in routine practice. While automatic algorithms applied to parotid tumour classification have been developed in the past, we believe that our study is one of the first to leverage four different MRI sequences and propose a comparison with clinicians. In this study, we leverage data from a cohort of 134 patients treated for benign or malignant parotid tumours. Using radiomics extracted from the MR images of the gland, we train a random forest and a logistic regression to predict the corresponding histopathological subtypes. On the test set, the best results are given by the random forest: we obtain a 0.720 accuracy, a 0.860 specificity, and a 0.720 sensitivity over all histopathological subtypes, with an average AUC of 0.838. When considering the discrimination between benign and malignant tumours, the algorithm results in a 0.760 accuracy and a 0.769 AUC, both on the test set. Moreover, the clinical experiment shows that our model helps to improve the diagnostic abilities of junior radiologists, as their sensitivity and accuracy rose by 6% when using our proposed method. This algorithm may also be useful for the training of physicians. Radiomics with a machine learning algorithm may help improve discrimination between benign and malignant parotid tumours, decreasing the need for diagnostic surgery. Further studies are warranted to validate our algorithm for routine use.
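A hedged sketch of the modelling step described above: tabular radiomics features are fed to a random forest and a logistic regression, and both are compared by accuracy and AUC on a held-out test set. The file name, column names, and split proportions are assumptions.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("parotid_radiomics.csv")               # one row per patient (assumed layout)
X, y = df.drop(columns=["malignant"]), df["malignant"]  # assumed coding: 0 = benign, 1 = malignant
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
                  ("logistic regression", LogisticRegression(max_iter=5000))]:
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```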
Affiliation(s)
- Samy Ammari
- Biomaps, UMR1281 INSERM, CEA, CNRS, Université Paris-Saclay, 94805, Villejuif, France
- Department of Imaging, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Arnaud Quillent
- Centre de Vision Numérique, OPIS, CentraleSupélec, Inria, Université Paris-Saclay, 91190, Gif-sur-Yvette, France
- Víctor Elvira
- School of Mathematics, University of Edinburgh, Edinburgh, EH9 3FD, UK
- François Bidault
- Biomaps, UMR1281 INSERM, CEA, CNRS, Université Paris-Saclay, 94805, Villejuif, France
- Department of Imaging, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Gabriel C T E Garcia
- Department of Imaging, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Dana M Hartl
- Department of Otolaryngology Head and Neck Surgery, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Corinne Balleyguier
- Biomaps, UMR1281 INSERM, CEA, CNRS, Université Paris-Saclay, 94805, Villejuif, France
- Department of Imaging, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Nathalie Lassau
- Biomaps, UMR1281 INSERM, CEA, CNRS, Université Paris-Saclay, 94805, Villejuif, France
- Department of Imaging, Gustave Roussy Cancer Campus, Université Paris Saclay, 94805, Villejuif, France
- Émilie Chouzenoux
- Centre de Vision Numérique, OPIS, CentraleSupélec, Inria, Université Paris-Saclay, 91190, Gif-sur-Yvette, France.
5. Bourdillon AT. Computer Vision-Radiomics & Pathognomics. Otolaryngol Clin North Am 2024; 57:719-751. [PMID: 38910065] [DOI: 10.1016/j.otc.2024.05.003]
Abstract
The role of computer vision in extracting radiographic (radiomics) and histopathologic (pathognomics) features is an extension of molecular biomarkers that have been foundational to our understanding across the spectrum of head and neck disorders. Especially within head and neck cancers, machine learning and deep learning applications have yielded advances in the characterization of tumor features, nodal features, and various outcomes. This review aims to provide an overview of the landscape of radiomic and pathognomic applications and to inform future work addressing remaining gaps. Novel methodologies will be needed to integrate multidimensional data inputs for examining disease features and to comprehensively guide prognosis and, ultimately, clinical management.
Affiliation(s)
- Alexandra T Bourdillon
- Department of Otolaryngology-Head & Neck Surgery, University of California-San Francisco, San Francisco, CA 94115, USA.
6. He Y, Zheng B, Peng W, Chen Y, Yu L, Huang W, Qin G. An ultrasound-based ensemble machine learning model for the preoperative classification of pleomorphic adenoma and Warthin tumor in the parotid gland. Eur Radiol 2024; 34:6862-6876. [PMID: 38570381] [DOI: 10.1007/s00330-024-10719-2]
Abstract
OBJECTIVES The preoperative classification of pleomorphic adenomas (PMA) and Warthin tumors (WT) in the parotid gland plays an essential role in determining therapeutic strategies. This study aims to develop and validate an ultrasound-based ensemble machine learning (USEML) model, employing nonradiative and noninvasive features to differentiate PMA from WT. METHODS A total of 203 patients with histologically confirmed PMA or WT who underwent parotidectomy from two centers were enrolled. Clinical factors, ultrasound (US) features, and radiomic features were extracted to develop three types of machine learning model: clinical models, US models, and USEML models. The diagnostic performance of the USEML model, as well as that of physicians based on experience, was evaluated and validated using receiver operating characteristic (ROC) curves in internal and external validation cohorts. DeLong's test was used for comparisons of AUCs. SHAP values were also utilized to explain the classification model. RESULTS The USEML model achieved the highest AUC of 0.891 (95% CI, 0.774-0.961), surpassing the AUCs of both the US (0.847; 95% CI, 0.720-0.932) and clinical (0.814; 95% CI, 0.682-0.908) models. The USEML model also outperformed physicians in both internal and external validation datasets (both p < 0.05). The sensitivity, specificity, negative predictive value, and positive predictive value of the USEML model and physician experience were 89.3%/75.0%, 87.5%/54.2%, 87.5%/65.6%, and 89.3%/65.0%, respectively. CONCLUSIONS The USEML model, incorporating clinical factors, ultrasound factors, and radiomic features, demonstrated efficient performance in distinguishing PMA from WT in the parotid gland. CLINICAL RELEVANCE STATEMENT This study developed a machine learning model for preoperative diagnosis of pleomorphic adenoma and Warthin tumor in the parotid gland based on clinical, ultrasound, and radiomic features. Furthermore, it outperformed physicians in an external validation dataset, indicating its potential for clinical application. KEY POINTS • Differentiating pleomorphic adenoma (PMA) and Warthin tumor (WT) affects management decisions and is currently done by invasive biopsy. • Integration of US-radiomic, clinical, and ultrasound findings in a machine learning model results in improved diagnostic accuracy. • The ultrasound-based ensemble machine learning (USEML) model consistently outperforms physicians, suggesting its potential applicability in clinical settings.
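One simple way to realize the kind of multi-source ensemble described above is to fit one classifier per feature group (clinical, ultrasound, radiomics) and average their predicted probabilities; the sketch below uses synthetic data and a gradient-boosting base learner purely for illustration and is not the authors' USEML pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def group_probabilities(X_train, y_train, X_test):
    """Fit one classifier on a single feature group and return test-set probabilities."""
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    return model.predict_proba(X_test)[:, 1]

rng = np.random.default_rng(0)
y_train, y_test = rng.integers(0, 2, 150), rng.integers(0, 2, 50)
dims = {"clinical": 5, "ultrasound": 8, "radiomics": 30}        # assumed feature counts
train = {g: rng.normal(size=(150, d)) for g, d in dims.items()}
test = {g: rng.normal(size=(50, d)) for g, d in dims.items()}

# Soft ensemble: average the per-group probabilities.
ensemble_prob = np.mean([group_probabilities(train[g], y_train, test[g]) for g in dims], axis=0)
print("ensemble AUC:", round(roc_auc_score(y_test, ensemble_prob), 3))
```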
Affiliation(s)
- Yanping He
- Department of Medical Ultrasonics, The First People's Hospital of Foshan, No. 81, Lingnan Avenue North, Foshan, 528000, China
- Bowen Zheng
- Department of Radiology, Nanfang Hospital, Southern Medical University, 1838 Guangzhou Avenue North, Guangzhou, 510515, China
- Weiwei Peng
- Department of Medical Ultrasonics, The First People's Hospital of Foshan, No. 81, Lingnan Avenue North, Foshan, 528000, China
- Yongyu Chen
- Department of Medical Ultrasonics, The First People's Hospital of Foshan, No. 81, Lingnan Avenue North, Foshan, 528000, China
- Lihui Yu
- Department of Medical Ultrasonics, The First People's Hospital of Foshan, No. 81, Lingnan Avenue North, Foshan, 528000, China
- Weijun Huang
- Department of Medical Ultrasonics, The First People's Hospital of Foshan, No. 81, Lingnan Avenue North, Foshan, 528000, China.
- Genggeng Qin
- Department of Radiology, Nanfang Hospital, Southern Medical University, 1838 Guangzhou Avenue North, Guangzhou, 510515, China.
- Medical Imaging Center, Ganzhou People's Hospital, 16th Meiguan Avenue, Ganzhou, 34100, China.
7. Sunnetci KM, Kaba E, Celiker FB, Alkan A. MR Image Fusion-Based Parotid Gland Tumor Detection. Journal of Imaging Informatics in Medicine 2024. [PMID: 39327379] [DOI: 10.1007/s10278-024-01137-3]
Abstract
The differentiation of benign and malignant parotid gland tumors is of major significance as it directly affects the treatment process. In addition, it is also a vital task in terms of early and accurate diagnosis of parotid gland tumors and the determination of treatment planning accordingly. As in other diseases, the differentiation of tumor types involves several challenging, time-consuming, and laborious processes. In the study, Magnetic Resonance (MR) images of 114 patients with parotid gland tumors are used for training and testing purposes by Image Fusion (IF). After the Apparent Diffusion Coefficient (ADC), Contrast-enhanced T1-w (T1C-w), and T2-w sequences are cropped, IF (ADC, T1C-w), IF (ADC, T2-w), IF (T1C-w, T2-w), and IF (ADC, T1C-w, T2-w) datasets are obtained for different combinations of these sequences using a two-dimensional Discrete Wavelet Transform (DWT)-based fusion technique. For each of these four datasets, ResNet18, GoogLeNet, and DenseNet-201 architectures are trained separately, and thus, 12 models are obtained in the study. A Graphical User Interface (GUI) application that contains the most successful of these trained architectures for each data is also designed to support the users. The designed GUI application not only allows the fusing of different sequence images but also predicts whether the label of the fused image is benign or malignant. The results show that the DenseNet-201 models for IF (ADC, T1C-w), IF (ADC, T2-w), and IF (ADC, T1C-w, T2-w) are better than the others, with accuracies of 95.45%, 95.96%, and 92.93%, respectively. It is also noted in the study that the most successful model for IF (T1C-w, T2-w) is ResNet18, and its accuracy is equal to 94.95%.
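A hedged sketch of a two-dimensional DWT-based fusion of two co-registered, same-sized MR sequences, in the spirit of the IF (ADC, T2-w) inputs described above; the wavelet, the fusion rule (coefficient averaging), and the random stand-in images are assumptions.

```python
import numpy as np
import pywt

def dwt2_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    """Fuse two grayscale images by averaging their 2D DWT coefficients."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    fused_coeffs = ((cA_a + cA_b) / 2,
                    ((cH_a + cH_b) / 2, (cV_a + cV_b) / 2, (cD_a + cD_b) / 2))
    return pywt.idwt2(fused_coeffs, wavelet)

adc = np.random.rand(256, 256)      # stand-in for a cropped ADC slice
t2w = np.random.rand(256, 256)      # stand-in for a cropped T2-w slice
fused = dwt2_fuse(adc, t2w)         # fused image passed to the CNN classifiers
```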
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Department of Electrical and Electronics Engineering, Osmaniye Korkut Ata University, Osmaniye, 80000, Turkey
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, 46050, Turkey
- Esat Kaba
- Department of Radiology, Recep Tayyip Erdogan University, Rize, 53100, Turkey
- Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, 46050, Turkey.
8. Li J, Weng J, Du W, Gao M, Cui H, Jiang P, Wang H, Peng X. Machine learning-assisted diagnosis of parotid tumor by using contrast-enhanced CT imaging features. Journal of Stomatology, Oral and Maxillofacial Surgery 2024; 126:102030. [PMID: 39233054] [DOI: 10.1016/j.jormas.2024.102030]
Abstract
PURPOSE This study aims to develop a machine learning diagnostic model for parotid gland tumors based on preoperative contrast-enhanced CT imaging features to assist in clinical decision-making. MATERIALS AND METHODS Clinical data and contrast-enhanced CT images of 144 patients with parotid gland tumors were collected at the Peking University School of Stomatology Hospital from January 2019 to December 2022. The 3D Slicer software was used to accurately annotate the tumor regions, after which the correlation between multiple preoperative contrast-enhanced CT imaging features and the benign or malignant nature of the tumor, as well as the type of benign tumor, was explored. A prediction model was constructed using the k-nearest neighbors (KNN) algorithm. RESULTS Through feature selection, four key features (morphology, adjacent structure invasion, boundary, and suspicious cervical lymph node metastasis) were identified as crucial for preoperative discrimination between benign and malignant tumors. The KNN prediction model achieved an accuracy of 94.44%. Additionally, six features (arterial phase CT value, age, delayed phase CT value, pre-contrast CT value, venous phase CT value, and gender) were also significant for the classification of benign tumors, with a KNN prediction model accuracy of 95.24%. CONCLUSION The machine learning model based on preoperative contrast-enhanced CT imaging features can effectively discriminate between benign and malignant parotid gland tumors and classify benign tumors, providing valuable reference information for clinicians.
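A hedged sketch of the KNN classification step described above, using a toy table with the four selected imaging features; the feature coding, the value of k, and the scaling step are assumptions rather than the authors' exact setup.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical, binary-coded CT features for eight patients.
df = pd.DataFrame({
    "morphology":            [0, 1, 1, 0, 1, 0, 0, 1],
    "adjacent_invasion":     [0, 1, 0, 0, 1, 0, 0, 1],
    "ill_defined_boundary":  [0, 1, 0, 0, 1, 0, 0, 1],
    "suspicious_lymph_node": [0, 1, 0, 0, 1, 0, 0, 1],
    "malignant":             [0, 1, 0, 0, 1, 0, 0, 1],   # toy labels
})
X, y = df.drop(columns=["malignant"]), df["malignant"]
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
print("cross-validated accuracy:", cross_val_score(knn, X, y, cv=2).mean())
```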
Affiliation(s)
- Jiaqi Li
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- Jiuling Weng
- Laboratory of Haihui Data Analysis, School of Mathematical Sciences, Beihang University, Beijing, China
- Wen Du
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- Min Gao
- Department of Geriatric Dentistry, Peking University School and Hospital of Stomatology, Beijing, China
- Haobo Cui
- Laboratory of Haihui Data Analysis, School of Mathematical Sciences, Beihang University, Beijing, China
- Pingping Jiang
- The Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong 250012, China
- Haihui Wang
- Laboratory of Haihui Data Analysis, School of Mathematical Sciences, Beihang University, Beijing, China.
- Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China.
9. Wang HC, Chen CS, Kuo CC, Huang TY, Kuo KH, Chuang TC, Lin YR, Chung HW. Comparative assessment of established and deep learning-based segmentation methods for hippocampal volume estimation in brain magnetic resonance imaging analysis. NMR in Biomedicine 2024; 37:e5169. [PMID: 38712667] [DOI: 10.1002/nbm.5169]
Abstract
In this study, our objective was to assess the performance of two deep learning-based hippocampal segmentation methods, SynthSeg and TigerBx, which are readily available to the public. We contrasted their performance with that of two established techniques, FreeSurfer-Aseg and FSL-FIRST, using three-dimensional T1-weighted MRI scans (n = 1447) procured from public databases. Our evaluation focused on the accuracy and reproducibility of these tools in estimating hippocampal volume. The findings suggest that both SynthSeg and TigerBx are on a par with Aseg and FIRST in terms of segmentation accuracy and reproducibility, but offer a significant advantage in processing speed, generating results in less than 1 min compared with several minutes to hours for the latter tools. In terms of Alzheimer's disease classification based on the hippocampal atrophy rate, SynthSeg and TigerBx exhibited superior performance. In conclusion, we evaluated the capabilities of two deep learning-based segmentation techniques. The results underscore their potential value in clinical and research environments, particularly when investigating neurological conditions associated with hippocampal structures.
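A hedged sketch of how a hippocampal volume is typically read off a segmentation produced by tools such as those compared above: count the labelled voxels and multiply by the voxel volume. The file name is hypothetical, and the label values (17/53, the FreeSurfer-style left/right hippocampus codes) are an assumption about the segmentation's labelling scheme.

```python
import nibabel as nib
import numpy as np

seg = nib.load("aseg.nii.gz")                            # segmentation volume (assumed path)
labels = np.asarray(seg.dataobj)
voxel_mm3 = float(np.prod(seg.header.get_zooms()[:3]))   # voxel volume in mm^3

for name, label in [("left hippocampus", 17), ("right hippocampus", 53)]:
    volume_ml = np.count_nonzero(labels == label) * voxel_mm3 / 1000.0
    print(f"{name}: {volume_ml:.2f} mL")
```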
Affiliation(s)
- Hsi-Chun Wang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Chia-Sho Chen
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Chung-Chin Kuo
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Teng-Yi Huang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Kuei-Hong Kuo
- Division of Medical Image, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tzu-Chao Chuang
- Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan
- Yi-Ru Lin
- Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hsiao-Wen Chung
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
10. Xu Z, Dai Y, Liu F, Wu B, Chen W, Shi L. Swin MoCo: Improving parotid gland MRI segmentation using contrastive learning. Med Phys 2024; 51:5295-5307. [PMID: 38749016] [DOI: 10.1002/mp.17128]
Abstract
BACKGROUND Segmentation of the parotid glands and tumors on MR images is essential for treating parotid gland tumors. However, segmentation of the parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. PURPOSE The lack of large and well-annotated datasets limits the development of deep learning in medical imaging. As an unsupervised learning method, contrastive learning has seen rapid development in recent years; it makes better use of unlabeled images and holds promise for improving parotid gland segmentation. METHODS We propose Swin MoCo, a momentum contrastive learning network with Swin Transformer as its backbone. An ImageNet-supervised model supplies the initial weights of Swin MoCo, thus improving training on small medical image datasets. RESULTS Swin MoCo trained with transfer learning improves parotid gland segmentation to 89.78% DSC, 85.18% mIoU, 3.60 HD, and 90.08% mAcc. On the Synapse multi-organ computed tomography (CT) dataset, using Swin MoCo as the pre-trained model of Swin-Unet yields 79.66% DSC and 12.73 HD, which outperforms the best result of Swin-Unet on the Synapse dataset. CONCLUSIONS The above improvements require only 4 h of training on a single NVIDIA Tesla V100, which is computationally cheap. Swin MoCo provides new approaches to improve the performance of tasks on small datasets. The code is publicly available at https://github.com/Zian-Xu/Swin-MoCo.
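A hedged sketch of the momentum-contrast idea underlying Swin MoCo: the key encoder is an exponential moving average of the query encoder rather than being updated by gradients. The tiny encoder and the momentum value are illustrative assumptions; the actual implementation is in the repository linked above.

```python
import copy
import torch

def momentum_update(query_encoder: torch.nn.Module, key_encoder: torch.nn.Module,
                    m: float = 0.999) -> None:
    """key = m * key + (1 - m) * query, applied parameter-wise without gradients."""
    with torch.no_grad():
        for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
            k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)

query_encoder = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3), torch.nn.ReLU())
key_encoder = copy.deepcopy(query_encoder)        # initialized from the query encoder
for p in key_encoder.parameters():
    p.requires_grad_(False)                       # the key encoder receives no gradient updates

momentum_update(query_encoder, key_encoder)       # called once per training step
```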
Affiliation(s)
- Zi'an Xu
- Northeastern University, Shenyang, China
- Yin Dai
- Northeastern University, Shenyang, China
- Fayu Liu
- China Medical University, Shenyang, China
- Boyuan Wu
- Northeastern University, Shenyang, China
- Lifu Shi
- Liaoning Jiayin Medical Technology Co., Shenyang, China
11. Rao Y, Ma Y, Wang J, Xiao W, Wu J, Shi L, Guo L, Fan L. Performance of radiomics in the differential diagnosis of parotid tumors: a systematic review. Front Oncol 2024; 14:1383323. [PMID: 39119093] [PMCID: PMC11306159] [DOI: 10.3389/fonc.2024.1383323]
Abstract
Purpose A systematic review and meta-analysis were conducted to evaluate the diagnostic precision of radiomics in the differential diagnosis of parotid tumors, considering the increasing utilization of radiomics in tumor diagnosis. Although some researchers have attempted to apply radiomics in this context, there is ongoing debate regarding its accuracy. Methods The PubMed, Cochrane, EMBASE, and Web of Science databases were systematically searched up to May 29, 2024. The quality of the included primary studies was assessed using the Radiomics Quality Score (RQS) checklist. The meta-analysis was performed utilizing a bivariate mixed-effects model. Results A total of 39 primary studies were included. In the validation set, the machine learning models relying on MRI radiomics for diagnosing malignant tumors of the parotid gland demonstrated a sensitivity of 0.80 [95% CI: 0.74, 0.86] and an SROC of 0.89 [95% CI: 0.27-0.99]. The machine learning models based on MRI radiomics for diagnosing malignant tumors of the parotid gland exhibited a sensitivity of 0.83 [95% CI: 0.76, 0.88] and an SROC of 0.89 [95% CI: 0.17-1.00] in the validation set. The models also demonstrated high predictive accuracy for benign lesions. Conclusion There is great potential for radiomics-based models to improve the accuracy of diagnosing benign and malignant tumors of the parotid gland. To further enhance this potential, future studies should consider implementing standardized radiomics-based features, adopting more robust feature selection methods, and utilizing advanced model development tools. These measures can significantly improve the diagnostic accuracy of artificial intelligence algorithms in distinguishing between benign and malignant tumors of the parotid gland. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023434931.
Affiliation(s)
- Yilin Rao
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Yuxi Ma
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Jinghan Wang
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Weiwei Xiao
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Jiaqi Wu
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Liang Shi
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Ling Guo
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Liyuan Fan
- Department of Prosthodontics, The Affiliated Stomatology Hospital, Southwest Medical University, Luzhou, Sichuan, China
- Luzhou Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, The Affiliated Stomatological Hospital, Southwest Medical University, Luzhou, Sichuan, China
12. Li W, Guo E, Zhao H, Li Y, Miao L, Liu C, Sun W. Evaluation of transfer ensemble learning-based convolutional neural network models for the identification of chronic gingivitis from oral photographs. BMC Oral Health 2024; 24:814. [PMID: 39020332] [PMCID: PMC11256452] [DOI: 10.1186/s12903-024-04460-x]
Abstract
BACKGROUND To evaluate the performances of several advanced deep convolutional neural network models (AlexNet, VGG, GoogLeNet, ResNet) based on ensemble learning for recognizing chronic gingivitis from screening oral images. METHODS A total of 683 intraoral clinical images acquired from 134 volunteers were used to construct the database and evaluate the models. Four deep ConvNet models were developed using ensemble learning and outperformed a single model. The performances of the different models were evaluated by comparing the accuracy and sensitivity for recognizing the existence of gingivitis from intraoral images. RESULTS The ResNet model achieved an area under the curve (AUC) value of 97%, while the AUC values for the GoogLeNet, AlexNet, and VGG models were 94%, 92%, and 89%, respectively. Although the ResNet and GoogLeNet models performed best in classifying gingivitis from images, the sensitivity outcomes were not significantly different among the ResNet, GoogLeNet, and Alexnet models (p>0.05). However, the sensitivity of the VGGNet model differed significantly from those of the other models (p < 0.001). CONCLUSION The ResNet and GoogLeNet models show promise for identifying chronic gingivitis from images. These models can help doctors diagnose periodontal diseases efficiently or based on self-examination of the oral cavity by patients.
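A hedged sketch of one common ensembling rule for CNN classifiers like those compared above: average the softmax probabilities of several networks and take the class with the highest mean probability. The untrained torchvision backbones and the dummy batch are placeholders used only to show the mechanics.

```python
import torch
from torchvision import models

nets = [models.alexnet(num_classes=2), models.vgg11(num_classes=2),
        models.googlenet(num_classes=2), models.resnet18(num_classes=2)]

images = torch.rand(4, 3, 224, 224)                 # dummy batch of intraoral photographs
with torch.no_grad():
    probs = torch.stack([torch.softmax(net.eval()(images), dim=1) for net in nets])

ensemble_prob = probs.mean(dim=0)                   # soft-voting ensemble
prediction = ensemble_prob.argmax(dim=1)            # assumed coding: 0 = healthy, 1 = gingivitis
```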
Affiliation(s)
- Wen Li
- Department of Cariology and Endodontics, Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Research Institute of Stomatology, Nanjing University, Nanjing, China
- Enting Guo
- Division of Computer Science, The University of Aizu, Aizu, Japan
- Hong Zhao
- Division of Computer Science, The University of Aizu, Aizu, Japan
- Yuyang Li
- Department of Cariology and Endodontics, Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Research Institute of Stomatology, Nanjing University, Nanjing, China
- Leiying Miao
- Department of Cariology and Endodontics, Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Research Institute of Stomatology, Nanjing University, Nanjing, China
- Chao Liu
- Department of Orthodontic, Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Research Institute of Stomatology, Nanjing University, Nanjing, China.
- Weibin Sun
- Department of Periodontics, Nanjing Stomatological Hospital, Affiliated Hospital of Medical School, Research Institute of Stomatology, Nanjing University, Nanjing, China.
13. Ding H, Wu C, Su Z, Wang T, Zhuang S, Li C, Li Y. Current landscape and future trends in salivary gland oncology research-a bibliometric evaluation. Gland Surg 2024; 13:969-986. [PMID: 39015723] [PMCID: PMC11247595] [DOI: 10.21037/gs-24-94]
Abstract
Background The salivary glands are susceptible to both endogenous and exogenous influences, potentially resulting in the development of tumors. With the wide application of various technologies, research in this area has experienced rapid growth. Therefore, researchers must identify and characterize the current research hot topics to grasp the forefront of developments in the dynamic field of salivary gland oncology. The objective of this study was to thoroughly assess the current status and identify potential future research directions in salivary gland oncology. Methods The relevant salivary gland oncology dataset was obtained from the Web of Science Core Collection (WOSCC) database. Subsequently, VOSviewer and CiteSpace were employed for further evaluation. Results A total of 9,695 manuscripts were extracted and downloaded from the WOSCC database. Our findings revealed a substantial surge in research volume over the past 12 years. The analysis of individual researchers revealed that Abbas Agami showed unparalleled dedication, with over 180 publications, and that RH Spiro had the highest cocitation count, confirming his status as a key figure in the field. The detection of bursts in secretory carcinoma and the integration of artificial intelligence in salivary oncology have attracted increasing interest. Notably, there is a discernible trend towards increased research engagement in the study of salivary gland malignancies. Conclusions This study not only evaluated the current research landscape in salivary gland oncology but also anticipated future trends. These insights could contribute to the advancement of knowledge and policymaking in salivary gland oncology.
Affiliation(s)
- Haoran Ding
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Chenzhou Wu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Zhifei Su
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Tianyi Wang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Shiyong Zhuang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Chunjie Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Yi Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Head and Neck Oncology Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
14. Faur AC, Buzaș R, Lăzărescu AE, Ghenciu LA. Current Developments in Diagnosis of Salivary Gland Tumors: From Structure to Artificial Intelligence. Life (Basel) 2024; 14:727. [PMID: 38929710] [PMCID: PMC11204840] [DOI: 10.3390/life14060727]
Abstract
Salivary gland tumors are uncommon neoplasms with variable incidence, heterogeneous histologies and unpredictable biological behaviour. Most tumors are located in the parotid gland. Benign salivary tumors represent 54-79% of cases, and pleomorphic adenoma is frequently diagnosed in this group. The most commonly diagnosed malignant salivary gland tumors are adenoid cystic carcinomas and mucoepidermoid carcinomas. Because of their diversity and overlapping features, these tumors require complex methods of evaluation. Diagnostic procedures include imaging techniques combined with clinical examination, fine needle aspiration and histopathological investigation of the excised specimens. This narrative review describes the advances in the diagnostic methods for these unusual tumors, from histomorphology to artificial intelligence algorithms.
Affiliation(s)
- Alexandra Corina Faur
- Department of Anatomy and Embriology, ”Victor Babeș” University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timișoara, Romania; (A.C.F.); (A.E.L.)
- Roxana Buzaș
- Department of Internal Medicine I, Center for Advanced Research in Cardiovascular Pathology and Hemostaseology, ”Victor Babeș” University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timișoara, Romania
- Adrian Emil Lăzărescu
- Department of Anatomy and Embriology, ”Victor Babeș” University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timișoara, Romania; (A.C.F.); (A.E.L.)
- Laura Andreea Ghenciu
- Department of Functional Sciences, ”Victor Babeș”University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timișoara, Romania;
15. Wang Y, Gao J, Yin Z, Wen Y, Sun M, Han R. Differentiation of benign and malignant parotid gland tumors based on the fusion of radiomics and deep learning features on ultrasound images. Front Oncol 2024; 14:1384105. [PMID: 38803533] [PMCID: PMC11128676] [DOI: 10.3389/fonc.2024.1384105]
Abstract
Objective The pathological classification and imaging manifestation of parotid gland tumors are complex, while accurate preoperative identification plays a crucial role in clinical management and prognosis assessment. This study aims to construct and compare the performance of clinical models, traditional radiomics models, deep learning (DL) models, and deep learning radiomics (DLR) models based on ultrasound (US) images in differentiating between benign parotid gland tumors (BPGTs) and malignant parotid gland tumors (MPGTs). Methods Retrospective analysis was conducted on 526 patients with confirmed PGTs after surgery, who were randomly divided into a training set and a testing set in the ratio of 7:3. Traditional radiomics and three DL models (DenseNet121, VGG19, ResNet50) were employed to extract handcrafted radiomics (HCR) features and DL features followed by feature fusion. Seven machine learning classifiers including logistic regression (LR), support vector machine (SVM), RandomForest, ExtraTrees, XGBoost, LightGBM and multi-layer perceptron (MLP) were combined to construct predictive models. The most optimal model was integrated with clinical and US features to develop a nomogram. Receiver operating characteristic (ROC) curve was employed for assessing performance of various models while the clinical utility was assessed by decision curve analysis (DCA). Results The DLR model based on ExtraTrees demonstrated superior performance with AUC values of 0.943 (95% CI: 0.918-0.969) and 0.916 (95% CI: 0.861-0.971) for the training and testing set, respectively. The combined model DLR nomogram (DLRN) further enhanced the performance, resulting in AUC values of 0.960 (95% CI: 0.940- 0.979) and 0.934 (95% CI: 0.876-0.991) for the training and testing sets, respectively. DCA analysis indicated that DLRN provided greater clinical benefits compared to other models. Conclusion DLRN based on US images shows exceptional performance in distinguishing BPGTs and MPGTs, providing more reliable information for personalized diagnosis and treatment plans in clinical practice.
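A hedged sketch of the decision curve analysis (DCA) used above to judge clinical utility: at a threshold probability pt, the net benefit is TP/N - (FP/N) * pt/(1 - pt). The labels and predicted probabilities below are synthetic placeholders, not study data.

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, pt: float) -> float:
    """Net benefit of treating patients whose predicted probability exceeds pt."""
    treat = y_prob >= pt
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    n = len(y_true)
    return tp / n - (fp / n) * pt / (1 - pt)

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)
y_prob = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 200), 0, 1)   # toy model output
for pt in (0.1, 0.3, 0.5):
    print(f"threshold {pt:.1f}: net benefit {net_benefit(y_true, y_prob, pt):.3f}")
```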
Affiliation(s)
- Ruoling Han
- Department of Ultrasound, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
16. Zhang R, Wong LM, So TY, Cai Z, Deng Q, Tsang YM, Ai QYH, King AD. Deep learning for the automatic detection and segmentation of parotid gland tumors on MRI. Oral Oncol 2024; 152:106796. [PMID: 38615586] [DOI: 10.1016/j.oraloncology.2024.106796]
Abstract
OBJECTIVES Parotid gland tumors (PGTs) often occur as incidental findings on magnetic resonance images (MRI) that may be overlooked. This study aimed to construct and validate a deep learning model to automatically identify parotid glands (PGs) with a PGT from normal PGs, and, in those with a PGT, to segment the tumor. MATERIALS AND METHODS The nnUNet combined with a PG-specific post-processing procedure was used to develop the deep learning model, trained on T1-weighted images (T1WI) in 311 patients (180 PGs with tumors and 442 normal PGs) and fat-suppressed (FS)-T2WI in 257 patients (125 PGs with tumors and 389 normal PGs), for detecting and segmenting PGTs with five-fold cross-validation. An additional validation set separated by time, comprising T1WI in 34 and FS-T2WI in 41 patients, was used to validate the model performance. RESULTS AND CONCLUSION To distinguish PGs with tumors from normal PGs using combined T1WI and FS-T2WI, the deep learning model achieved an accuracy, sensitivity and specificity of 98.2% (497/506), 100% (119/119) and 97.7% (378/387), respectively, in the cross-validation set and 98.5% (67/68), 100% (20/20) and 97.9% (47/48), respectively, in the validation set. For patients with PGTs, automatic segmentation of PGTs on T1WI and FS-T2WI achieved mean Dice coefficients of 86.1% and 84.2%, respectively, in the cross-validation set, and of 85.9% and 81.0%, respectively, in the validation set. The proposed deep learning model may assist the detection and segmentation of PGTs and, by acting as a second pair of eyes, ensure that incidentally detected PGTs on MRI are not missed.
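A hedged sketch of the Dice similarity coefficient used above to score the automatic tumor segmentations against reference masks; the two random masks are placeholders for a model output and a manual annotation.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2 * |A intersection B| / (|A| + |B|) for binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

predicted = np.random.rand(128, 128) > 0.5     # e.g., one slice of an nnUNet output
reference = np.random.rand(128, 128) > 0.5     # manual annotation
print(f"Dice: {dice_coefficient(predicted, reference):.3f}")
```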
Affiliation(s)
- Rongli Zhang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Lun M Wong
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Tiffany Y So
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Zongyou Cai
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Qiao Deng
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Yip Man Tsang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China
- Qi Yong H Ai
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
- Ann D King
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China.
17. Jiang T, Chen C, Zhou Y, Cai S, Yan Y, Sui L, Lai M, Song M, Zhu X, Pan Q, Wang H, Chen X, Wang K, Xiong J, Chen L, Xu D. Deep learning-assisted diagnosis of benign and malignant parotid tumors based on ultrasound: a retrospective study. BMC Cancer 2024; 24:510. [PMID: 38654281] [PMCID: PMC11036551] [DOI: 10.1186/s12885-024-12277-8]
Abstract
BACKGROUND To develop a deep learning (DL) model utilizing ultrasound images and to evaluate its efficacy in distinguishing between benign and malignant parotid tumors (PTs), as well as its practicality in assisting clinicians with accurate diagnosis. METHODS A total of 2211 ultrasound images of 980 pathologically confirmed PTs (training set: n = 721; validation set: n = 82; internal-test set: n = 89; external-test set: n = 88) from 907 patients were retrospectively included in this study. Five DL networks of varying depths were constructed; the optimal model was selected, and its diagnostic performance was evaluated using the area under the curve (AUC) of the receiver-operating characteristic (ROC). Furthermore, radiologists of different seniority were compared when assisted by the optimal diagnostic model. Additionally, the diagnostic confusion matrix of the optimal model was calculated, and the characteristics of misclassified cases were analyzed and summarized. RESULTS ResNet18 demonstrated superior diagnostic performance, with an AUC value of 0.947, accuracy of 88.5%, sensitivity of 78.2%, and specificity of 92.7% in the internal-test set, and an AUC value of 0.925, accuracy of 89.8%, sensitivity of 83.3%, and specificity of 90.6% in the external-test set. The PTs were subjectively assessed twice by six radiologists, both with and without the assistance of the model. With the assistance of the model, both junior and senior radiologists demonstrated enhanced diagnostic performance: in the internal-test set, AUC values increased by 0.062 and 0.082 for the junior radiologists, respectively, while the senior radiologists showed improvements of 0.066 and 0.106 in their respective AUC values. CONCLUSIONS The DL model based on ultrasound images demonstrates exceptional capability in distinguishing between benign and malignant PTs, can thereby assist radiologists of varying expertise levels in achieving heightened diagnostic performance, and can serve as a noninvasive imaging adjunct diagnostic method for clinical purposes.
Affiliation(s)
- Tian Jiang
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Postgraduate training base Alliance of Wenzhou Medical University (Zhejiang Cancer Hospital), 310022, Hangzhou, Zhejiang, China
- Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, 310022, Hangzhou, Zhejiang, China
| | - Chen Chen
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Yahan Zhou
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Shenzhou Cai
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Yuqi Yan
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Postgraduate training base Alliance of Wenzhou Medical University (Zhejiang Cancer Hospital), 310022, Hangzhou, Zhejiang, China
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Lin Sui
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Postgraduate training base Alliance of Wenzhou Medical University (Zhejiang Cancer Hospital), 310022, Hangzhou, Zhejiang, China
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Min Lai
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, 310022, Hangzhou, Zhejiang, China
- Second Clinical College, Zhejiang University of Traditional Chinese Medicine, 310022, Hangzhou, Zhejiang, China
| | - Mei Song
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, 310022, Hangzhou, Zhejiang, China
| | - Xi Zhu
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Qianmeng Pan
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Hui Wang
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Xiayi Chen
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China
| | - Kai Wang
- Dongyang Hospital Affiliated to Wenzhou Medical University, 322100, Jinhua, Zhejiang, China
| | - Jing Xiong
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, 518000, Shenzhen, Guangdong, China
| | - Liyu Chen
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China.
- Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, 310022, Hangzhou, Zhejiang, China.
| | - Dong Xu
- Department of Diagnostic Ultrasound Imaging & Interventional Therapy, Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, 310022, Hangzhou, Zhejiang, China.
- Postgraduate training base Alliance of Wenzhou Medical University (Zhejiang Cancer Hospital), 310022, Hangzhou, Zhejiang, China.
- Zhejiang Provincial Research Center for Cancer Intelligent Diagnosis and Molecular Technology, 310022, Hangzhou, Zhejiang, China.
- Wenling Big Data and Artificial Intelligence Institute in Medicine, 317502, TaiZhou, Zhejiang, China.
- Taizhou Key Laboratory of Minimally Invasive Interventional Therapy & Artificial Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer Hospital), 317502, Taizhou, Zhejiang, China.
18
|
Liu S, Yu B, Zheng X, Guo H, Shi L. Construction and Application of a Nomogram for Predicting Benign and Malignant Parotid Tumors. J Comput Assist Tomogr 2024; 48:143-149. [PMID: 37551140 DOI: 10.1097/rct.0000000000001522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/09/2023]
Abstract
OBJECTIVE To establish a prediction model of benign versus malignant parotid gland tumors based on magnetic resonance imaging features, providing an important basis for the preoperative diagnosis and treatment of patients with parotid gland tumors. METHODS The data from 138 patients (modeling group) who were diagnosed based on pathologic evaluation in the Department of Stomatology of Jilin University from June 2019 to August 2021 were retrospectively analyzed. The independent factors influencing benign versus malignant differentiation of parotid tumors were selected by logistic regression analysis, and a mathematical prediction model for benign and malignant tumors was established. The data from 35 patients (validation group) who were diagnosed based on pathologic evaluation from September 2021 to February 2022 were collected for verification. RESULTS Univariate and multivariate logistic regression analyses showed that tumor morphology, tumor boundary, tumor signal, and tumor apparent diffusion coefficient (ADC) were independent risk factors for predicting benign and malignant parotid gland tumors (P < 0.05). Based on the multivariate logistic regression analysis of the modeling group, the prediction model was established as Y = e^X/(1 + e^X), where X = 0.385 + (1.416 × tumor morphology) + (1.473 × tumor border) + (1.306 × tumor signal) + (2.312 × tumor ADC value). The area under the receiver operating characteristic curve of the model was 0.832 (95% confidence interval, 0.75-0.91), the sensitivity was 82.6%, and the specificity was 70.65%. The validity of the model was verified using the validation group data, for which the sensitivity was 85.71%, the specificity was 96.4%, and the overall accuracy was 94.3%; the corresponding area under the receiver operating characteristic curve was 0.936 (95% confidence interval, 0.83-0.98). CONCLUSIONS Combining tumor morphology, tumor ADC, tumor boundary, and tumor signal, the established prediction model provides an important reference for the preoperative diagnosis of benign and malignant parotid gland tumors.
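As a concrete illustration of the reported nomogram, the short sketch below evaluates the published logistic formula with the coefficients given in the abstract; the 0/1 codings of morphology, border, signal, and ADC used in the example call are hypothetical placeholders, since the authors' exact coding scheme is not stated here.

```python
# Worked example of the published logistic model (coefficients from the abstract).
import math

def parotid_malignancy_probability(morphology, border, signal, adc):
    """Inputs are the authors' categorical codings (assumed 0/1 here for illustration)."""
    x = 0.385 + 1.416 * morphology + 1.473 * border + 1.306 * signal + 2.312 * adc
    return math.exp(x) / (1 + math.exp(x))

# Hypothetical case with all predictors coded 1.
print(round(parotid_malignancy_probability(1, 1, 1, 1), 3))
```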
Affiliation(s)
- Shuo Liu
- From the Department of Radiology, Jilin University Third Hospital
| | - Baoting Yu
- From the Department of Radiology, Jilin University Third Hospital
| | - Xuewei Zheng
- From the Department of Radiology, Jilin University Third Hospital
| | - Hao Guo
- Department of Radiology, Changchun People's Hospital
| | - Lingxue Shi
- Department of Radiology, Jilin Provincial People's Hospital, Changchun City, China
19
|
Sunnetci KM, Kaba E, Celiker FB, Alkan A. Deep Network-Based Comprehensive Parotid Gland Tumor Detection. Acad Radiol 2024; 31:157-167. [PMID: 37271636 DOI: 10.1016/j.acra.2023.04.028] [Citation(s) in RCA: 27] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 04/19/2023] [Accepted: 04/21/2023] [Indexed: 06/06/2023]
Abstract
RATIONALE AND OBJECTIVES Salivary gland tumors constitute 2%-6% of all head and neck tumors and are most common in the parotid gland. Magnetic resonance (MR) imaging is the most sensitive imaging modality for diagnosis. Tumor type, localization, and relationship with surrounding structures are important factors for treatment; therefore, parotid gland tumor segmentation is important. Specialists widely use manual segmentation in diagnosis and treatment. However, given the development of artificial intelligence-based models, automatic segmentation models can now be used instead of manual segmentation, which is a time-consuming technique. In this paper, we therefore segmented parotid gland tumors (PGTs) using deep learning-based architectures. MATERIALS AND METHODS The dataset used in the study includes 102 T1-w, 102 contrast-enhanced T1-w (T1C-w), and 102 T2-w MR images. After cropping the raw images and the corresponding expert manual segmentations, we obtained the masks of these images. After standardizing the image sizes, we split these images into approximately 80% training set and 20% test set. We then trained six models for these images using ResNet18- and Xception-based DeepLab v3+, and we prepared a user-friendly graphical user interface application that includes each of these models. RESULTS The accuracy and weighted Intersection over Union of the ResNet18-based DeepLab v3+ architecture trained for T1C-w, the most successful model in the study, were 0.96153 and 0.92601, respectively. Compared with the literature, the proposed system is competitive in terms of both using MR images and training the models independently for T1-w, T1C-w, and T2-w. Because PGTs are usually segmented manually in the literature, we expect this study to contribute significantly to the field. CONCLUSION In this study, we prepared and presented a software application that can be easily used for automatic PGT segmentation. In addition to the anticipated reduction in costs and workload, we developed models with meaningful performance metrics relative to the literature.
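The sketch below illustrates the general training setup for this kind of binary tumor segmentation, assuming a PyTorch environment; torchvision provides DeepLabV3 with a ResNet-50 backbone, which stands in here for the DeepLab v3+ variants with ResNet18 and Xception encoders used in the study, so this is an approximation rather than the authors' pipeline.

```python
# Sketch of a DeepLab-style binary segmentation setup (ResNet-50 backbone as a stand-in).
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

# weights_backbone=None avoids any download; classes: 0 = background, 1 = tumor.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(images, masks):
    """images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor of {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (N, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def pixel_accuracy(images, masks):
    model.eval()
    pred = model(images)["out"].argmax(dim=1)
    return (pred == masks).float().mean().item()

# Toy check with random tensors.
print(training_step(torch.randn(1, 3, 128, 128), torch.zeros(1, 128, 128, dtype=torch.long)))
```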
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Osmaniye Korkut Ata University, Department of Electrical and Electronics Engineering, Osmaniye 80000, Turkey (K.M.S.); Kahramanmaraş Sütçü İmam University, Department of Electrical and Electronics Engineering, Kahramanmaraş 46050, Turkey (K.M.S., A.A.).
| | - Esat Kaba
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (E.K., F.B.C.)
| | - Fatma Beyazal Celiker
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (E.K., F.B.C.)
| | - Ahmet Alkan
- Kahramanmaraş Sütçü İmam University, Department of Electrical and Electronics Engineering, Kahramanmaraş 46050, Turkey (K.M.S., A.A.)
20
|
Żurek M, Fus Ł, Niemczyk K, Rzepakowska A. Salivary gland pathologies: evolution in classification and association with unique genetic alterations. Eur Arch Otorhinolaryngol 2023; 280:4739-4750. [PMID: 37439929 PMCID: PMC10562281 DOI: 10.1007/s00405-023-08110-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Accepted: 07/03/2023] [Indexed: 07/14/2023]
Abstract
PURPOSE The correct classification of salivary gland pathologies is crucial for choosing a treatment method and determining the prognosis. Better outcomes are now achievable thanks to the introduction of new therapy approaches, such as targeted therapies for malignant salivary gland tumors. To apply these in clinical routine, a clear classification of the lesions is required. METHODS This review examines all changes from the first World Health Organization (WHO) classification of salivary gland pathologies in 1972 to the fifth edition from 2022. Possible developments in the diagnosis and classification of salivary gland pathology are also presented. RESULTS The current WHO classification is the fifth edition. With the development of new diagnostic methods based on genetic alterations, it provides insight into the molecular basis of lesions. This has resulted in the evolution of the classification, the introduction of new entities, and the reclassification of existing ones. CONCLUSIONS Genetic alterations will become increasingly significant in the identification of salivary gland pathologies in the future. These alterations will be helpful as prognostic and predictive biomarkers and may also serve as targets for anti-cancer therapies.
Affiliation(s)
- Michał Żurek
- Department of Otorhinolaryngology Head and Neck Surgery, Medical University of Warsaw, 1a Banacha Str, 02-097, Warsaw, Poland.
- Doctoral School, Medical University of Warsaw, 61 Żwirki I Wigury Str, 02-091, Warsaw, Poland.
| | - Łukasz Fus
- Department of Pathology, Medical University of Warsaw, 7 Pawińskiego Str, 02-004, Warsaw, Poland
| | - Kazimierz Niemczyk
- Department of Otorhinolaryngology Head and Neck Surgery, Medical University of Warsaw, 1a Banacha Str, 02-097, Warsaw, Poland
| | - Anna Rzepakowska
- Department of Otorhinolaryngology Head and Neck Surgery, Medical University of Warsaw, 1a Banacha Str, 02-097, Warsaw, Poland
21
|
Jiao T, Li F, Cui Y, Wang X, Li B, Shi F, Xia Y, Zhou Q, Zeng Q. Deep Learning With an Attention Mechanism for Differentiating the Origin of Brain Metastasis Using MR images. J Magn Reson Imaging 2023; 58:1624-1635. [PMID: 36965182 DOI: 10.1002/jmri.28695] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 03/10/2023] [Accepted: 03/10/2023] [Indexed: 03/27/2023] Open
Abstract
BACKGROUND Brain metastasis (BM) is a serious neurological complication of cancers of different origins. The value of deep learning (DL) for identifying multiple types of primary origin remains unclear. PURPOSE To distinguish the primary site of BM and identify the best DL models. STUDY TYPE Retrospective. POPULATION A total of 449 BMs from 214 patients (49.5% female; mean age, 58 years), comprising 100 from small cell lung cancer (SCLC), 125 from non-small cell lung cancer (NSCLC), 116 from breast cancer (BC), and 108 from gastrointestinal cancer (GIC), were included. FIELD STRENGTH/SEQUENCE 3-T; T1 turbo spin echo (T1-TSE), T2-TSE, T2FLAIR-TSE, DWI echo-planar imaging (DWI-EPI), and contrast-enhanced T1-TSE (CE T1-TSE). ASSESSMENT Lesions were divided into training (n = 285, 153 patients), testing (n = 122, 93 patients), and independent testing cohorts (n = 42, 34 patients). Three-dimensional residual networks (3D-ResNet), named 3D ResNet6 and 3D ResNet18, were proposed for identifying the four origins based on single MRI sequences and combined MRI (T1WI + T2-FLAIR + DWI, CE-T1WI + DWI, CE-T1WI + T2WI + DWI). The DL model was used to distinguish lung cancer from non-lung cancer; then SCLC vs. NSCLC classification for lung cancer and BC vs. GIC classification for non-lung cancer were performed. A subjective visual analysis was implemented and compared with the DL models. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the models by heatmaps. STATISTICAL TESTS The area under the receiver operating characteristic curve (AUC) was used to assess each classification performance. RESULTS 3D ResNet18 with Grad-CAM and AIC showed better performance than 3D ResNet6, 3D ResNet18, and the radiologist for distinguishing lung cancer from non-lung cancer, SCLC from NSCLC, and BC from GIC. For single MRI sequences, T1WI, DWI, and CE-T1WI performed best for lung cancer vs. non-lung cancer, SCLC vs. NSCLC, and BC vs. GIC classifications, respectively. The AUC ranged from 0.675 to 0.876 and from 0.684 to 0.800 in the testing and independent testing datasets, respectively. For combined MRI sequences, the combination of CE-T1WI + T2WI + DWI performed better for BC vs. GIC (AUCs of 0.788 and 0.848 on the testing and independent testing datasets, respectively), while the other combined MRI approaches (T1WI + T2-FLAIR + DWI, CE-T1WI + DWI) could not achieve higher AUCs for lung cancer vs. non-lung cancer or SCLC vs. NSCLC. Grad-CAM aided model visualization through heatmaps focused on tumor regions. DATA CONCLUSION DL models may help to distinguish the origins of BM based on MRI data. EVIDENCE LEVEL 3 TECHNICAL EFFICACY: Stage 2.
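The Grad-CAM visualization mentioned above can be reproduced in a few lines of PyTorch; the sketch below uses a 2D ResNet18 as a stand-in for the paper's 3D ResNet variants and is an illustrative, assumption-laden example rather than the authors' code.

```python
# Minimal Grad-CAM sketch (2D ResNet18 stands in for the paper's 3D ResNet variants).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None, num_classes=4)  # 4 origins: SCLC, NSCLC, BC, GIC
model.eval()

feats, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(value=o.detach()))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(value=go[0].detach()))

def grad_cam(image, target_class):
    """image: (1, 3, H, W) tensor; returns a heatmap in [0, 1] of shape (H, W)."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = grads["value"].mean(dim=(2, 3), keepdim=True)        # GAP of the gradients
    cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8))[0, 0]

heatmap = grad_cam(torch.randn(1, 3, 224, 224), target_class=0)
print(heatmap.shape)  # torch.Size([224, 224])
```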
Affiliation(s)
- Tianyu Jiao
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
- Shandong First Medical University, Jinan, China
| | - Fuyan Li
- Department of Radiology, Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, China
| | - Yi Cui
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
| | - Xiao Wang
- Department of Radiology, Jining No. 1 People's Hospital, Jining, China
| | - Butuo Li
- Department of Radiation Oncology, Shandong Cancer Hospital & Institute, Jinan, China
| | - Feng Shi
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Yuwei Xia
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Qing Zhou
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Qingshi Zeng
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, China
22
|
Shen XM, Mao L, Yang ZY, Chai ZK, Sun TG, Xu Y, Sun ZJ. Deep learning-assisted diagnosis of parotid gland tumors by using contrast-enhanced CT imaging. Oral Dis 2023; 29:3325-3336. [PMID: 36520552 DOI: 10.1111/odi.14474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 11/23/2022] [Accepted: 12/11/2022] [Indexed: 12/23/2022]
Abstract
OBJECTIVES Imaging interpretation of the benignancy or malignancy of parotid gland tumors (PGTs) is a critical consideration prior to surgery in view of the therapeutic and prognostic values of such discrimination. This study investigates the application of a deep learning-based method for preoperative stratification of PGTs. MATERIALS AND METHODS Using the 3D DenseNet-121 architecture and a dataset consisting of 117 volumetric arterial-phase contrast-enhanced CT scans, we developed and tested a binary classifier for PGT distinction. We compared the discriminative performance of the model on the test set to that of 12 junior and 12 senior head and neck clinicians. In addition, the potential clinical utility of the model was evaluated by measuring changes between the unassisted and model-assisted performance of the junior clinicians. RESULTS The model reached a sensitivity, specificity, PPV, NPV, and F1-score of 0.955 (95% CI 0.751-0.998), 0.667 (95% CI 0.241-0.940), 0.913 (95% CI 0.705-0.985), 0.800 (95% CI 0.299-0.989), and 0.933, respectively, comparable to those of practicing clinicians. Furthermore, the junior clinicians' specificity, PPV, NPV, and F1-score in differentiating benign from malignant PGTs increased to a statistically significant degree when their unassisted and model-assisted performance were compared. CONCLUSION Our results provide evidence that a deep learning-based method may offer assistance for the binary distinction of PGTs.
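A minimal sketch of this kind of volumetric binary classifier is shown below, assuming the MONAI library for a 3D DenseNet-121 and scikit-learn for the confusion-matrix metrics reported in the abstract; it is not the authors' implementation.

```python
# Sketch of a 3D DenseNet-121 classifier (via MONAI, assumed installed) plus the
# confusion-matrix metrics reported above; not the authors' implementation.
import torch
from monai.networks.nets import DenseNet121
from sklearn.metrics import confusion_matrix

model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2)  # benign vs. malignant

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV, and F1 from binary labels (1 = malignant)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    return sens, spec, ppv, npv, f1

# Forward pass on a dummy arterial-phase CT volume (N x C x D x H x W).
logits = model(torch.randn(1, 1, 32, 96, 96))
print(logits.shape)  # torch.Size([1, 2])
print(binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```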
Affiliation(s)
- Xue-Meng Shen
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
| | - Liang Mao
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Oral Maxillofacial-Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
| | - Zhi-Yi Yang
- School of Computer Science, Wuhan University, Wuhan, China
| | - Zi-Kang Chai
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
| | - Ting-Guan Sun
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
| | - Yongchao Xu
- School of Computer Science, Wuhan University, Wuhan, China
| | - Zhi-Jun Sun
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Oral Maxillofacial-Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan, China
23
|
HaLiMaiMaiTi N, Hong Y, Li M, Li H, Wang Y, Chen C, Lv X, Chen C. Classification of benign and malignant parotid tumors based on CT images combined with stack generalization model. Med Biol Eng Comput 2023; 61:3123-3135. [PMID: 37656333 DOI: 10.1007/s11517-023-02898-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 07/09/2023] [Indexed: 09/02/2023]
Abstract
Parotid tumors are among the most prevalent tumors in otolaryngology, and malignant parotid tumors are one of the main causes of facial paralysis in patients. Currently, the main diagnostic modality for parotid tumors is computed tomography, which relies mainly on the subjective judgment of clinicians and leads to practical problems such as high workloads. Therefore, to assist physicians with the preoperative classification problem, a stacked generalization model is proposed for the automated classification of parotid tumor images. A pretrained ResNet50 model is used for feature extraction. The first layer of the adopted stacked generalization model consists of multiple weak learners, and the results of the weak learners are integrated as input data for a meta-classifier in the second layer; the output of the meta-classifier is the final classification result. The classification accuracy of the stacked generalization model reaches 91%. Comparison of the classification results under different classifiers shows that the stacked generalization model used in this study can identify benign and malignant parotid tumors effectively, thereby relieving physicians of tedious workloads.
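The stacking idea described above can be sketched with scikit-learn as follows; the particular weak learners, the random feature matrix, and the 2048-dimensional ResNet50 feature size are assumptions for illustration, not the study's exact configuration.

```python
# Sketch of stacked generalization: weak learners on ResNet50 features plus a meta-classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# X stands for deep features extracted from a pretrained ResNet50 (2048-D per CT image);
# y: 0 = benign, 1 = malignant. Random data is used here only to make the sketch runnable.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 2048)), rng.integers(0, 2, size=200)

stack = StackingClassifier(
    estimators=[                                  # first layer: weak learners
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # second layer: meta-classifier
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```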
Affiliation(s)
| | - Yue Hong
- People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, 830001, China
| | - Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
| | - Hongtao Li
- The Affiliated Cancer Hospital of Xinjiang Medical University, Urumqi, 830011, China
| | - Yunling Wang
- The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000, China
| | - Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
| | - Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, China.
- Key Laboratory of Software Engineering Technology, Xinjiang University, Urumqi, 830046, China.
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China.
| | - Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, China.
24
|
Radke KL, Kamp B, Adriaenssens V, Stabinska J, Gallinnis P, Wittsack HJ, Antoch G, Müller-Lutz A. Deep Learning-Based Denoising of CEST MR Data: A Feasibility Study on Applying Synthetic Phantoms in Medical Imaging. Diagnostics (Basel) 2023; 13:3326. [PMID: 37958222 PMCID: PMC10650582 DOI: 10.3390/diagnostics13213326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 10/18/2023] [Accepted: 10/25/2023] [Indexed: 11/15/2023] Open
Abstract
Chemical Exchange Saturation Transfer (CEST) magnetic resonance imaging (MRI) provides a novel method for analyzing biomolecule concentrations in tissues without exogenous contrast agents. Despite its potential, achieving a high signal-to-noise ratio (SNR) is imperative for detecting small CEST effects. Traditional metrics such as Magnetization Transfer Ratio Asymmetry (MTRasym) and Lorentzian analyses are vulnerable to image noise, hampering their precision in quantitative concentration estimation. Recent noise-reduction algorithms such as principal component analysis (PCA), nonlocal mean filtering (NLM), and block matching combined with 3D filtering (BM3D) have shown promise, and there is burgeoning interest in the use of neural networks (NNs), particularly autoencoders, for image denoising. This study uses the Bloch-McConnell equations, which allow the synthetic generation of CEST images, and explores the efficacy of NNs in denoising these images. Using synthetically generated phantoms, autoencoders were created, and their performance was compared with that of traditional denoising methods on various datasets. The results underscored the superior performance of NNs, notably the ResUNet architectures, in noise identification and removal compared with analytical approaches across a wide range of noise levels. This superiority was particularly pronounced at elevated noise intensities in the in vitro data. Notably, the neural architectures significantly improved PSNR values, achieving up to 35.0, while some traditional methods struggled, especially in low-noise-reduction scenarios. However, application to the in vivo data presented challenges due to varying noise profiles. This study accentuates the potential of NNs as robust denoising tools, but their translation to clinical settings warrants further investigation.
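The sketch below shows, purely for illustration, a small convolutional denoising autoencoder together with the PSNR metric cited above; the architecture and tensor sizes are assumptions, not the networks evaluated in the study.

```python
# Illustrative denoising autoencoder for CEST-like images, with the PSNR metric cited above.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def psnr(clean, denoised, max_val=1.0):
    mse = torch.mean((clean - denoised) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

model = DenoisingAutoencoder()
clean = torch.rand(4, 1, 64, 64)                  # stand-in for synthetic noise-free phantoms
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
loss = nn.MSELoss()(model(noisy), clean)          # train to reconstruct the clean image
print(loss.item(), psnr(clean, model(noisy).detach()).item())
```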
Affiliation(s)
- Karl Ludger Radke
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Benedikt Kamp
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Vibhu Adriaenssens
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Julia Stabinska
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Division of MR Research, The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Patrik Gallinnis
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Hans-Jörg Wittsack
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Gerald Antoch
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
| | - Anja Müller-Lutz
- Department of Diagnostic and Interventional Radiology, Medical Faculty, University Dusseldorf, 40225 Dusseldorf, Germany (G.A.); (A.M.-L.)
25
|
Fujima N, Kamagata K, Ueda D, Fujita S, Fushimi Y, Yanagawa M, Ito R, Tsuboyama T, Kawamura M, Nakaura T, Yamada A, Nozaki T, Fujioka T, Matsui Y, Hirata K, Tatsugami F, Naganawa S. Current State of Artificial Intelligence in Clinical Applications for Head and Neck MR Imaging. Magn Reson Med Sci 2023; 22:401-414. [PMID: 37532584 PMCID: PMC10552661 DOI: 10.2463/mrms.rev.2023-0047] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 07/09/2023] [Indexed: 08/04/2023] Open
Abstract
Owing primarily to its excellent soft tissue contrast, head and neck MRI is widely applied in clinical practice to assess various diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing aspects such as image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients presenting with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
| | - Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
| | - Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Osaka, Japan
| | - Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
| | - Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Kyoto, Japan
| | - Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
| | - Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
| | - Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
| | - Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
| | - Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Kumamoto, Japan
| | - Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
| | - Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Tokyo, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
| | - Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Okayama, Japan
| | - Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
| | - Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Hiroshima, Hiroshima, Japan
| | - Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
26
|
Terzi DS, Azginoglu N. In-Domain Transfer Learning Strategy for Tumor Detection on Brain MRI. Diagnostics (Basel) 2023; 13:2110. [PMID: 37371005 DOI: 10.3390/diagnostics13122110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 06/16/2023] [Accepted: 06/16/2023] [Indexed: 06/29/2023] Open
Abstract
Transfer learning has gained importance in areas where labeled data are scarce. However, it remains controversial to what extent natural image datasets, used as pre-training sources, contribute to success in other fields such as medical imaging. In this study, the effect of transfer learning for medical object detection was quantitatively compared using natural and medical image datasets. Transfer learning strategies based on five different weight initialization methods were examined. The natural image dataset MS COCO and the brain tumor dataset BraTS 2020 were used as transfer learning sources, and Gazi Brains 2020 was used as the target. Mask R-CNN was adopted as the deep learning architecture for its capability to handle both object detection and segmentation tasks effectively. The experimental results show that transfer learning from the medical image dataset was 10% more successful and showed 24% better convergence performance than the MS COCO pre-trained model, although it contains less data. While the effect of data augmentation on the natural-image pre-trained model was 5%, its effect on the same-domain pre-trained model was 2%. According to the most widely used object detection metric, transfer learning strategies using MS COCO weights and random weights showed the same object detection performance as data augmentation. The performance of the most effective strategies identified with the Mask R-CNN model was also tested with YOLOv8. The results show that even when the amount of data is smaller than that of a natural dataset, in-domain transfer learning is more efficient than cross-domain transfer learning. Moreover, this study demonstrates the first use of the Gazi Brains 2020 dataset, which was generated to address the lack of labeled, high-quality brain MRI data in the medical field, for in-domain transfer learning. Thus, knowledge was transferred from a deep neural network trained with brain tumor data and tested on a different brain tumor dataset.
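The compared weight-initialization strategies can be sketched with torchvision's Mask R-CNN as follows; the checkpoint filename is hypothetical and the head-replacement step follows the common torchvision fine-tuning pattern rather than the authors' exact code.

```python
# Sketch of weight-initialization strategies for Mask R-CNN (random vs. cross-domain vs. in-domain).
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(strategy, num_classes=2, checkpoint="brats_pretrained.pth"):  # path is hypothetical
    if strategy == "random":
        return maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=num_classes)
    if strategy == "coco":
        # Cross-domain: start from MS COCO weights, then swap in task-specific heads.
        model = maskrcnn_resnet50_fpn(weights="DEFAULT")
        in_box = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_box, num_classes)
        in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
        model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
        return model
    if strategy == "in_domain":
        # In-domain: start from weights previously trained on another brain tumor dataset.
        model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=num_classes)
        model.load_state_dict(torch.load(checkpoint, map_location="cpu"), strict=False)
        return model
    raise ValueError(strategy)

model = build_model("random")  # "coco" and "in_domain" require downloads or a local checkpoint
```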
Affiliation(s)
- Duygu Sinanc Terzi
- Department of Computer Engineering, Amasya University, Amasya 05100, Turkey
| | - Nuh Azginoglu
- Department of Computer Engineering, Kayseri University, Kayseri 38280, Turkey
27
|
Adeoye J, Hui L, Su YX. Data-centric artificial intelligence in oncology: a systematic review assessing data quality in machine learning models for head and neck cancer. JOURNAL OF BIG DATA 2023; 10:28. [DOI: 10.1186/s40537-023-00703-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 02/23/2023] [Indexed: 01/03/2025]
Abstract
Machine learning models have been increasingly considered to model head and neck cancer outcomes for improved screening, diagnosis, treatment, and prognostication of the disease. As the concept of data-centric artificial intelligence is still incipient in healthcare systems, little is known about the data quality of the models proposed for clinical utility. This is important as it supports the generalizability of the models and data standardization. Therefore, this study overviews the quality of structured and unstructured data used for machine learning model construction in head and neck cancer. Relevant studies reporting on the use of machine learning models based on structured and unstructured custom datasets between January 2016 and June 2022 were sourced from PubMed, EMBASE, Scopus, and Web of Science electronic databases. Prediction model Risk of Bias Assessment (PROBAST) tool was used to assess the quality of individual studies before comprehensive data quality parameters were assessed according to the type of dataset used for model construction. A total of 159 studies were included in the review; 106 utilized structured datasets while 53 utilized unstructured datasets. Data quality assessments were deliberately performed for 14.2% of structured datasets and 11.3% of unstructured datasets before model construction. Class imbalance and data fairness were the most common limitations in data quality for both types of datasets while outlier detection and lack of representative outcome classes were common in structured and unstructured datasets respectively. Furthermore, this review found that class imbalance reduced the discriminatory performance for models based on structured datasets while higher image resolution and good class overlap resulted in better model performance using unstructured datasets during internal validation. Overall, data quality was infrequently assessed before the construction of ML models in head and neck cancer irrespective of the use of structured or unstructured datasets. To improve model generalizability, the assessments discussed in this study should be introduced during model construction to achieve data-centric intelligent systems for head and neck cancer management.
28
|
[Indications for fine-needle aspiration and core needle biopsy for diagnosis of salivary gland tumors]. HNO 2023; 71:154-163. [PMID: 35376970 DOI: 10.1007/s00106-022-01160-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/13/2022] [Indexed: 11/04/2022]
Abstract
BACKGROUND Salivary gland malignancies are rare neoplasms of the head and neck area. Preoperative clinical and imaging assessment of salivary gland masses is challenging; however, preoperative identification of malignancy is crucial for further treatment and for the course of the disease. OBJECTIVE This article presents the advantages and disadvantages of fine-needle aspiration cytology (FNAC) and core needle biopsy (CNB). Additionally, the sensitivity and specificity of both methods for predicting malignancy were analyzed, and the question of which procedure is suitable for the diagnostic work-up of salivary gland tumors is discussed. MATERIALS AND METHODS This article summarizes important recent studies in the field of the diagnostic work-up of salivary gland lesions, discussing original articles, meta-analyses, and systematic reviews concerning FNAC and CNB. RESULTS The sensitivity and specificity of FNAC for predicting malignancy are reported as 70.0-80.0% and 87.5-97.9%, respectively. The pooled sensitivity and specificity for CNB were 92.0-98.0% and 95.0-100.0%, respectively. Tumor cell seeding and facial nerve palsy are very rare complications of both procedures. CONCLUSION If malignancy is suspected based on clinical examination or imaging, FNAC or CNB should be performed. FNAC is easy to perform; however, an on-site cytologist is necessary. CNB has a higher sensitivity for routine diagnosis of malignancy, and tumor typing and grading are facilitated by preservation of the histological architecture. In conclusion, CNB is the procedure of choice in the diagnostic work-up of suspected malignant salivary gland tumors.
29
|
Liu X, Pan Y, Zhang X, Sha Y, Wang S, Li H, Liu J. A Deep Learning Model for Classification of Parotid Neoplasms Based on Multimodal Magnetic Resonance Image Sequences. Laryngoscope 2023; 133:327-335. [PMID: 35575610 PMCID: PMC10083903 DOI: 10.1002/lary.30154] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 03/17/2022] [Accepted: 04/12/2022] [Indexed: 01/19/2023]
Abstract
OBJECTIVE To design a deep learning model based on multimodal magnetic resonance image (MRI) sequences for automatic parotid neoplasm classification, and to improve diagnostic decision-making in clinical settings. METHODS First, multimodal MRI sequences were collected from 266 patients with parotid neoplasms, and an artificial intelligence (AI)-based deep learning model was designed from scratch, combining the ResNet image classification network with the Transformer network from natural language processing. Second, the effectiveness of the deep learning model was improved through multi-modality fusion of MRI sequences, and the fusion strategy for the various MRI sequences was optimized. In addition, we compared the effectiveness of the model in parotid neoplasm classification with that of experienced radiologists. RESULTS The deep learning model delivered reliable outcomes in differentiating benign and malignant parotid neoplasms. The model trained on the fusion of T2-weighted, postcontrast T1-weighted, and diffusion-weighted imaging (b = 1000 s/mm2) produced the best result, with an accuracy of 0.85, an area under the receiver operating characteristic (ROC) curve of 0.96, a sensitivity of 0.90, and a specificity of 0.84. In addition, the multi-modal paradigm exhibited reliable outcomes in diagnosing pleomorphic adenoma and Warthin tumor, but not in identifying basal cell adenoma. CONCLUSION An accurate and efficient AI-based classification model for parotid neoplasms was produced from the fusion of multimodal MRI sequences. Its effectiveness outperformed models using single MRI images or single MRI sequences as input and, potentially, experienced radiologists. LEVEL OF EVIDENCE 3 Laryngoscope, 133:327-335, 2023.
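A rough sketch of such a multimodal fusion design, with one ResNet encoder per MRI sequence and a Transformer over the resulting sequence tokens, is given below; the layer sizes, number of sequences, and pooling choice are assumptions for illustration and not the authors' architecture.

```python
# Illustrative multimodal fusion: per-sequence ResNet18 encoders plus a Transformer fusion stage.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalParotidClassifier(nn.Module):
    def __init__(self, n_sequences=3, n_classes=2, d_model=512):
        super().__init__()
        def encoder():
            m = models.resnet18(weights=None)
            m.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel MRI
            m.fc = nn.Identity()                   # yields a 512-D feature per image
            return m
        self.encoders = nn.ModuleList([encoder() for _ in range(n_sequences)])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, sequences):
        # sequences: list of (N, 1, H, W) tensors, e.g. [T2, CE-T1, DWI b=1000]
        tokens = torch.stack([enc(x) for enc, x in zip(self.encoders, sequences)], dim=1)
        fused = self.fusion(tokens).mean(dim=1)    # (N, d_model)
        return self.head(fused)

model = MultimodalParotidClassifier()
out = model([torch.randn(2, 1, 224, 224) for _ in range(3)])
print(out.shape)  # torch.Size([2, 2])
```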
Affiliation(s)
- Xu Liu
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
| | - Yucheng Pan
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
| | - Xin Zhang
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
| | - Yongfang Sha
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
| | - Shihui Wang
- Lab of Sensing and Computing, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
| | - Hongzhe Li
- Research Service, VA Loma Linda Healthcare System, Loma Linda, California, U.S.A.
- Department of Otolaryngology-Head and Neck Surgery, Loma Linda University School of Medicine, Loma Linda, California, U.S.A.
| | - Jianping Liu
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, China
- ENT Institute and Department of Otorhinolaryngology, Eye & ENT Hospital, NHC Key Laboratory of Hearing Medicine (Fudan University), Shanghai, China
30
|
Martínez-Ruiz-Coello MDM, Hernández-García E, Miranda-Sánchez E, García-García C, Arenas-Brítez Ó, Plaza-Mayor G. [Surgical treatment of parotid gland tumor pathology: a descriptive study of 263 parotidectomies]. REVISTA ORL 2022. [DOI: 10.14201/orl.29831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
INTRODUCTION AND OBJECTIVE: Salivary tumors account for 3-10% of head and neck tumors; 75-80% originate in the parotid gland and most are benign. Parotidectomy is a surgical technique consisting of excision of the parotid gland. There are several types: superficial parotidectomy (PS), partial superficial parotidectomy (PSP), and total parotidectomy (PT). In the latter, because the facial nerve (NF) is not preserved, complications are more frequent. Our objective is to analyze the outcome (recurrence rate and complications) of parotidectomy as the surgical technique used in the management of parotid pathology, as well as to evaluate which complementary test is most effective in the presurgical diagnosis of parotid tumor pathology. MATERIAL AND METHOD: A retrospective study was conducted including 263 patients treated with PS or PT between January 2004 and December 2020 at the Hospital Universitario de Fuenlabrada. Demographic data, duration of the lesion, complementary tests, surgical protocol, and postoperative complications were recorded. The main analysis concerned the correlation between the presurgical tests performed (fine-needle aspiration cytology (PAAF), ultrasound, CT and MRI) and the definitive histopathological diagnosis obtained after examining the surgical specimen. The rates of facial paresis and paralysis and other complications are also described. RESULTS: 263 patients treated with parotidectomy were included. The mean duration of the parotid lesions was 15 months (SD 19.88). The sensitivity of PAAF in our study was 68.7%. Ultrasound was performed in 44.10% of patients, CT in 77.94%, and MRI in 15.20%, showing sensitivities of 18.05%, 31.21%, and 45%, respectively. The most frequent surgery was PS (43.3%, 114/263), followed by PSP (41.1%, 108/263), with PT being the least common (15.58%, 41/263). Benign tumors were more frequent (84.79%, 223/263), with pleomorphic adenoma being the most common at 45.73% (102/223). Within the group of malignant tumors (15.20%, 40/263), the most common were mucoepidermoid carcinoma (17.5%, 7/40) and metastases (17.5%, 7/40). Facial paresis, according to the House-Brackmann scale, was mild (grades I and II) and transient in most cases, occurring in 31.55%. After a mean follow-up period of 6 years, no post-parotidectomy recurrences of any tumor type were found in our study. CONCLUSION: In our sample, benign tumors accounted for the vast majority of parotid pathology. Within this group, pleomorphic adenoma was the most frequent. PAAF was the complementary test with the best correlation with the definitive histopathological diagnosis, followed by MRI. Mild (grades I and II), transient facial paresis was the most common postsurgical complication.
31
|
Liang S, Dong X, Yang K, Chu Z, Tang F, Ye F, Chen B, Guan J, Zhang Y. A multi-perspective information aggregation network for automated T-staging detection of nasopharyngeal carcinoma. Phys Med Biol 2022; 67. [PMID: 36541557 DOI: 10.1088/1361-6560/aca516] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 11/22/2022] [Indexed: 11/23/2022]
Abstract
Accurate T-staging is important when planning personalized radiotherapy. However, T-staging via manual slice-by-slice inspection is time-consuming, tumor sizes and shapes are heterogeneous, and junior physicians find such inspection challenging. With inspiration from oncological diagnostics, we developed a multi-perspective aggregation network that incorporated various diagnosis-oriented knowledge and allowed automated nasopharyngeal carcinoma T-staging detection (TSD Net). Specifically, our TSD Net was designed with a multi-branch architecture, which can capture tumor size and shape information (basic knowledge), strongly correlated contextual features, and associations between the tumor and surrounding tissues. We defined the association between the tumor and surrounding tissues by a signed distance map, which can embed points and tumor contours in higher-dimensional spaces, yielding valuable information regarding the locations of tissue associations. TSD Net finally outputs a T1-T4 stage prediction by aggregating data from the three branches. We evaluated TSD Net using a T1-weighted contrast-enhanced magnetic resonance imaging database of 320 patients in a three-fold cross-validation manner. The results show that the proposed method achieves a mean area under the curve (AUC) as high as 87.95%. We also compared our method to traditional classifiers and a deep learning-based method. Our TSD Net is efficient and accurate and outperforms the other methods.
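The signed distance map used to encode tumor-tissue associations can be computed as in the sketch below, which uses one common sign convention (positive outside the tumor, negative inside); this is an illustrative helper, not the authors' code.

```python
# Signed distance map: positive outside the tumor, negative inside (one common convention).
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """mask: boolean array, True inside the tumor contour."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)   # distance to the tumor for background voxels
    inside = distance_transform_edt(mask)     # distance to the background for tumor voxels
    return outside - inside                   # roughly zero near the contour

mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True
sdm = signed_distance_map(mask)
print(sdm.min(), sdm.max())                   # negative inside, positive outside
```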
Affiliation(s)
- Shujun Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Xiuyu Dong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Kaifan Yang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Zhiqin Chu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Fan Tang
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Feng Ye
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Bei Chen
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Jian Guan
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
| | - Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, 510515, People's Republic of China
32
|
Radiomics for Discriminating Benign and Malignant Salivary Gland Tumors; Which Radiomic Feature Categories and MRI Sequences Should Be Used? Cancers (Basel) 2022; 14:cancers14235804. [PMID: 36497285 PMCID: PMC9740105 DOI: 10.3390/cancers14235804] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 11/12/2022] [Accepted: 11/22/2022] [Indexed: 11/26/2022] Open
Abstract
The lack of a consistent MRI radiomic signature, partly due to the multitude of initial feature analyses, limits the widespread clinical application of radiomics for the discrimination of salivary gland tumors (SGTs). This study aimed to identify the optimal radiomics feature category and MRI sequence for characterizing SGTs, which could serve as a step towards obtaining a consensus on a radiomics signature. Preliminary radiomics models were built to discriminate malignant SGTs (n = 34) from benign SGTs (n = 57) on T1-weighted (T1WI), fat-suppressed (FS)-T2WI and contrast-enhanced (CE)-T1WI images using six feature categories. The discrimination performances of these preliminary models were evaluated using 5-fold-cross-validation with 100 repetitions and the area under the receiver operating characteristic curve (AUC). The differences between models’ performances were identified using one-way ANOVA. Results show that the best feature categories were logarithm for T1WI and CE-T1WI and exponential for FS-T2WI, with AUCs of 0.828, 0.754 and 0.819, respectively. These AUCs were higher than the AUCs obtained using all feature categories combined, which were 0.750, 0.707 and 0.774, respectively (p < 0.001). The highest AUC (0.846) was obtained using a combination of T1WI + logarithm and FS-T2WI + exponential features, which reduced the initial features by 94.0% (from 1015 × 3 to 91 × 2). CE-T1WI did not improve performance. Using one feature category rather than all feature categories combined reduced the number of initial features without compromising radiomic performance.
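Restricting extraction to a single feature category (filtered-image type) per sequence, as done above, can be sketched with pyradiomics as follows; the file paths are hypothetical and the extractor settings are assumptions rather than the study's exact configuration.

```python
# Sketch: extract only one radiomics feature category (image filter) per MRI sequence.
from radiomics import featureextractor

def extract_single_category(image_path, mask_path, image_type):
    """image_type: e.g. 'Logarithm' for T1WI/CE-T1WI or 'Exponential' for FS-T2WI."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    extractor.disableAllImageTypes()
    extractor.enableImageTypeByName(image_type)   # keep only one filtered-image category
    extractor.enableAllFeatures()
    return extractor.execute(image_path, mask_path)

# Hypothetical usage with NIfTI files:
# features = extract_single_category("t1wi.nii.gz", "tumor_mask.nii.gz", "Logarithm")
```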
33
|
Hsu K, Yuh DY, Lin SC, Lyu PS, Pan GX, Zhuang YC, Chang CC, Peng HH, Lee TY, Juan CH, Juan CE, Liu YJ, Juan CJ. Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography. Sci Rep 2022; 12:19809. [PMID: 36396696 PMCID: PMC9672125 DOI: 10.1038/s41598-022-23901-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Accepted: 11/07/2022] [Indexed: 11/18/2022] Open
Abstract
Deep learning allows automatic segmentation of teeth on cone beam computed tomography (CBCT). However, the segmentation performance of deep learning varies among different training strategies. Our aim was to propose a 3.5D U-Net to improve the performance of the U-Net in segmenting teeth on CBCT. This study retrospectively enrolled 24 patients who received CBCT. Five U-Nets (2Da U-Net, 2Dc U-Net, 2Ds U-Net, 2.5Da U-Net, and 3D U-Net) were trained to segment the teeth. Four additional U-Nets (2.5Dv U-Net, 3.5Dv5 U-Net, 3.5Dv4 U-Net, and 3.5Dv3 U-Net) were obtained using majority voting. Mathematical morphology operations, namely erosion and dilation (E&D), were applied to remove diminutive noise speckles. Segmentation performance was evaluated by fourfold cross-validation using the Dice similarity coefficient (DSC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The Kruskal-Wallis test with post hoc analysis using Bonferroni correction was used for group comparison, and P < 0.05 was considered statistically significant. The performance of the U-Nets varied significantly among the different training strategies for teeth segmentation on CBCT (P < 0.05). The 3.5Dv5 U-Net and 2.5Dv U-Net showed DSC and PPV significantly higher than any of the five originally trained U-Nets (all P < 0.05). E&D significantly improved the DSC, accuracy, specificity, and PPV (all P < 0.005). The 3.5Dv5 U-Net achieved the highest DSC and accuracy among all U-Nets. The segmentation performance of the U-Net can be improved by majority voting and E&D; overall, the 3.5Dv5 U-Net achieved the best segmentation performance among all U-Nets.
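The majority-voting and erosion-and-dilation post-processing described above can be sketched as follows with NumPy and SciPy on a toy example; it is illustrative only and not the authors' pipeline.

```python
# Sketch: majority voting across per-view U-Net masks, then erosion and dilation (E&D), scored by DSC.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def majority_vote(masks):
    """masks: list of binary arrays (e.g. predictions from differently trained U-Nets)."""
    stacked = np.stack([m.astype(bool) for m in masks], axis=0)
    return stacked.sum(axis=0) > (len(masks) / 2)

def erode_dilate(mask, iterations=1):
    """Remove small noise speckles (erosion), then restore object size (dilation)."""
    return binary_dilation(binary_erosion(mask, iterations=iterations), iterations=iterations)

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

# Toy example with three noisy copies of a ground-truth mask.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32, 32), dtype=bool)
truth[8:24, 8:24, 8:24] = True
preds = [np.logical_xor(truth, rng.random(truth.shape) < 0.02) for _ in range(3)]
fused = erode_dilate(majority_vote(preds))
print(round(dice(fused, truth), 3))
```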
Affiliation(s)
- Kang Hsu
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC; School of Dentistry and Graduate Institute of Dental Science, National Defense Medical Center, Taipei, Taiwan, ROC
- Da-Yo Yuh
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC
- Shao-Chieh Lin
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Ph.D. Program in Electrical and Communication Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Pin-Sian Lyu
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Guan-Xin Pan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Yi-Chun Zhuang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Chia-Ching Chang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Department of Management Science, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Hsu-Hsia Peng
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Tung-Yang Lee
- Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC; Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-Hsuan Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Master's Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC; Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-En Juan
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Yi-Jui Liu
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Chun-Jung Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, 199, Sec. 1, Xinglong Rd, Zhubei, Hsinchu 302, Taiwan, ROC; Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC; Department of Radiology, School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan, ROC; Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, ROC; Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan, ROC; Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC
|
34
|
Zhu Y, Meng Z, Fan X, Duan Y, Jia Y, Dong T, Wang Y, Song J, Tian J, Wang K, Nie F. Deep learning radiomics of dual-modality ultrasound images for hierarchical diagnosis of unexplained cervical lymphadenopathy. BMC Med 2022; 20:269. [PMID: 36008835 PMCID: PMC9410737 DOI: 10.1186/s12916-022-02469-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 07/07/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Accurate diagnosis of unexplained cervical lymphadenopathy (CLA) from medical images relies heavily on the experience of radiologists, a problem that is even worse for CLA patients in underdeveloped countries and regions because of the lack of expertise and reliable medical histories. This study aimed to develop a deep learning (DL) radiomics model based on B-mode and color Doppler ultrasound images to assist radiologists in diagnosing the etiology of unexplained CLA. METHODS Patients with unexplained CLA who received ultrasound examinations at three hospitals located in underdeveloped areas of China were retrospectively enrolled. All were pathologically confirmed as reactive hyperplasia, tuberculous lymphadenitis, lymphoma, or metastatic carcinoma. Mimicking the diagnostic logic of radiologists, three DL sub-models were developed: a primary diagnosis of benign versus malignant, a secondary diagnosis of reactive hyperplasia versus tuberculous lymphadenitis in benign candidates, and a secondary diagnosis of lymphoma versus metastatic carcinoma in malignant candidates. A CLA hierarchical diagnostic model (CLA-HDM) integrating all sub-models was then proposed to classify the specific etiology of each unexplained CLA. The assistive effectiveness of CLA-HDM was assessed by comparing six radiologists with and without the DL-based classification and heatmap guidance. RESULTS A total of 763 patients with unexplained CLA were enrolled and split into a training cohort (n=395), an internal testing cohort (n=171), and external testing cohorts 1 (n=105) and 2 (n=92). The CLA-HDM for diagnosing the four common etiologies of unexplained CLA achieved AUCs of 0.873 (95% CI: 0.838-0.908), 0.837 (95% CI: 0.789-0.889), and 0.840 (95% CI: 0.789-0.898) in the three testing cohorts, respectively, and was systematically more accurate than all participating radiologists. With its assistance, the accuracy, sensitivity, and specificity of six radiologists with different levels of experience generally improved, reducing the false-negative rate by 2.2-10% and the false-positive rate by 0.7-3.1%. CONCLUSIONS Multi-cohort testing demonstrated that our DL model integrating dual-modality ultrasound images achieved accurate diagnosis of unexplained CLA. With its assistance, the gap between radiologists with different levels of experience was narrowed, which is potentially of great significance for CLA patients in underdeveloped countries and regions worldwide.
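The hierarchical logic of CLA-HDM, a primary benign/malignant model whose output routes each case to one of two secondary sub-models, can be outlined as follows. The sub-models here are placeholders with a scikit-learn-style predict interface, not the trained networks from the paper.

```python
def hierarchical_diagnosis(features, primary, benign_sub, malignant_sub):
    """Return one of four etiologies for a single case using three binary sub-models."""
    if primary.predict([features])[0] == 0:            # step 1: benign vs malignant
        sub = benign_sub.predict([features])[0]        # step 2a: within benign candidates
        return "reactive hyperplasia" if sub == 0 else "tuberculous lymphadenitis"
    sub = malignant_sub.predict([features])[0]         # step 2b: within malignant candidates
    return "lymphoma" if sub == 0 else "metastatic carcinoma"
```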
Affiliation(s)
- Yangyang Zhu
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou University, Lanzhou, 730030, China; CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Zheling Meng
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Xiao Fan
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou University, Lanzhou, 730030, China
- Yin Duan
- Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China
- Yingying Jia
- Department of Ultrasound, People's Hospital of Ningxia Hui Autonomous Region, Yinchuan, China
- Tiantian Dong
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou University, Lanzhou, 730030, China
- Yanfang Wang
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou University, Lanzhou, 730030, China
- Juan Song
- Department of Ultrasound, People's Hospital of Ningxia Hui Autonomous Region, Yinchuan, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Fang Nie
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou University, Lanzhou, 730030, China; Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China; Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
|
35
|
Hu Z, Wang B, Pan X, Cao D, Gao A, Yang X, Chen Y, Lin Z. Using deep learning to distinguish malignant from benign parotid tumors on plain computed tomography images. Front Oncol 2022; 12:919088. [PMID: 35978811 PMCID: PMC9376440 DOI: 10.3389/fonc.2022.919088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 06/28/2022] [Indexed: 11/13/2022] Open
Abstract
Objectives To evaluate the diagnostic efficiency of deep-learning models in distinguishing malignant from benign parotid tumors on plain computed tomography (CT) images. Materials and methods CT images of 283 patients with parotid tumors were retrospectively enrolled and analyzed; 150 tumors were benign and 133 were malignant according to pathology results. A total of 917 regions of interest of parotid tumors were cropped (456 benign and 461 malignant). Three deep-learning networks (ResNet50, VGG16_bn, and DenseNet169) were used for diagnosis, with approximately a 3:1 split for training and testing. The diagnostic efficiencies (accuracy, sensitivity, specificity, and area under the curve [AUC]) of the three networks were calculated and compared on the 917 images. To simulate the process of human diagnosis, a voting model was appended to the networks, and the 283 tumors were classified as benign or malignant. Meanwhile, the 917 tumor images were classified by two radiologists (A and B), and the original CT images were classified by radiologist B. The diagnostic efficiencies of the three deep-learning models (after voting) and the two radiologists were calculated. Results For the 917 CT images, ResNet50 showed high accuracy and sensitivity for diagnosing malignant parotid tumors; the accuracy, sensitivity, specificity, and AUC were 90.8%, 91.3%, 90.4%, and 0.96, respectively. For the 283 tumors, the accuracy, sensitivity, and specificity of ResNet50 (after voting) were 92.3%, 93.5%, and 91.2%, respectively. Conclusion ResNet50 showed high sensitivity in distinguishing malignant from benign parotid tumors on plain CT images, making it a promising auxiliary diagnostic method for screening malignant parotid tumors.
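The voting step described here, several cropped ROIs per tumor with the tumor labelled by the majority of its ROI-level predictions, amounts to a small aggregation routine like the sketch below (hypothetical inputs, not the study's code).

```python
from collections import defaultdict

def vote_per_tumor(roi_predictions, roi_tumor_ids):
    """Aggregate 0/1 ROI-level predictions into one benign(0)/malignant(1) label per tumor."""
    votes = defaultdict(list)
    for pred, tumor_id in zip(roi_predictions, roi_tumor_ids):
        votes[tumor_id].append(pred)
    return {tumor: int(sum(v) > len(v) / 2) for tumor, v in votes.items()}

print(vote_per_tumor([1, 1, 0, 0, 0], ["A", "A", "A", "B", "B"]))  # {'A': 1, 'B': 0}
```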
Affiliation(s)
- Ziyang Hu
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Baixin Wang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Xiao Pan
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Dantong Cao
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Antian Gao
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Xudong Yang
- Department of Oral and Maxillofacial Surgery, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Ying Chen
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Zitong Lin
- Department of Dentomaxillofacial Radiology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- *Correspondence: Zitong Lin; Ying Chen; Xudong Yang
|
36
|
Machine learning-based radiomics for histological classification of parotid tumors using morphological MRI: a comparative study. Eur Radiol 2022; 32:8099-8110. [PMID: 35748897 DOI: 10.1007/s00330-022-08943-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Revised: 05/30/2022] [Accepted: 06/02/2022] [Indexed: 11/04/2022]
Abstract
OBJECTIVES To evaluate the effectiveness of machine learning models based on morphological magnetic resonance imaging (MRI) radiomics in the classification of parotid tumors. METHODS In total, 298 patients with parotid tumors were randomly assigned to a training and test set at a ratio of 7:3. Radiomics features were extracted from the morphological MRI images and screened using the SelectKBest and LASSO algorithms. Three-step machine learning models with the XGBoost, SVM, and DT algorithms were developed to classify the parotid neoplasms into four subtypes. The ROC curve was used to measure the performance in each step. Diagnostic confusion matrices of these models were calculated for the test cohort and compared with those of the radiologists. RESULTS Six, twelve, and eight optimal features were selected in the three steps, respectively. XGBoost produced the highest area under the curve (AUC) for all three steps in the training cohort (0.857, 0.882, and 0.908, respectively) and for the first step in the test cohort (0.826), but produced slightly lower AUCs than SVM in the latter two steps in the test cohort (0.817 vs. 0.833, and 0.789 vs. 0.821, respectively). The total accuracies of XGBoost and SVM in the confusion matrices (70.8% and 59.6%) outperformed those of DT and the radiologist (46.1% and 49.2%). CONCLUSION This study demonstrated that machine learning models based on morphological MRI radiomics might be an assistive tool for parotid tumor classification, especially for preliminary screening in the absence of more advanced scanning sequences, such as DWI. KEY POINTS • Machine learning algorithms combined with morphological MRI radiomics could be useful in the preliminary classification of parotid tumors. • The XGBoost algorithm performed better than SVM and DT in subtype differentiation of parotid tumors, while DT seemed to have poor validation performance. • Using morphological MRI only, the XGBoost and SVM algorithms outperformed radiologists in the four-type classification task for parotid tumors, making these models a useful assistive diagnostic tool in clinical practice.
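One binary step of the screening-and-classification scheme named above (SelectKBest, then LASSO, then a gradient-boosted classifier) might look like the sketch below; it uses made-up features and the xgboost package's scikit-learn-style XGBClassifier as a stand-in for the paper's XGBoost model.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(298, 500))                                   # hypothetical radiomics features
y = (X[:, 0] + X[:, 1] + rng.normal(size=298) > 0).astype(int)    # one binary step of the task

X_kbest = SelectKBest(f_classif, k=50).fit_transform(X, y)        # first screening step
lasso = LassoCV(cv=5).fit(X_kbest, y)                             # second screening step
X_sel = X_kbest[:, lasso.coef_ != 0] if np.any(lasso.coef_) else X_kbest

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=200).fit(X_tr, y_tr)
print("test AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```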
|
37
|
Wen B, Zhang Z, Zhu J, Liu L, Li Y, Huang H, Zhang Y, Cheng J. Apparent Diffusion Coefficient Map–Based Radiomics Features for Differential Diagnosis of Pleomorphic Adenomas and Warthin Tumors From Malignant Tumors. Front Oncol 2022; 12:830496. [PMID: 35747827 PMCID: PMC9210443 DOI: 10.3389/fonc.2022.830496] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Accepted: 04/25/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose The magnetic resonance imaging (MRI) findings of parotid gland tumors may overlap because of their complex content and the differentiation level of malignant tumors (MTs); consequently, patients may undergo diagnostic lobectomy. This study assessed whether radiomics features based on apparent diffusion coefficient (ADC) maps could noninvasively and accurately stratify parotid gland tumors. Methods Diffusion-weighted imaging (DWI) obtained with echo planar imaging sequences was examined. Eighty-eight benign tumors (BTs) [54 pleomorphic adenomas (PAs) and 34 Warthin tumors (WTs)] and 42 MTs of the parotid gland were enrolled. Cases were randomly divided into training and testing cohorts at a ratio of 7:3, and the tumor groups were compared pairwise. ADC maps were digitally transferred to ITK-SNAP (www.itksnap.org), and the region of interest (ROI) was manually drawn around the whole tumor margin on each slice of the ADC maps. After feature extraction, the Synthetic Minority Oversampling Technique (SMOTE) was used to correct the class imbalance of the training dataset, and the feature matrix was then normalized. To reduce redundancy, the Pearson correlation coefficient (PCC) was calculated for each feature pair, and one feature of any pair with a PCC larger than 0.95 was eliminated. Recursive feature elimination (RFE) was then used for feature selection, and linear discriminant analysis (LDA) was used as the classifier. Receiver operating characteristic (ROC) curve analysis was used to evaluate the diagnostic performance of the ADC-based models. Results The LDA models based on 13, 8, 3, and 1 features achieved the highest area under the ROC curve (AUC) in differentiating BT from MT, PA from WT, PA from MT, and WT from MT on the validation dataset, respectively. Accordingly, the AUC and accuracy of the models on the testing set were 0.7637 and 73.17%, 0.925 and 92.31%, 0.8077 and 75.86%, and 0.5923 and 65.22%, respectively. Conclusion ADC-based radiomics features may assist clinicians in the differential diagnosis of PA and WT from MTs.
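The preprocessing and modelling chain spelled out in this abstract (SMOTE, normalization, Pearson-correlation filtering at 0.95, RFE, LDA) maps fairly directly onto standard scikit-learn/imbalanced-learn calls. The sketch below uses a random placeholder feature matrix and is only an illustration of that chain, not the authors' code.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(91, 200))                   # placeholder ADC-map radiomics features
y = np.array([0] * 61 + [1] * 30)                # imbalanced two-class training labels

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)    # balance the training data
X_res = StandardScaler().fit_transform(X_res)               # normalize the feature matrix

corr = np.corrcoef(X_res, rowvar=False)                     # Pearson correlation of feature pairs
kept = []
for j in range(corr.shape[1]):                              # keep a feature only if it is not
    if all(abs(corr[j, k]) <= 0.95 for k in kept):          # highly correlated with a kept one
        kept.append(j)
X_res = X_res[:, kept]

selector = RFE(LinearDiscriminantAnalysis(), n_features_to_select=13).fit(X_res, y_res)
model = LinearDiscriminantAnalysis().fit(selector.transform(X_res), y_res)
```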
Affiliation(s)
- Baohong Wen
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Zanxia Zhang
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Jing Zhu
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Liang Liu
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yinhua Li
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Haoyu Huang
- Advanced Technical Support, Philips Healthcare, Shanghai, China
- Yong Zhang
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Jingliang Cheng
- Department of MRI, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- *Correspondence: Jingliang Cheng
|
38
|
Deep learning model developed by multiparametric MRI in differential diagnosis of parotid gland tumors. Eur Arch Otorhinolaryngol 2022; 279:5389-5399. [PMID: 35596805 DOI: 10.1007/s00405-022-07455-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 05/16/2022] [Indexed: 11/03/2022]
Abstract
PURPOSE To create a new artificial intelligence approach based on deep learning (DL) from multiparametric MRI for the differential diagnosis of common parotid tumors. METHODS Parotid tumors were classified using the InceptionResNetV2 DL model and a majority voting approach with MRI images of 123 patients. The study was conducted in three stages. At stage I, classification of the control, pleomorphic adenoma, Warthin tumor, and malignant tumor (MT) groups was examined using two approaches, in which the MRI sequences were given in combined and non-combined forms. At stage II, the benign tumor, MT, and control groups were classified. At stage III, patients with a tumor in the parotid gland were distinguished from those with a healthy parotid gland. RESULTS At stage I, the classification accuracy of the non-combined and combined approaches was 86.43% and 92.86%, respectively. At stages II and III, the accuracy was 92.14% and 99.29%, respectively. CONCLUSIONS The approach presented in this study classifies parotid tumors automatically and with high accuracy using DL models.
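A bare-bones version of the two ingredients named here, an ImageNet-pretrained InceptionResNetV2 backbone with a small classification head plus majority voting over per-sequence predictions, is sketched below in tf.keras; the input size, head design, and voting helper are assumptions for illustration, not details from the paper.

```python
import numpy as np
import tensorflow as tf

# Pretrained backbone with a 4-class head (control / pleomorphic adenoma / Warthin / malignant).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
outputs = tf.keras.layers.Dense(4, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

def majority_vote(predicted_classes):
    """Pick the class predicted most often across a patient's sequences/slices."""
    return int(np.bincount(np.asarray(predicted_classes)).argmax())

print(majority_vote([1, 1, 3]))  # -> 1
```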
|
39
|
Juan CJ, Huang TY, Liu YJ, Shen WC, Wang CW, Hsu K, Shin N, Chang RF. Improving diagnosing performance for malignant parotid gland tumors using machine learning with multifeatures based on diffusion-weighted magnetic resonance imaging. NMR IN BIOMEDICINE 2022; 35:e4642. [PMID: 34738671 DOI: 10.1002/nbm.4642] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 09/18/2021] [Accepted: 10/10/2021] [Indexed: 06/13/2023]
Abstract
In this study, the performance of machine learning in classifying parotid gland tumors was evaluated based on diffusion-related features obtained from the parotid gland tumor, the peritumor parotid gland, and the contralateral parotid gland. Seventy-eight patients underwent magnetic resonance diffusion-weighted imaging. Three regions of interest, comprising the parotid gland tumor, the peritumor parotid gland, and the contralateral parotid gland, were manually contoured for 92 tumors, including 20 malignant tumors (MTs), 42 Warthin tumors (WTs), and 30 pleomorphic adenomas (PMAs). Multiple apparent diffusion coefficient (ADC) features were recorded, and a machine-learning method was applied to these features to classify the three tumor types. With only the mean ADC of the tumors, the area under the curve of the classification model was 0.63, 0.85, and 0.87 for MTs, WTs, and PMAs, respectively. With multiple features, these values improved to 0.81, 0.89, and 0.92, respectively. Apart from the ADC features of the parotid gland tumor, the features of the peritumor and contralateral parotid glands proved advantageous for tumor classification. Combining machine learning with multiple features provides excellent discrimination of tumor types and can be a practical tool in the clinical diagnosis of parotid gland tumors.
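The feature layout implied here, ADC statistics computed separately for the tumor, the peritumor gland, and the contralateral gland and then concatenated per tumor, can be illustrated in a few lines of Python; the statistics chosen below are generic examples, not the paper's exact feature list.

```python
import numpy as np

def adc_features(adc_values):
    """A few first-order statistics of the ADC values inside one ROI."""
    v = np.asarray(adc_values, dtype=float)
    return [v.mean(), v.std(), np.percentile(v, 10), np.percentile(v, 90)]

# Hypothetical ADC samples (mm^2/s) for the three contoured regions of one tumor.
tumor, peritumor, contralateral = (np.random.rand(500) * 2e-3 for _ in range(3))
feature_vector = adc_features(tumor) + adc_features(peritumor) + adc_features(contralateral)
print(len(feature_vector))  # 12 features per tumor, ready for a classifier
```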
Affiliation(s)
- Chun-Jung Juan
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, Republic of China
- Department of Medical Imaging, China Medical University Hsinchu Hospital, Hsinchu, Taiwan, Republic of China
- Department of Radiology, School of Medicine, China Medical University, Taichung, Taiwan, Republic of China
- Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, Republic of China
- Teng-Yi Huang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, Republic of China
- Yi-Jui Liu
- Department of Automatic Control Engineering, Feng Chia University, Taichung, Taiwan, Republic of China
- Wu-Chung Shen
- Department of Radiology, School of Medicine, China Medical University, Taichung, Taiwan, Republic of China
- Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, Republic of China
- Chih-Wei Wang
- Department of Radiology, Tri-Service General Hospital and National Defense Medical Center, Taipei, Taiwan, Republic of China
- Kang Hsu
- Department of Dentistry, Tri-Service General Hospital, Taipei, Taiwan, Republic of China
- Nieh Shin
- Department of Pathology and Graduate Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan, Republic of China
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, Republic of China
|
40
|
Radiomics and deep learning approach to the differential diagnosis of parotid gland tumors. Curr Opin Otolaryngol Head Neck Surg 2021; 30:107-113. [PMID: 34907957 DOI: 10.1097/moo.0000000000000782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW Advances in computer technology and growing expectations from computer-aided systems have led to the evolution of artificial intelligence into subsets, such as deep learning and radiomics, and the use of these systems is revolutionizing modern radiological diagnosis. In this review, artificial intelligence applications developed with radiomics and deep learning methods in the differential diagnosis of parotid gland tumors (PGTs) will be overviewed. RECENT FINDINGS The development of artificial intelligence models has opened new scenarios owing to the possibility of assessing features of medical images that usually are not evaluated by physicians. Radiomics and deep learning models come to the forefront in computer-aided diagnosis of medical images, even though their applications in the differential diagnosis of PGTs have been limited because of the scarcity of data sets related to these rare neoplasms. Nevertheless, recent studies have shown that artificial intelligence tools can classify common PGTs with reasonable accuracy. SUMMARY All studies aimed at the differential diagnosis of benign vs. malignant PGTs or the identification of the commonest PGT subtypes were identified, and five studies were found that focused on deep learning-based differential diagnosis of PGTs. Data sets were created in three of these studies with MRI and in two with computed tomography (CT). An additional seven studies were related to radiomics: four were on MRI-based radiomics, two on CT-based radiomics, and one compared MRI- and CT-based radiomics in the same patients.
|
41
|
Mori Y, Yokota H, Hoshino I, Iwatate Y, Wakamatsu K, Uno T, Suyari H. Deep learning-based gene selection in comprehensive gene analysis in pancreatic cancer. Sci Rep 2021; 11:16521. [PMID: 34389782 PMCID: PMC8363643 DOI: 10.1038/s41598-021-95969-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 07/29/2021] [Indexed: 12/14/2022] Open
Abstract
Selecting important genes from gene expression data is challenging. Here, we developed a deep learning-based feature selection method suitable for gene selection. Our novel deep learning model includes an additional feature-selection layer; after model training, the units in this layer with high weights correspond to the genes that contributed effectively to the processing of the network. Cancer tissue samples and adjacent normal pancreatic tissue samples were collected from 13 patients with pancreatic ductal adenocarcinoma during surgery and subsequently frozen. After processing, gene expression data were extracted from the specimens using RNA sequencing. Task 1 for model training was to discriminate between cancerous and normal pancreatic tissue in six patients. Task 2 was to discriminate, among the 13 patients with pancreatic cancer, those who survived for more than one year after surgery from those who did not. The most frequently selected genes were ACACB, ADAMTS6, NCAM1, and CADPS in Task 1, and CD1D, PLA2G16, DACH1, and SOWAHA in Task 2. According to The Cancer Genome Atlas dataset, these genes are all prognostic factors for pancreatic cancer. Thus, the feasibility of using our deep learning-based method to select genes associated with pancreatic cancer development and prognosis was confirmed.
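The core architectural idea described here, an extra feature-selection layer whose trainable per-gene weights scale the input so that genes with large learned weights can be read off as "selected", can be sketched in PyTorch as below. The layer sizes and two-class head are assumptions for illustration, not the authors' exact network.

```python
import torch
import torch.nn as nn

class FeatureSelectionNet(nn.Module):
    def __init__(self, n_genes, n_hidden=64, n_classes=2):
        super().__init__()
        self.gene_weights = nn.Parameter(torch.ones(n_genes))   # one trainable weight per gene
        self.classifier = nn.Sequential(
            nn.Linear(n_genes, n_hidden), nn.ReLU(), nn.Linear(n_hidden, n_classes))

    def forward(self, x):
        return self.classifier(x * self.gene_weights)            # elementwise gene scaling

model = FeatureSelectionNet(n_genes=20000)
# After training, genes are ranked by the magnitude of the learned weights.
top_genes = torch.topk(model.gene_weights.abs(), k=10).indices
```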
Affiliation(s)
- Yasukuni Mori
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
- Hajime Yokota
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8670, Japan
- Isamu Hoshino
- Division of Gastroenterological Surgery, Chiba Cancer Center, 666-2 Nitona-cho, Chuo-ku, Chiba-shi, Chiba, 260-8717, Japan
- Yosuke Iwatate
- Division of Hepato-Biliary-Pancreatic Surgery, Chiba Cancer Center, 666-2 Nitona-cho, Chuo-ku, Chiba-shi, Chiba, 260-8717, Japan
- Kohei Wakamatsu
- Media Data Tech Studio, CyberAgent, Inc., 13F Akihabara Daibiru, 1-18-13 Sotokanda, Chiyoda-ku, Tokyo, 101-0021, Japan
- Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8670, Japan
- Hiroki Suyari
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
|
42
|
Piludu F, Marzi S, Ravanelli M, Pellini R, Covello R, Terrenato I, Farina D, Campora R, Ferrazzoli V, Vidiri A. MRI-Based Radiomics to Differentiate between Benign and Malignant Parotid Tumors With External Validation. Front Oncol 2021; 11:656918. [PMID: 33987092 PMCID: PMC8111169 DOI: 10.3389/fonc.2021.656918] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 04/08/2021] [Indexed: 12/23/2022] Open
Abstract
Background The differentiation between benign and malignant parotid lesions is crucial to defining the treatment plan, which highly depends on the tumor histology. We aimed to evaluate the role of MRI-based radiomics using both T2-weighted (T2-w) images and Apparent Diffusion Coefficient (ADC) maps in the differentiation of parotid lesions, in order to develop predictive models with an external validation cohort. Materials and Methods A sample of 69 untreated parotid lesions was evaluated retrospectively, including 37 benign (of which 13 were Warthin's tumors) and 32 malignant tumors. The patient population was divided into three groups: benign lesions (24 cases), Warthin's lesions (13 cases), and malignant lesions (32 cases), which were compared in pairs. First- and second-order features were derived for each lesion. Margins and contrast enhancement patterns (CE) were qualitatively assessed. The model with the final feature set was obtained using the support vector machine binary classification algorithm. Results The models for discriminating between Warthin's and malignant tumors, benign and Warthin's tumors, and benign and malignant tumors had accuracies of 86.7%, 91.9%, and 80.4%, respectively. After the feature selection process, four parameters were used for each model, including histogram-based features from ADC and T2-w images, shape-based features, and types of margins and/or CE. Comparable accuracies were obtained after validation with the external cohort. Conclusions Radiomic analysis of ADC, T2-w images, and qualitative scores evaluating margins and CE allowed us to obtain good to excellent diagnostic accuracies in differentiating parotid lesions, which were confirmed with an external validation cohort.
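The final model form described here, a handful of quantitative histogram/shape features plus coded qualitative scores fed to a support vector machine, corresponds to a very small scikit-learn pipeline; everything in the sketch below (feature values, coding of margins) is hypothetical.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
histogram_feats = rng.normal(size=(69, 3))          # e.g. ADC/T2-w histogram features
margin_score = rng.integers(0, 2, size=(69, 1))     # qualitative margins: 0 regular, 1 irregular
X = np.hstack([histogram_feats, margin_score])      # four features, as in each final model
y = rng.integers(0, 2, size=69)                     # e.g. Warthin vs malignant labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)).fit(X, y)
print(clf.predict_proba(X[:2]))                     # per-class probabilities for two lesions
```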
Affiliation(s)
- Francesca Piludu
- Radiology and Diagnostic Imaging Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Simona Marzi
- Medical Physics Laboratory, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Marco Ravanelli
- Department of Radiology, University of Brescia, Brescia, Italy
- Raul Pellini
- Department of Otolaryngology & Head and Neck Surgery, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Renato Covello
- Department of Pathology, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Irene Terrenato
- Biostatistics-Scientific Direction, IRCCS Regina Elena National Cancer Institute, Rome, Italy
- Davide Farina
- Department of Radiology, University of Brescia, Brescia, Italy
- Valentina Ferrazzoli
- Department of Biomedicine and Prevention, University of Rome "Tor Vergata", Rome, Italy
- Antonello Vidiri
- Radiology and Diagnostic Imaging Department, IRCCS Regina Elena National Cancer Institute, Rome, Italy
|
43
|
Song LL, Chen SJ, Chen W, Shi Z, Wang XD, Song LN, Chen DS. Radiomic model for differentiating parotid pleomorphic adenoma from parotid adenolymphoma based on MRI images. BMC Med Imaging 2021; 21:54. [PMID: 33743615 PMCID: PMC7981906 DOI: 10.1186/s12880-021-00581-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 03/07/2021] [Indexed: 01/04/2023] Open
Abstract
Background Distinguishing parotid pleomorphic adenoma (PPA) from parotid adenolymphoma (PA) is important for precision treatment, but readily available diagnostic methods are lacking. In this study, we aimed to explore the diagnostic value of radiomic signatures based on magnetic resonance imaging (MRI) for PPA and PA. Methods Clinical characteristics and imaging data were retrospectively collected from 252 cases (126 cases in the training cohort and 76 patients in the validation cohort). Radiomic features were extracted from MRI scans, including T1-weighted imaging (T1WI) and T2-weighted imaging (T2WI) sequences. Radiomic features from three feature sets (T1WI, T2WI, and T1WI combined with T2WI) were selected using univariate analysis, LASSO, and Spearman correlation. We then built six quantitative radiomic models from the selected features using two machine learning methods (multivariable logistic regression, MLR, and support vector machine, SVM). The performances of the six radiomic models were assessed, and the diagnostic efficacies of the best T1-2WI radiomic model and the clinical model were compared. Results The T1-2WI radiomic model using MLR showed the best discriminatory ability (accuracy = 0.87 and 0.86, F1 score = 0.88 and 0.86, sensitivity = 0.90 and 0.88, specificity = 0.82 and 0.80, positive predictive value = 0.86 and 0.84, and negative predictive value = 0.86 and 0.84 in the training and validation cohorts, respectively), and its calibration was good (p > 0.05). The area under the curve (AUC) of the T1-2WI radiomic model was significantly higher than that of the clinical model in both the training (0.95 vs. 0.67, p < 0.001) and validation (0.90 vs. 0.68, p = 0.001) cohorts. Conclusions The T1-2WI radiomic model in our study is complementary to current knowledge on the differential diagnosis of PPA and PA. Supplementary Information The online version contains supplementary material available at 10.1186/s12880-021-00581-9.
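The head-to-head comparison reported here, the same selected radiomic features fitted with multivariable logistic regression and with an SVM and compared by AUC on a held-out cohort, reduces to a few lines of scikit-learn; the data below are synthetic stand-ins, not the study's features or labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(252, 12))                                     # features already screened
y = (X[:, 0] - X[:, 1] + rng.normal(size=252) > 0).astype(int)     # PPA vs PA stand-in labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, clf in [("MLR", LogisticRegression(max_iter=1000)), ("SVM", SVC(probability=True))]:
    probs = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name} AUC: {roc_auc_score(y_te, probs):.3f}")
```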
Affiliation(s)
- Le-le Song
- The Department of Radiology, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
- Shun-Jun Chen
- The Department of Ultrasound, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
- Wang Chen
- The Department of Radiology, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
- Zhan Shi
- The Department of Radiology, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
- Xiao-Dong Wang
- The Department of Radiology, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
- Li-Na Song
- Liver Cancer Institute, Zhongshan Hospital, Fudan University, Shanghai, China
- Dian-Sen Chen
- The Department of Radiology, the First Affiliated Hospital of Henan University of Science and Technology, Luoyang, Henan, China
|