1
Tareke TW, Leclerc S, Vuillemin C, Buffier P, Crevisy E, Nguyen A, Monnier Meteau MP, Legris P, Angiolini S, Lalande A. Automatic Classification of Nodules from 2D Ultrasound Images Using Deep Learning Networks. J Imaging 2024; 10:203. PMID: 39194992. DOI: 10.3390/jimaging10080203.
Abstract
OBJECTIVE In clinical practice, thyroid nodules are typically evaluated visually by expert physicians using 2D ultrasound images, and a fine needle aspiration (FNA) may be recommended based on this assessment. However, visual classification of thyroid nodules from ultrasound images can lead to unnecessary FNAs. The aim of this study is to develop an automatic thyroid ultrasound image classification system to prevent unnecessary FNAs. METHODS An automatic computer-aided artificial intelligence system is proposed for classifying thyroid nodules using a fine-tuned deep learning model based on the DenseNet architecture with an attention module. The dataset comprises 591 thyroid nodule images categorized by Bethesda score, with each nodule classified as either requiring FNA or not. The challenges in this task include variability in image quality, artifacts in ultrasound image datasets, class imbalance, and model interpretability. We employed data augmentation, class weighting, and gradient-weighted class activation maps (Grad-CAM) to enhance model performance and provide insight into decision making. RESULTS Our approach achieved excellent results, with an average accuracy of 0.94, F1-score of 0.93, and sensitivity of 0.96. Grad-CAM provides insight into the decision making and thereby reinforces the reliability of the binary classification from the end-user's perspective. CONCLUSIONS We propose a deep learning architecture that effectively classifies thyroid nodules as requiring FNA or not from ultrasound images. Despite challenges related to image variability, class imbalance, and interpretability, our method demonstrated high classification accuracy with minimal false negatives, showing its potential to reduce unnecessary FNAs in clinical settings.
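The Grad-CAM step described in this abstract can be sketched independently of any particular network: the channel weights are the spatially averaged gradients of the class score, and the map is the ReLU of the weighted sum of feature maps. A minimal pure-Python sketch; the 2x2 feature maps and gradients below are made-up toy values, not outputs of the paper's DenseNet:

```python
def grad_cam(activations, gradients):
    """Grad-CAM over a list of C feature maps (each an HxW nested list).

    activations[c][i][j] : activation of channel c at pixel (i, j)
    gradients[c][i][j]   : d(class score)/d(activation) at the same position
    Returns the HxW class-activation map: ReLU(sum_c w_c * A_c),
    where w_c is the spatial mean of channel c's gradients.
    """
    C = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Channel importance: global-average-pool the gradients.
    weights = [sum(sum(row) for row in gradients[c]) / (H * W) for c in range(C)]
    cam = [[0.0] * W for _ in range(H)]
    for c in range(C):
        for i in range(H):
            for j in range(W):
                cam[i][j] += weights[c] * activations[c][i][j]
    # ReLU: keep only regions that positively support the class.
    return [[max(0.0, v) for v in row] for row in cam]

# Toy example: two 2x2 channels; channel 0 gets weight +1, channel 1 weight -1.
acts = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 2.0], [2.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(acts, grads)
print(cam)  # only the diagonal (channel-0 supported) pixels survive the ReLU
```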
Affiliation(s)
- Tewele W Tareke
- ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 7 Bld Jeanne d'Arc, 21000 Dijon, France
- Sarah Leclerc
- ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 7 Bld Jeanne d'Arc, 21000 Dijon, France
- Perrine Buffier
- Department of Endocrinology-Diabetology, University Hospital, 21000 Dijon, France
- Elodie Crevisy
- Department of Endocrinology-Diabetology, University Hospital, 21000 Dijon, France
- Amandine Nguyen
- Department of Endocrinology-Diabetology, University Hospital, 21000 Dijon, France
- Pauline Legris
- Department of Endocrinology-Diabetology, University Hospital, 21000 Dijon, France
- Serge Angiolini
- Medical Imaging Department, Hospital of Bastia, 20600 Bastia, France
- Alain Lalande
- ICMUB Laboratory, UMR CNRS 6302, University of Burgundy, 7 Bld Jeanne d'Arc, 21000 Dijon, France
- Department of Medical Imaging, University Hospital of Dijon, 21000 Dijon, France
2
Vahdati S, Khosravi B, Robinson KA, Rouzrokh P, Moassefi M, Akkus Z, Erickson BJ. A Multi-View Deep Learning Model for Thyroid Nodules Detection and Characterization in Ultrasound Imaging. Bioengineering (Basel) 2024; 11:648. PMID: 39061730. PMCID: PMC11273835. DOI: 10.3390/bioengineering11070648.
Abstract
Thyroid ultrasound (US) is the primary method for evaluating thyroid nodules, and deep learning (DL) has been playing a significant role in evaluating thyroid cancer. We propose a DL-based pipeline that detects thyroid nodules and classifies them as benign or malignant using two views of US imaging. Transverse and longitudinal US images of thyroid nodules from 983 patients were collected retrospectively. Eighty-one cases were held out as a testing set, and the rest of the data were used in five-fold cross-validation (CV). Two You Only Look Once (YOLO) v5 models were trained to detect and classify nodules. For each view, the five models developed during CV were ensembled using non-maximum suppression (NMS) to boost their collective generalizability. An extreme gradient boosting (XGBoost) model was trained on the outputs of the ensembled models for both views to yield a final malignancy prediction for each nodule. The test set was also evaluated by an expert radiologist using the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS). The ensemble models achieved a mAP@0.5 of 0.797 (transverse) and 0.716 (longitudinal). The whole pipeline reached an AUROC of 0.84 (95% CI: 0.75-0.91) with sensitivity and specificity of 84% and 63%, respectively, while the ACR-TIRADS evaluation of the same set had a sensitivity of 76% and specificity of 34% (p-value = 0.003). Our work demonstrates the potential of a deep learning model to achieve strong diagnostic performance for thyroid nodule evaluation.
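The non-maximum suppression used here to merge the five cross-validation detectors can be sketched generically: sort detections by confidence, keep the best, and drop any lower-scoring box that overlaps it too much. The IoU threshold of 0.5 below is an illustrative choice, not a value taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones.

    Returns the indices of the surviving boxes, best first.
    """
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two detectors firing on the same nodule plus one distinct detection.
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores)
print(kept)  # [0, 2]: the overlapping 0.8 box is suppressed
```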
Affiliation(s)
- Sanaz Vahdati
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Bardia Khosravi
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Kathryn A. Robinson
- Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Pouria Rouzrokh
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Mana Moassefi
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Zeynettin Akkus
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Jacksonville, FL 32224, USA
- Bradley J. Erickson
- Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN 55905, USA
3
Zhuang L, Ivezic V, Feng J, Shen C, Radhachandran A, Sant V, Patel M, Masamed R, Arnold C, Speier W. Patient-level thyroid cancer classification using attention multiple instance learning on fused multi-scale ultrasound image features. AMIA Annu Symp Proc 2024; 2023:1344-1353. PMID: 38222341. PMCID: PMC10785838.
Abstract
For patients with thyroid nodules, the ability to detect and diagnose a malignant nodule is the key to creating an appropriate treatment plan. However, assessments of ultrasound images do not accurately represent malignancy, and often require a biopsy to confirm the diagnosis. Deep learning techniques can classify thyroid nodules from ultrasound images, but current methods depend on manually annotated nodule segmentations. Furthermore, the heterogeneity in the level of magnification across ultrasound images presents a significant obstacle to existing methods. We developed a multi-scale, attention-based multiple-instance learning model which fuses both global and local features of different ultrasound frames to achieve patient-level malignancy classification. Our model demonstrates improved performance with an AUROC of 0.785 (p<0.05) and AUPRC of 0.539, significantly surpassing the baseline model trained on clinical features with an AUROC of 0.667 and AUPRC of 0.444. Improved classification performance better triages the need for biopsy.
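The attention-based multiple-instance pooling at the heart of this approach weights each frame's embedding by an attention score and sums them into one patient-level representation. A minimal sketch with a fixed scoring vector `w` standing in for the learned attention parameters (the frame embeddings are toy values):

```python
import math

def attention_mil_pool(instances, w):
    """Pool a bag of instance embeddings into one bag-level embedding.

    Each instance h_k gets a scalar score w . h_k; a softmax over the bag
    turns the scores into attention weights; the bag embedding is the
    attention-weighted sum of the instances.
    """
    scores = [sum(wi * hi for wi, hi in zip(w, h)) for h in instances]
    m = max(scores)                       # stabilise the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(instances[0])
    bag = [sum(attn[k] * instances[k][d] for k in range(len(instances)))
           for d in range(dim)]
    return bag, attn

# Three ultrasound-frame embeddings; w = [2, 0] is a made-up stand-in
# for the trained attention parameters.
frames = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bag, attn = attention_mil_pool(frames, w=[2.0, 0.0])
print(attn)  # weights sum to 1; frames with larger w . h dominate the bag
```

The bag embedding would then feed a patient-level malignancy classifier, which is where the attention weights also provide per-frame interpretability.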
Affiliation(s)
- Luoting Zhuang
- Medical Informatics Home Area, University of California, Los Angeles, CA, USA
- Vedrana Ivezic
- Medical Informatics Home Area, University of California, Los Angeles, CA, USA
- Jeffrey Feng
- Medical Informatics Home Area, University of California, Los Angeles, CA, USA
- Chushu Shen
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Vivek Sant
- Section of Endocrine Surgery, Department of Surgery, University of California, Los Angeles, CA, USA
- Maitraya Patel
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- Rinat Masamed
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- Corey Arnold
- Medical Informatics Home Area, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- William Speier
- Medical Informatics Home Area, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
4
Zhang N, Liu J, Jin Y, Duan W, Wu Z, Cai Z, Wu M. An adaptive multi-modal hybrid model for classifying thyroid nodules by combining ultrasound and infrared thermal images. BMC Bioinformatics 2023; 24:315. PMID: 37598159. PMCID: PMC10440038. DOI: 10.1186/s12859-023-05446-2.
Abstract
BACKGROUND Ultrasound (US) and infrared thermography (IRT) are two non-invasive, radiation-free, and inexpensive imaging technologies widely employed in medical applications. The US image primarily conveys morphological information about a lesion (size, shape, contour boundary, echo), while the infrared thermal image primarily describes its thermodynamic function. Although distinguishing between benign and malignant thyroid nodules requires both morphological and functional information, present deep learning models are based only on US images, so some malignant nodules with insignificant morphological changes but significant functional changes may go undetected. RESULTS Given that US and IRT images present thyroid nodules through distinct modalities, we propose an Adaptive multi-modal Hybrid (AmmH) classification model that leverages the combination of the two image types to achieve superior classification performance. AmmH constructs a hybrid single-modal encoder for each modality, integrating a CNN module and a Transformer module to extract both local and global features. The features extracted from the two modalities are weighted adaptively by an adaptive modality-weight generation network and fused by an adaptive cross-modal encoder module; the fused features are then used to classify thyroid nodules through an MLP. On the collected dataset, the AmmH model achieved F1 and F2 scores of 97.17% and 97.38%, respectively, significantly outperforming the single-modal models. Four ablation experiments further show the superiority of the proposed method.
CONCLUSIONS The proposed multi-modal model extracts features from images of different modalities, enhancing the comprehensiveness of thyroid nodule descriptions. The adaptive modality-weight generation network enables adaptive attention to the different modalities, and the adaptive cross-modal encoder fuses their features using adaptive weights. The model demonstrates promising classification performance, indicating its potential as a non-invasive, radiation-free, and cost-effective screening tool for distinguishing between benign and malignant thyroid nodules. The source code is available at https://github.com/wuliZN2020/AmmH.
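The adaptive weighting idea can be sketched in miniature: per-modality gating scores are turned into positive weights that sum to one, and the modality features are fused as a weighted sum. In the paper the gates come from a trained modality-weight generation network; the fixed `gate_us` / `gate_irt` scalars below are toy stand-ins for its outputs:

```python
import math

def adaptive_fuse(feat_us, feat_irt, gate_us, gate_irt):
    """Adaptively weight two modality feature vectors, then fuse them.

    A softmax over the two gate scores yields weights that are positive
    and sum to 1; the fused vector is the weighted sum of the features.
    """
    m = max(gate_us, gate_irt)                       # softmax stabiliser
    e_us, e_irt = math.exp(gate_us - m), math.exp(gate_irt - m)
    z = e_us + e_irt
    w_us, w_irt = e_us / z, e_irt / z
    fused = [w_us * a + w_irt * b for a, b in zip(feat_us, feat_irt)]
    return fused, (w_us, w_irt)

fused, weights = adaptive_fuse([1.0, 0.0], [0.0, 1.0], gate_us=1.0, gate_irt=1.0)
print(fused, weights)  # equal gates -> equal 0.5/0.5 weighting
```

Raising one gate shifts the fused representation toward that modality, which is the mechanism that lets functionally conspicuous but morphologically bland nodules lean on the IRT branch.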
Affiliation(s)
- Na Zhang
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Juan Liu
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Yu Jin
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Wensi Duan
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Ziling Wu
- Department of Ultrasound, Zhongnan Hospital, Wuhan University, Wuhan, 430072 China
- Zhaohui Cai
- Institute of Artificial Intelligence, School of Computer Science, Wuhan University, Wuhan, 430072 China
- Meng Wu
- Department of Ultrasound, Zhongnan Hospital, Wuhan University, Wuhan, 430072 China
5
Li D, Li X, Li S, Qi M, Sun X, Hu G. Relationship between the deep features of the full-scan pathological map of mucinous gastric carcinoma and related genes based on deep learning. Heliyon 2023; 9:e14374. PMID: 36942252. PMCID: PMC10023952. DOI: 10.1016/j.heliyon.2023.e14374.
Abstract
Background Long-term differential expression of disease-associated genes is a crucial driver of pathological change in mucinous gastric carcinoma. There should therefore be a correlation between deep features extracted from pathology-based full-scan images using deep learning and disease-associated gene expression. This study explored that correlation to provide preliminary evidence that long-term differentially expressed (disease-associated) genes lead to subtle changes in disease pathology, and to offer new ideas for precise analysis of pathomics and for combined analysis of pathomics and genomics. Methods Full pathological scans, gene sequencing data, and clinical data of patients with mucinous gastric carcinoma were downloaded from The Cancer Genome Atlas (TCGA). The VGG-16 network architecture was used to construct a binary classification model, both to explore the potential of VGG-16 and to extract deep features of the pathology-based full-scan map. Differential gene expression analysis was performed and a protein-protein interaction network was constructed to screen disease-related core genes. Differential, Lasso regression, and extensive correlation analyses were used to screen for valuable deep features. Finally, correlation analysis was used to determine whether the valuable deep features correlate with the disease-related core genes. Results The accuracy of the binary classification model was 0.775 ± 0.129. A total of 24 disease-related core genes were screened: ASPM, AURKA, AURKB, BUB1, BUB1B, CCNA2, CCNB1, CCNB2, CDCA8, CDK1, CENPF, DLGAP5, KIF11, KIF20A, KIF2C, KIF4A, MELK, PBK, RRM2, TOP2A, TPX2, TTK, UBE2C, and ZWINT. Eight valuable deep features were screened (features 51, 106, 109, 118, 257, 282, 326, and 487), and the correlation analysis showed that these features were either positively or negatively correlated with core gene expression. Conclusion These preliminary results support our hypothesis. Deep learning may be an important bridge for the joint analysis of pathomics and genomics, and this study provides preliminary evidence that long-term abnormal expression of genes leads to subtle changes in pathology.
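The final screening step correlates each candidate deep feature with core-gene expression across patients; a Pearson correlation per (feature, gene) pair is the kind of statistic involved. A self-contained sketch with made-up toy vectors (the feature index and gene name are labels only, not data from the study):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy data: one deep feature vs one gene's expression across 5 patients.
feature_51 = [0.2, 0.4, 0.6, 0.8, 1.0]
aurka_expr = [1.1, 2.0, 2.9, 4.2, 5.0]   # made-up values rising with the feature
r = pearson(feature_51, aurka_expr)
print(round(r, 3))  # close to +1: strongly positively correlated
```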
Affiliation(s)
- Ding Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xiaoyuan Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Shifang Li
- Department of Neurosurgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Mengmeng Qi
- Department of Endocrinology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xiaowei Sun
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Guojie Hu
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
6
Sun J, Wu B, Zhao T, Gao L, Xie K, Lin T, Sui J, Li X, Wu X, Ni X. Classification for thyroid nodule using ViT with contrastive learning in ultrasound images. Comput Biol Med 2023; 152:106444. PMID: 36565481. DOI: 10.1016/j.compbiomed.2022.106444.
Abstract
The lack of representative features separating benign nodules, especially those of Thyroid Imaging Reporting and Data System (TI-RADS) level 3, from malignant nodules limits diagnostic accuracy, leading to inconsistent interpretation, overdiagnosis, and unnecessary biopsies. We propose a Vision-Transformer-based (ViT) thyroid nodule classification model using contrastive learning, called TC-ViT, to improve the accuracy of diagnosis and the specificity of biopsy recommendations. The ViT explores the global features of thyroid nodules, while nodule images are used as regions of interest (ROIs) to enhance its local features. Contrastive learning minimizes the representation distance between nodules of the same category, enhances the consistency of global and local feature representations, and enables accurate diagnosis of TI-RADS 3 or malignant nodules. The test results achieve an accuracy of 86.9%. The evaluation metrics show that the network outperforms other classical deep learning-based networks in classification performance. TC-ViT achieves automatic classification of TI-RADS 3 and malignant nodules on ultrasound images and can serve as a key step of computer-aided diagnosis for comprehensive analysis and accurate diagnosis. The code will be available at https://github.com/Jiawei217/TC-ViT.
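The contrastive objective described here, pulling same-category representations together, can be illustrated with the classic margin-based pairwise contrastive loss. This is a simplification for illustration, not necessarily the exact loss TC-ViT trains with:

```python
import math

def pair_contrastive_loss(z1, z2, same_class, margin=1.0):
    """Margin-based contrastive loss on one pair of embeddings.

    Same-class pairs are pulled together (loss = squared distance);
    different-class pairs are pushed until they are at least `margin`
    apart (loss = squared shortfall of the margin).
    """
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(z1, z2)))
    if same_class:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Identical embeddings of the same TI-RADS category: zero loss.
print(pair_contrastive_loss([1.0, 0.0], [1.0, 0.0], same_class=True))   # 0.0
# Identical embeddings of different categories: full margin penalty.
print(pair_contrastive_loss([1.0, 0.0], [1.0, 0.0], same_class=False))  # 1.0
```

Minimizing this loss over many pairs is what shrinks within-category distances while keeping TI-RADS 3 and malignant clusters separated in embedding space.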
Affiliation(s)
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Bobo Wu
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Tong Zhao
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Liugang Gao
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Tao Lin
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Jianfeng Sui
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Xiaoqin Li
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Xiaojin Wu
- Oncology Department, Xuzhou NO.1 People's Hospital, Xuzhou 221000, China
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
7
Tao Y, Yu Y, Wu T, Xu X, Dai Q, Kong H, Zhang L, Yu W, Leng X, Qiu W, Tian J. Deep learning for the diagnosis of suspicious thyroid nodules based on multimodal ultrasound images. Front Oncol 2022; 12:1012724. PMID: 36425556. PMCID: PMC9680169. DOI: 10.3389/fonc.2022.1012724.
Abstract
OBJECTIVES This study aimed to differentially diagnose thyroid nodules (TNs) of Thyroid Imaging Reporting and Data System (TI-RADS) categories 3-5 using a deep learning (DL) model based on multimodal ultrasound (US) images, and to explore its auxiliary role for radiologists with varying degrees of experience. METHODS Preoperative multimodal US images of 1,138 TNs of TI-RADS categories 3-5 were randomly divided into a training set (n = 728), a validation set (n = 182), and a test set (n = 228) in a 4:1:1.25 ratio. Grayscale US (GSU), color Doppler flow imaging (CDFI), strain elastography (SE), and region-of-interest mask (Mask) images were acquired in both transverse and longitudinal sections, all confirmed by pathology. Fivefold cross-validation was used to evaluate the performance of the proposed DL model. The diagnostic performance of the mature DL model and of radiologists on the test set was compared, and whether DL could help radiologists improve their diagnostic performance was verified. Specificity, sensitivity, accuracy, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC) were obtained. RESULTS The AUCs of DL in the differentiation of TNs were 0.858 based on (GSU + SE), 0.909 based on (GSU + CDFI), 0.906 based on (GSU + CDFI + SE), and 0.881 based on (GSU + Mask), all superior to the 0.825 based on GSU alone (p = 0.014, p < 0.001, p < 0.001, and p = 0.002, respectively). The highest AUC of 0.928 was achieved by DL based on (G + C + E + M)US, the highest specificity of 89.5% by (G + C + E)US, and the highest accuracy of 86.2% and sensitivity of 86.9% by DL based on (G + C + M)US. With DL assistance, the AUC of junior radiologists increased from 0.720 to 0.796 (p < 0.001), slightly higher than that of senior radiologists without DL assistance (0.796 vs. 0.794, p > 0.05). Senior radiologists with DL assistance exhibited higher accuracy than, and AUC comparable to, DL based on GSU (83.4% vs. 78.9%, p = 0.041; 0.822 vs. 0.825, p = 0.512). However, the AUC of DL based on multimodal US images was significantly higher than that of visual diagnosis by radiologists (p < 0.05). CONCLUSION DL models based on multimodal US images showed exceptional performance in the differential diagnosis of suspicious TNs, effectively increased the diagnostic efficacy of TN evaluations by junior radiologists, and provided an objective assessment for the clinical and surgical management phases that follow.
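The AUCs compared throughout this abstract can be computed directly from model scores via the Mann-Whitney statistic: the AUROC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counting one half). A minimal sketch with toy scores:

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the fraction of (positive, negative) score pairs in which
    the positive case outranks the negative one; ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy malignancy scores for 3 malignant and 3 benign nodules.
auc = auroc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])  # 8 of 9 pairs correctly ranked
print(auc)  # 8/9 ≈ 0.889
```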
Affiliation(s)
- Yi Tao
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Yanyan Yu
- The National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Tong Wu
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiangli Xu
- Department of Ultrasound, The Second Hospital of Harbin, Harbin, China
- Quan Dai
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Hanqing Kong
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Lei Zhang
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Weidong Yu
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaoping Leng
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Weibao Qiu
- Shenzhen Key Laboratory of Ultrasound Imaging and Therapy, Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
8
Zhang R, Yi G, Pu S, Wang Q, Sun C, Wang Q, Feng L, Liu X, Li Z, Niu L. Deep learning based on ultrasound to differentiate pathologically proven atypical and typical medullary thyroid carcinoma from follicular thyroid adenoma. Eur J Radiol 2022; 156:110547. PMID: 36201930. DOI: 10.1016/j.ejrad.2022.110547.
Abstract
OBJECTIVES To investigate the feasibility and value of deep learning based on grayscale ultrasonography in differentiating pathologically proven atypical and typical medullary thyroid carcinoma (MTC) from follicular thyroid adenoma (FTA). METHODS The 770 preoperative ultrasound images consisted of 354 MTCs (66% typical MTCs with a high-suspicion sonographic pattern, 34% atypical MTCs with an intermediate-or-lower suspicion pattern) and 416 FTAs. All images were delineated manually by a senior sonographer to obtain the regions of interest. Two deep neural networks, ResNet-34 and ResNet-18, were trained on the training set (n = 690). The test set (n = 80) was then evaluated by the two models and by two sonographers, and their diagnostic performance and misdiagnosed lesions were compared and analyzed. RESULTS The ResNet-34 model showed higher diagnostic ability than the junior sonographer, with an area under the receiver operating characteristic curve of 0.992 (95% CI: 0.840-0.970) versus 0.754 (95% CI: 0.645-0.843). Moreover, 12 of 16 atypical MTCs were successfully identified by ResNet-34, significantly better than both the senior and the junior sonographer, suggesting that these patients could benefit from timely serological examination and an earlier surgical strategy. CONCLUSION Deep learning to differentiate MTC from FTA on grayscale ultrasound may be a useful diagnostic support tool, especially for atypical MTC versus FTA. Moreover, the computing time of deep learning is short, which will help incorporate it into real-time ultrasound diagnosis.
Affiliation(s)
- Rui Zhang
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Guanxiu Yi
- Beijing Laboratory of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing, China
- Shunfan Pu
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qin Wang
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Sun
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qian Wang
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Feng
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiabi Liu
- Beijing Laboratory of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing, China
- Zhengjiang Li
- Department of Head and Neck Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Lijuan Niu
- Department of Ultrasound, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
9
Zhu PS, Zhang YR, Ren JY, Li QL, Chen M, Sang T, Li WX, Li J, Cui XW. Ultrasound-based deep learning using the VGGNet model for the differentiation of benign and malignant thyroid nodules: A meta-analysis. Front Oncol 2022; 12:944859. PMID: 36249056. PMCID: PMC9554631. DOI: 10.3389/fonc.2022.944859.
Abstract
Objective The aim of this study was to evaluate the accuracy of deep learning using the convolutional neural network VGGNet model in distinguishing benign and malignant thyroid nodules based on ultrasound images. Methods Relevant studies that used the deep learning VGGNet convolutional neural network to classify benign and malignant thyroid nodules from ultrasound images were selected from the PubMed, Embase, Cochrane Library, China National Knowledge Infrastructure (CNKI), and Wanfang databases. Cytology and pathology were used as the gold standards. Eligibility and risk of bias were assessed with the QUADAS-2 tool, and the diagnostic accuracy of the deep learning VGGNet was analyzed with pooled sensitivity, pooled specificity, diagnostic odds ratio, and the area under the curve. Results A total of 11 studies were included in this meta-analysis. The overall estimates of sensitivity and specificity were 0.87 [95% CI (0.83, 0.91)] and 0.85 [95% CI (0.79, 0.90)], respectively. The diagnostic odds ratio was 38.79 [95% CI (22.49, 66.91)], and the area under the curve was 0.93 [95% CI (0.90, 0.95)]. No obvious publication bias was found. Conclusion Deep learning using the convolutional neural network VGGNet model based on ultrasound images showed good diagnostic efficacy in distinguishing benign and malignant thyroid nodules. Systematic Review Registration https://www.crd.york.ac.uk/prospero, identifier CRD42022336701.
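The per-study quantities pooled in such a meta-analysis derive from each study's 2x2 confusion table. A sketch of sensitivity, specificity, and the diagnostic odds ratio from toy counts (the counts below are invented for illustration, not data from any included study; the 0.5 continuity correction for zero cells is a common convention assumed here):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from a 2x2 table.

    DOR = (TP/FN) / (FP/TN): the odds of a positive test among the diseased
    relative to the non-diseased.  If any cell is zero, add 0.5 to every
    cell before computing the DOR (a common continuity correction).
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = tp + 0.5, fp + 0.5, fn + 0.5, tn + 0.5
    dor = (tp * tn) / (fp * fn)
    return sens, spec, dor

# Invented single-study counts for 200 nodules.
sens, spec, dor = diagnostic_stats(tp=87, fp=15, fn=13, tn=85)
print(round(sens, 2), round(spec, 2), round(dor, 1))  # 0.87 0.85 37.9
```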
Affiliation(s)
- Pei-Shan Zhu: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Yu-Rui Zhang: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jia-Yu Ren: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qiao-Li Li: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Ming Chen: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Tian Sang: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Wen-Xiao Li: Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jun Li (corresponding author): Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China; NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
- Xin-Wu Cui (corresponding author): Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
|
10
|
Cleere EF, Davey MG, O’Neill S, Corbett M, O’Donnell JP, Hacking S, Keogh IJ, Lowery AJ, Kerin MJ. Radiomic Detection of Malignancy within Thyroid Nodules Using Ultrasonography-A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2022; 12:diagnostics12040794. [PMID: 35453841 PMCID: PMC9027085 DOI: 10.3390/diagnostics12040794] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Revised: 03/22/2022] [Accepted: 03/22/2022] [Indexed: 02/04/2023] Open
Abstract
Background: Despite investigation, 95% of thyroid nodules are ultimately benign. Radiomics is a field that uses radiological features to inform individualized patient care. We aimed to evaluate the diagnostic utility of radiomics in classifying undetermined thyroid nodules as benign or malignant using ultrasonography (US). Methods: A diagnostic test accuracy systematic review and meta-analysis was performed in accordance with PRISMA guidelines. Sensitivity, specificity, and area under the curve (AUC) delineating benign and malignant lesions were recorded. Results: Seventy-five studies including 26,373 patients and 46,175 thyroid nodules met the inclusion criteria. Males accounted for 24.6% of patients, and 75.4% were female. Radiomics provided a pooled sensitivity of 0.87 (95% CI: 0.86−0.87) and a pooled specificity of 0.84 (95% CI: 0.84−0.85) for characterizing benign and malignant lesions. Using convolutional neural network (CNN) methods, pooled sensitivity was 0.85 (95% CI: 0.84−0.86) and pooled specificity was 0.82 (95% CI: 0.82−0.83), significantly lower than in studies using non-CNN methods: sensitivity 0.90 (95% CI: 0.89−0.90) and specificity 0.88 (95% CI: 0.87−0.89) (p < 0.05). The diagnostic ability of radiologists and radiomics was comparable for both sensitivity (OR 0.98) and specificity (OR 0.95). Conclusions: Radiomic analysis using US provides a reproducible, reliable evaluation of undetermined thyroid nodules when compared to current best practice.
Affiliation(s)
- Eoin F. Cleere (corresponding author): The Lambe Institute for Translational Research, National University of Ireland, H91 YR71 Galway, Ireland; Department of Otolaryngology, Galway University Hospitals, H91 YR71 Galway, Ireland
- Matthew G. Davey: The Lambe Institute for Translational Research, National University of Ireland, H91 YR71 Galway, Ireland
- Shane O’Neill: Department of Breast and Endocrine Surgery, Galway University Hospitals, H91 YR71 Galway, Ireland
- Mel Corbett: Department of Otolaryngology, Galway University Hospitals, H91 YR71 Galway, Ireland
- John P. O’Donnell: Department of Radiology, Galway University Hospitals, H91 YR71 Galway, Ireland
- Sean Hacking: Department of Pathology and Laboratory Medicine, Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Ivan J. Keogh: Department of Otolaryngology, Galway University Hospitals, H91 YR71 Galway, Ireland
- Aoife J. Lowery: The Lambe Institute for Translational Research, National University of Ireland, H91 YR71 Galway, Ireland; Department of Breast and Endocrine Surgery, Galway University Hospitals, H91 YR71 Galway, Ireland
- Michael J. Kerin: The Lambe Institute for Translational Research, National University of Ireland, H91 YR71 Galway, Ireland; Department of Breast and Endocrine Surgery, Galway University Hospitals, H91 YR71 Galway, Ireland
|
11
|
Zhao J, Zhou X, Shi G, Xiao N, Song K, Zhao J, Hao R, Li K. Semantic consistency generative adversarial network for cross-modality domain adaptation in ultrasound thyroid nodule classification. APPL INTELL 2022; 52:10369-10383. [PMID: 35039715 PMCID: PMC8754560 DOI: 10.1007/s10489-021-03025-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/18/2021] [Indexed: 12/22/2022]
Abstract
Deep convolutional networks have been widely used for various medical image processing tasks. However, the performance of existing learning-based networks is still limited due to the lack of large training datasets. When a general deep model is directly deployed to a new dataset with heterogeneous features, the effect of domain shifts is usually ignored, and performance degradation problems occur. In this work, by designing the semantic consistency generative adversarial network (SCGAN), we propose a new multimodal domain adaptation method for medical image diagnosis. SCGAN performs cross-domain collaborative alignment of ultrasound images and domain knowledge. Specifically, we utilize a self-attention mechanism for adversarial learning between dual domains to overcome visual differences across modal data and preserve the domain invariance of the extracted semantic features. In particular, we embed nested metric learning in the semantic information space, thus enhancing the semantic consistency of cross-modal features. Furthermore, the adversarial learning of our network is guided by a discrepancy loss for encouraging the learning of semantic-level content and a regularization term for enhancing network generalization. We evaluate our method on a thyroid ultrasound image dataset for benign and malignant diagnosis of nodules. The experimental results of a comprehensive study show that the accuracy of the SCGAN method for the classification of thyroid nodules reaches 94.30%, and the AUC reaches 97.02%. These results are significantly better than the state-of-the-art methods.
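The self-attention mechanism the abstract refers to can be illustrated with a minimal scaled dot-product attention in NumPy; the sequence length, feature dimension, and random projection matrices below are illustrative only, not SCGAN's actual configuration:

```python
import numpy as np

# Minimal scaled dot-product self-attention over a set of feature vectors,
# the generic building block behind "self-attention for adversarial learning".
rng = np.random.default_rng(0)
n, d = 6, 8                       # 6 positions, 8-dim features (hypothetical)
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # learned in practice

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)                     # pairwise similarities, scaled
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
out = weights @ v                                 # attention-weighted mixture

print(out.shape)  # (6, 8)
```

Each output row is a convex combination of the value vectors, which is what lets the network relate distant image regions when aligning features across domains.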
|
12
|
Liu X, He W, Zhang Y, Yao S, Cui Z. Effect of dual-convolutional neural network model fusion for Aluminum profile surface defects classification and recognition. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:997-1025. [PMID: 34903023 DOI: 10.3934/mbe.2022046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Classifying and identifying surface defects is essential during the production and use of aluminum profiles. Recently, the dual-convolutional neural network (CNN) model fusion framework has shown promising performance for defect classification and recognition. Spurred by this trend, this paper proposes an improved dual-CNN model fusion framework to classify and identify defects in aluminum profiles. Compared with traditional dual-CNN model fusion frameworks, the proposed architecture involves an improved fusion layer, fusion strategy, and classifier block. Specifically, the suggested method extracts the feature map of the aluminum profile RGB image from the pre-trained VGG16 model's pool5 layer and the feature map of the maximum pooling layer of the suggested A4 network, which is appended to the AlexNet model. Then, weighted bilinear interpolation upsamples the feature maps extracted from the maximum pooling layer of the A4 part. The network layer and upsampling schemes ensure equal feature map dimensions, enabling feature map merging via an improved wavelet transform. Finally, global average pooling is employed in the classifier block instead of dense layers to reduce the model's parameters and avoid overfitting. The fused feature map is then input into the classifier block for classification. The experimental setup involves data augmentation and transfer learning to prevent overfitting due to the small-sized datasets exploited, while K-fold cross-validation is employed to evaluate the model's performance during the training process. The experimental results demonstrate that the proposed dual-CNN model fusion framework attains a classification accuracy higher than current techniques: specifically, 4.3% higher than AlexNet, 2.5% than VGG16, 2.9% than Inception v3, 2.2% than VGG19, 3.6% than ResNet50, 3% than ResNet101, and 0.7% and 1.2% higher than the conventional dual-CNN fusion frameworks 1 and 2, respectively, proving the effectiveness of the proposed strategy.
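The abstract's point about global average pooling (GAP) replacing dense layers can be made concrete with a parameter count; the feature-map size, hidden width, and class count below are hypothetical, not taken from the paper:

```python
# Parameter count: Flatten + Dense head vs. GAP + Dense head.
# Hypothetical 7x7x512 feature map (e.g., VGG16 pool5 at 224x224 input).
h, w, c = 7, 7, 512
hidden, classes = 1024, 10

# Flatten -> Dense(hidden) -> Dense(classes): weights plus biases
dense_params = (h * w * c) * hidden + hidden + hidden * classes + classes

# GAP collapses each channel to one value (zero parameters), then Dense(classes)
gap_params = c * classes + classes

print(dense_params, gap_params)  # 25701386 5130
```

Dropping the fully connected hidden layer removes roughly 25.7 million parameters here, which is why GAP-based heads are a common guard against overfitting on small datasets.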
Affiliation(s)
- Xiaochen Liu: School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
- Weidong He: School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
- Yinghui Zhang: School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
- Shixuan Yao: School of Software Engineering, Dalian University of Foreign Languages, Dalian 116044, China
- Ze Cui: School of Control Science and Engineering, Dalian University of Technology, Dalian 116024, China
|
13
|
Zhao Z, Yang C, Wang Q, Zhang H, Shi L, Zhang Z. A deep learning-based method for detecting and classifying the ultrasound images of suspicious thyroid nodules. Med Phys 2021; 48:7959-7970. [PMID: 34719057 DOI: 10.1002/mp.15319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 09/30/2021] [Accepted: 10/18/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE The incidence of thyroid cancer has increased significantly in the last few decades. However, diagnosis of thyroid nodules is labor- and time-intensive for radiologists and strongly depends on their personal experience. The present study therefore aimed to develop a deep learning-based computer-aided diagnosis (CAD) method that enables the automatic detection and classification of suspicious thyroid nodules in order to reduce unnecessary fine-needle aspiration biopsies. METHODS The CAD method consists of two main parts: detecting the location of thyroid nodules using a multiscale detection network and classifying the detected thyroid nodules with an attention-based classification network. RESULTS The performance of the proposed method was evaluated and compared with that of other state-of-the-art deep learning methods and experienced radiologists. The proposed detection method outperformed three other detection architectures (average precision, 82.1% vs. 78.3%, 77.2%, and 74.8%). Moreover, the classification method showed a superior performance compared with four other state-of-the-art classification networks (accuracy, 94.8% vs. 91.2%, 85.0%, 80.8%, and 72.1%) and with experienced radiologists (mean area under the curve, 0.941 vs. 0.833). CONCLUSIONS Our study verified the high efficiency of the proposed detection method. The findings can help improve the diagnostic performance of radiologists. However, the developed CAD system requires more training and evaluation in a large-population study.
Affiliation(s)
- Zijian Zhao: School of Control Science and Engineering, Shandong University, Jinan, China
- Congmin Yang: School of Control Science and Engineering, Shandong University, Jinan, China
- Qian Wang: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Huawei Zhang: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Linlin Shi: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Zhiwen Zhang: Department of Ultrasound, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
|
14
|
Liu Y, Han L, Wang H, Yin B. Classification of papillary thyroid carcinoma histological images based on deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-210100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC and rely on personal diagnostic experience, which is subjective and makes consistency among observers difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining an Inception network and a residual network to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.
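The final CNN-features-plus-SVM step can be sketched with a toy linear SVM trained by hinge-loss subgradient descent; the synthetic 16-dimensional features, labels, and hyperparameters below are illustrative stand-ins for the IRCNN features and the paper's actual SVM:

```python
import numpy as np

# Toy linear SVM on synthetic "CNN feature" vectors: two well-separated
# Gaussian clusters stand in for IRCNN-extracted histology features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (50, 16)), rng.normal(2, 1, (50, 16))])
y = np.hstack([-np.ones(50), np.ones(50)])   # -1: benign-like, +1: PTC-like

w, b, lam, lr = np.zeros(16), 0.0, 1e-3, 0.1
for epoch in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1                        # samples violating the margin
    if mask.any():
        grad_w = lam * w - (y[mask][:, None] * X[mask]).mean(axis=0)
        grad_b = -y[mask].mean()
    else:
        grad_w, grad_b = lam * w, 0.0         # only the regularizer remains
    w -= lr * grad_w
    b -= lr * grad_b

pred = np.sign(X @ w + b)
print((pred == y).mean())  # near-perfect on this cleanly separable toy set
```

In practice one would use a mature SVM implementation on the real extracted features; the point of the sketch is only the division of labor: the CNN supplies a fixed feature vector per image, and a shallow max-margin classifier makes the final call.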
Affiliation(s)
- Yaning Liu: College of Information Science and Engineering, Ocean University of China, Qingdao, China
- Lin Han: School of Information and Control Engineering, Qingdao University of Technology, Qingdao, China
- Hexiang Wang: Department of Pathology, Qingdao Hospital of Traditional Chinese Medicine, Qingdao, China
- Bo Yin: College of Information Science and Engineering, Ocean University of China, Qingdao, China
|