1. Wang DD, Lin S, Lyu GR. Advances in the Application of Artificial Intelligence in the Ultrasound Diagnosis of Vulnerable Carotid Atherosclerotic Plaque. Ultrasound Med Biol 2025:S0301-5629(24)00467-8. [PMID: 39828500; DOI: 10.1016/j.ultrasmedbio.2024.12.010]
Abstract
Vulnerable atherosclerotic plaque is associated with a high risk of mortality in patients with cardiovascular disease. Ultrasound has long been used for carotid atherosclerosis screening and plaque assessment due to its safety, low cost and non-invasive nature. However, conventional ultrasound techniques have limitations such as subjectivity, operator dependence, and low inter-observer agreement, leading to inconsistent and possibly inaccurate diagnoses. In recent years, a promising approach to address these limitations has emerged through the integration of artificial intelligence (AI) into ultrasound imaging. By training AI algorithms on large data sets of ultrasound images, the technology can learn to recognize specific characteristics and patterns associated with vulnerable plaques. This allows for a more objective and consistent assessment, leading to improved diagnostic accuracy. This article reviews the application of AI in the field of diagnostic ultrasound, with a particular focus on vulnerable carotid plaques, and discusses the limitations and prospects of AI-assisted ultrasound. This review also provides a deeper understanding of the role of AI in diagnostic ultrasound and promotes more research in the field.
Affiliation(s)
- Dan-Dan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Shu Lin
- Centre of Neurological and Metabolic Research, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Group of Neuroendocrinology, Garvan Institute of Medical Research, Sydney, Australia
- Guo-Rong Lyu
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Departments of Medical Imaging, Quanzhou Medical College, Quanzhou, China
2. Miura D, Suenaga H, Hiwatashi R, Mabu S. Liver fibrosis stage classification in stacked microvascular images based on deep learning. BMC Med Imaging 2025; 25:8. [PMID: 39773130; PMCID: PMC11706143; DOI: 10.1186/s12880-024-01531-x]
Abstract
BACKGROUND Monitoring fibrosis in patients with chronic liver disease (CLD) is an important management strategy. We have already reported a novel stacked microvascular imaging (SMVI) technique and an examiner scoring evaluation method to improve fibrosis assessment accuracy and demonstrate its high sensitivity. In the present study, we analyzed the effectiveness and objectivity of SMVI in diagnosing the liver fibrosis stage based on artificial intelligence (AI). METHODS This single-center, cross-sectional study included 517 patients with CLD who underwent ultrasonography and liver stiffness testing between August 2019 and October 2022. A convolutional neural network model was constructed to evaluate the degree of liver fibrosis from stacked microvascular images generated by accumulating high-sensitivity Doppler (i.e., high-definition color) images from these patients. For comparison, as a human visual assessment method, we focused on three hallmarks of intrahepatic microvessel morphological changes in the stacked microvascular images: narrowing, caliber irregularity, and tortuosity. The degree of liver fibrosis was classified into five stages according to etiology based on liver stiffness measurement: F0-1Low (< 5.0 kPa), F0-1High (≥ 5.0 kPa), F2, F3, and F4. RESULTS The AI classification accuracy was 53.8% for a 5-class classification, 66.3% for a 3-class classification (F0-1Low vs. F0-1High vs. F2-4), and 83.8% for a 2-class classification (F0-1 vs. F2-4). The diagnostic accuracy for ≥ F2 was 81.6% in the examiner's score assessment, compared with 83.8% for AI assessment, indicating that AI achieved higher diagnostic accuracy. Similarly, AI demonstrated higher sensitivity and specificity of 84.2% and 83.5%, respectively. When human and AI judgments were compared, the AI analysis was the superior model, with a higher F1 score in the 2-class classification. CONCLUSIONS In detecting significant fibrosis (≥ F2) with the SMVI method, AI-based assessment is more accurate than human judgment; moreover, AI-based SMVI analysis removes human subjectivity bias and objectively identifies patients with fibrosis progression, which is an important improvement.
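The abstract does not detail the network architecture; purely as an illustration of the kind of image-level classifier described (a CNN mapping a stacked microvascular image to a fibrosis-stage class), a minimal PyTorch sketch follows. The layer sizes, the 224x224 grayscale input, and the 2-class grouping (F0-1 vs. F2-4) are assumptions made for the example, not the authors' design.

```python
# Minimal sketch (not the authors' model): a small CNN that maps a stacked
# microvascular image to a 2-class prediction (F0-1 vs. F2-4).
import torch
import torch.nn as nn

class FibrosisCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112 (input size assumed)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

if __name__ == "__main__":
    model = FibrosisCNN()
    dummy = torch.randn(4, 1, 224, 224)           # batch of grayscale images
    logits = model(dummy)
    print(logits.shape)                           # torch.Size([4, 2])
```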
Affiliation(s)
- Daisuke Miura
- Department of Ultrasound and Clinical Laboratory, Fukuoka Tokushukai Hospital, Fukuoka, 816-0864, Japan
- Department of Laboratory Science, Yamaguchi University Graduate School of Medicine, Yamaguchi, 755-8508, Japan
- Hiromi Suenaga
- Department of Laboratory Science, Yamaguchi University Graduate School of Medicine, Yamaguchi, 755-8508, Japan
- Rino Hiwatashi
- Department of Ultrasound and Clinical Laboratory, Fukuoka Tokushukai Hospital, Fukuoka, 816-0864, Japan
- Shingo Mabu
- Department of Information Science and Engineering, Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, 755-8611, Japan
3. Wu L, Zhou Y, Liu M, Huang S, Su Y, Lai X, Bai S, Yang K, Jiang Y, Cui C, Shi S, Xu J, Xu N, Dong F. Video-based AI module with raw-scale and ROI-scale information for thyroid nodule diagnosis. Heliyon 2024; 10:e37924. [PMID: 39391469; PMCID: PMC11466579; DOI: 10.1016/j.heliyon.2024.e37924]
Abstract
Objectives Ultrasound examination is a primary method for detecting thyroid lesions in clinical practice. Incorrect ultrasound diagnosis may lead to delayed treatment or unnecessary biopsy punctures. Therefore, our objective is to propose an artificial intelligence model to increase the precision of thyroid ultrasound diagnosis and reduce puncture rates. Methods We consecutively collected ultrasound recordings from 672 patients with 845 nodules across two Chinese hospitals. This dataset was divided into training, validation, and internal test sets in a ratio of 7:1:2. We constructed and tested six different model variants based on different video feature distillation strategies and whether additional information from ROI (Region of Interest) scales was used. The models' performances were evaluated using the internal test set and an additional external test set containing 126 nodules from a third hospital. Results The dual-stream model, which contains both raw-scale and ROI-scale streams with the time dimensional convolution layer, achieved the best performance on both internal and external test sets. On the internal test set, it achieved an AUROC (Area Under Receiver Operating Characteristic Curve) of 0.969 (95% confidence interval, CI: 0.944-0.993) and an accuracy of 92.6%, outperforming other variants (AUROC: 0.936-0.955, accuracy: 80.2%-88.3%) and experienced radiologists (accuracy: 91.9%). The AUROC of the best model in the external test was 0.931 (95% CI: 0.890-0.972). Conclusion Integrating a dual-stream model with additional ROI scale information and the time dimensional convolution layer can improve performance in diagnosing thyroid ultrasound videos.
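As a rough illustration of the dual-stream idea described above (a raw-scale stream over full frames and an ROI-scale stream over nodule crops, each followed by a convolution along the time dimension), the PyTorch sketch below shows one possible arrangement. The tiny backbones, feature sizes, and fusion by concatenation are assumptions for the example and are not taken from the paper.

```python
# Illustrative dual-stream video classifier: per-frame 2D encoders for the
# full-frame and ROI clips, a temporal (time-dimension) 1D convolution per
# stream, then fusion by concatenation for benign/malignant prediction.
import torch
import torch.nn as nn

def frame_encoder(out_dim: int = 64) -> nn.Module:
    # A tiny per-frame 2D CNN; any image backbone could be substituted here.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, out_dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (N, out_dim)
    )

class DualStreamVideoNet(nn.Module):
    def __init__(self, feat_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.raw_enc = frame_encoder(feat_dim)
        self.roi_enc = frame_encoder(feat_dim)
        # Temporal convolution over the frame dimension for each stream.
        self.raw_tconv = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.roi_tconv = nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1)
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def encode(self, clip, enc, tconv):
        b, t, c, h, w = clip.shape
        f = enc(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)   # (B, T, D)
        f = tconv(f.transpose(1, 2))                              # (B, D, T)
        return f.mean(dim=2)                                      # temporal pooling

    def forward(self, raw_clip, roi_clip):
        fused = torch.cat([self.encode(raw_clip, self.raw_enc, self.raw_tconv),
                           self.encode(roi_clip, self.roi_enc, self.roi_tconv)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    net = DualStreamVideoNet()
    raw = torch.randn(2, 8, 1, 128, 128)   # (batch, frames, channels, H, W)
    roi = torch.randn(2, 8, 1, 64, 64)     # cropped nodule region per frame
    print(net(raw, roi).shape)             # torch.Size([2, 2])
```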
Affiliation(s)
- Linghu Wu, Yuli Zhou, Mengmeng Liu, Sijing Huang, Youhuan Su, Xiaoshu Lai, Song Bai, Keen Yang, Jinfeng Xu, Fajin Dong
- Ultrasound Department, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
- Yitao Jiang, Chen Cui, Siyuan Shi
- Research and Development Department, Illuminate, LLC, Shenzhen, 518000, Guangdong, China
- Nan Xu
- Division of Thyroid Surgery, Department of General Surgery, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
4. Gao S, Li Y, Luo H. Detecting thyroid nodules along with surrounding tissues and tracking nodules using motion prior in ultrasound videos. Comput Med Imaging Graph 2024; 117:102439. [PMID: 39357244; DOI: 10.1016/j.compmedimag.2024.102439]
Abstract
Ultrasound examination plays a crucial role in the clinical diagnosis of thyroid nodules. Although deep learning technology has been applied to thyroid nodule examinations, existing methods overlook the prior knowledge that nodules move along a straight line in the video. We propose a new detection model, DiffusionVID-Line, and design a novel tracking algorithm, ByteTrack-Line, both of which fully leverage the prior knowledge of linear nodule motion in thyroid ultrasound videos. Of these, ByteTrack-Line groups detected nodules, further reducing the workload of doctors and significantly improving their diagnostic speed and accuracy. In DiffusionVID-Line, we propose two new modules: Freq-FPN and Attn-Line. The Freq-FPN module extracts frequency features, which are used to reduce the impact of image blur in ultrasound videos. Based on the standard practice of segmented scanning by doctors, the Attn-Line module enhances attention on targets moving along a straight line, thus improving detection accuracy. In ByteTrack-Line, exploiting the linear motion of nodules, we propose the Match-Line association module, which reduces the number of nodule ID switches. On the detection and tracking test datasets, DiffusionVID-Line achieved a mean Average Precision (mAP50) of 74.2 for multiple tissues and 85.6 for nodules, while ByteTrack-Line achieved a Multiple Object Tracking Accuracy (MOTA) of 83.4. Both nodule detection and tracking achieved state-of-the-art performance.
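The abstract does not specify how Match-Line encodes the linear-motion prior, so the sketch below only illustrates the general idea: fit a straight line to a track's past centroids and penalize candidate detections that stray from it when building an association cost. The `line_deviation` and `association_cost` functions and the weighting `alpha` are hypothetical names and values introduced for illustration only.

```python
# Illustrative linear-motion prior for track association (not the published
# Match-Line module): penalize detections far from the line fitted to a
# track's previous centroids.
import numpy as np

def line_deviation(track_centroids: np.ndarray, detection_xy: np.ndarray) -> float:
    """Perpendicular distance of a candidate detection from the straight line
    fitted (least squares) to the track's previous centroids."""
    xs, ys = track_centroids[:, 0], track_centroids[:, 1]
    if np.ptp(xs) < 1e-6:                      # near-vertical trajectory
        return abs(detection_xy[0] - xs.mean())
    slope, intercept = np.polyfit(xs, ys, deg=1)
    x0, y0 = detection_xy
    # Distance from point (x0, y0) to the line y = slope*x + intercept.
    return abs(slope * x0 - y0 + intercept) / np.hypot(slope, 1.0)

def association_cost(iou: float, deviation_px: float, alpha: float = 0.02) -> float:
    """Combine an overlap term (1 - IoU) with the motion-prior penalty;
    alpha is an assumed weighting, not a published value."""
    return (1.0 - iou) + alpha * deviation_px

if __name__ == "__main__":
    history = np.array([[10, 20], [14, 24], [18, 28], [22, 32]], dtype=float)
    on_line = np.array([26.0, 36.0])
    off_line = np.array([26.0, 60.0])
    print(association_cost(0.7, line_deviation(history, on_line)))   # low cost
    print(association_cost(0.7, line_deviation(history, off_line)))  # higher cost
```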
Affiliation(s)
- Song Gao
- Jiangsu Provincial Engineering Research Center of Intelligent Technology for Healthcare, Ministry of Education, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Yueyang Li
- Jiangsu Provincial Engineering Research Center of Intelligent Technology for Healthcare, Ministry of Education, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Haichi Luo
- College of Internet of Things Engineering, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
5. Bosco E, Spairani E, Toffali E, Meacci V, Ramalli A, Matrone G. A Deep Learning Approach for Beamforming and Contrast Enhancement of Ultrasound Images in Monostatic Synthetic Aperture Imaging: A Proof-of-Concept. IEEE Open J Eng Med Biol 2024; 5:376-382. [PMID: 38899024; PMCID: PMC11186640; DOI: 10.1109/ojemb.2024.3401098]
Abstract
Goal: In this study, we demonstrate that a deep neural network (DNN) can be trained to reconstruct high-contrast images, resembling those produced by the multistatic Synthetic Aperture (SA) method using a 128-element array, leveraging pre-beamforming radiofrequency (RF) signals acquired through the monostatic SA approach. Methods: A U-net was trained using 27200 pairs of RF signals, simulated considering a monostatic SA architecture, with their corresponding delay-and-sum beamformed target images in a multistatic 128-element SA configuration. The contrast was assessed on 500 simulated test images of anechoic/hyperechoic targets. The DNN's performance in reconstructing experimental images of a phantom and of different in vivo scenarios was also tested. Results: The DNN, compared to the simple monostatic SA approach used to acquire the pre-beamforming signals, generated better-quality images with higher contrast and reduced noise/artifacts. Conclusions: The obtained results suggest the potential for the development of a single-channel setup, simultaneously providing good-quality images and reducing hardware complexity.
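For readers unfamiliar with the setup, the sketch below shows a minimal encoder-decoder (U-Net-style) image-to-image network trained on input/target pairs with an MSE loss. It is only a structural illustration under assumed 128x128 single-channel inputs; the paper's actual network, its representation of monostatic RF channel data, and its training procedure are not reproduced here.

```python
# Minimal U-Net-style encoder-decoder for supervised image-to-image mapping.
# Shapes and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(32 + 16, 16)      # skip connection from enc1
        self.out = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(self.pool(e1))            # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

if __name__ == "__main__":
    net = TinyUNet()
    rf_like = torch.randn(1, 1, 128, 128)        # stand-in for the network input
    target = torch.randn(1, 1, 128, 128)         # stand-in for a DAS target image
    loss = nn.functional.mse_loss(net(rf_like), target)  # supervised pair training
    loss.backward()
    print(float(loss))
```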
Affiliation(s)
- Edoardo Bosco, Edoardo Spairani, Eleonora Toffali, Giulia Matrone
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Valentino Meacci, Alessandro Ramalli
- Department of Information Engineering, University of Florence, 50134 Florence, Italy
6. Xu K, You K, Zhu B, Feng M, Feng D, Yang C. Masked Modeling-Based Ultrasound Image Classification via Self-Supervised Learning. IEEE Open J Eng Med Biol 2024; 5:226-237. [PMID: 38606402; PMCID: PMC11008806; DOI: 10.1109/ojemb.2024.3374966]
Abstract
Recently, deep learning-based methods have emerged as the preferred approach for ultrasound data analysis. However, these methods often require large-scale annotated datasets for training deep models, which are not readily available in practical scenarios. Additionally, the presence of speckle noise and other imaging artifacts can introduce numerous hard examples for ultrasound data classification. In this paper, drawing inspiration from self-supervised learning techniques, we present a pre-training method based on mask modeling specifically designed for ultrasound data. Our study investigates three different mask modeling strategies: random masking, vertical masking, and horizontal masking. By employing these strategies, our pre-training approach aims to predict the masked portion of the ultrasound images. Notably, our method does not rely on externally labeled data, allowing us to extract representative features without the need for human annotation. Consequently, we can leverage unlabeled datasets for pre-training. Furthermore, to address the challenges posed by hard samples in ultrasound data, we propose a novel hard sample mining strategy. To evaluate the effectiveness of our proposed method, we conduct experiments on two datasets. The experimental results demonstrate that our approach outperforms other state-of-the-art methods in ultrasound image classification. This indicates the superiority of our pre-training method and its ability to extract discriminative features from ultrasound data, even in the presence of hard examples.
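The three masking strategies mentioned (random, vertical, and horizontal) can be pictured as zeroing out patches, patch columns, or patch rows of the input before asking the network to reconstruct them. The sketch below illustrates the masking step only; the patch size, mask ratio, and the reconstruction pipeline are assumptions, not the authors' implementation.

```python
# Illustrative patch-masking strategies for masked-modeling pre-training.
import torch

def mask_image(img: torch.Tensor, mode: str = "random",
               patch: int = 16, ratio: float = 0.5) -> torch.Tensor:
    """Zero out a fraction of patches of a (C, H, W) image.
    mode='random' masks individual patches, 'vertical' masks whole patch
    columns, 'horizontal' masks whole patch rows."""
    c, h, w = img.shape
    gh, gw = h // patch, w // patch
    keep = img.clone()
    if mode == "random":
        drop = torch.rand(gh, gw) < ratio                             # per-patch mask
    elif mode == "vertical":
        drop = (torch.rand(gw) < ratio).expand(gh, gw)                # whole columns
    elif mode == "horizontal":
        drop = (torch.rand(gh) < ratio).unsqueeze(1).expand(gh, gw)   # whole rows
    else:
        raise ValueError(mode)
    for i in range(gh):
        for j in range(gw):
            if drop[i, j]:
                keep[:, i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
    return keep

if __name__ == "__main__":
    x = torch.rand(1, 224, 224)
    for m in ("random", "vertical", "horizontal"):
        masked = mask_image(x, mode=m)
        print(m, float((masked == 0).float().mean()))                 # fraction masked
```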
Affiliation(s)
- Kele Xu, Boqing Zhu, Dawei Feng, Cheng Yang
- National University of Defense Technology, Changsha, 410073, China
- Kang You
- Shanghai Jiao Tong University, Shanghai, 200240, China
- Ming Feng
- Tongji University, Shanghai, 200070, China
7. Wan H, Chen S, Ni Y, Qi S, Qu H. Advance of Thyroid Nodule Ultrasound Diagnosis Based on Deep Learning. Mechanisms and Machine Science 2024:1089-1098. [DOI: 10.1007/978-3-031-44947-5_84]
8. Ma J, Kong D, Wu F, Bao L, Yuan J, Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med 2024; 168:107725. [PMID: 38006827; DOI: 10.1016/j.compbiomed.2023.107725]
Abstract
Delineating lesion boundaries plays a central role in diagnosing thyroid and breast cancers, making related therapy plans, and evaluating therapeutic effects. However, manually annotating low-quality ultrasound (US) images is often time-consuming and error-prone, with limited reproducibility, given high speckle noise, heterogeneous appearances, ambiguous boundaries, etc., especially for nodular lesions with large intra-class variance. Accurate lesion segmentation from US images is therefore valuable but challenging in clinical practice. In this study, we propose a new densely connected convolutional network (called MDenseNet) architecture to automatically segment nodular lesions from 2D US images, which is first pre-trained on the ImageNet database (called PMDenseNet) and then retrained on the given US image datasets. Moreover, we also designed a deep MDenseNet with a pre-training strategy (PDMDenseNet) for segmentation of thyroid and breast nodules by adding a dense block to increase the depth of our MDenseNet. Extensive experiments demonstrate that the proposed MDenseNet-based method can accurately extract multiple nodular lesions, even with complex shapes, from input thyroid and breast US images. Additional experiments show that the MDenseNet-based method also outperforms three state-of-the-art convolutional neural networks in terms of accuracy and reproducibility. Meanwhile, promising results in nodular lesion segmentation from thyroid and breast US images illustrate its great potential in many other clinical segmentation tasks.
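The defining feature of a densely connected network such as the MDenseNet described above is that each layer receives the concatenation of all preceding feature maps. A minimal PyTorch sketch of one dense block follows; the growth rate, layer count, and channel sizes are illustrative assumptions rather than the published configuration.

```python
# Minimal dense block: every layer consumes the concatenation of the input
# and all earlier layer outputs (the core DenseNet idea).
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate          # next layer sees all previous outputs
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)    # dense concatenation of all outputs

if __name__ == "__main__":
    block = DenseBlock(in_channels=16)
    y = block(torch.randn(1, 16, 64, 64))
    print(y.shape)                           # torch.Size([1, 64, 64, 64])
```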
Affiliation(s)
- Jinlian Ma
- School of Integrated Circuits, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China.
- Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Fa Wu
- School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Lingyun Bao
- Department of Ultrasound, Hangzhou First People's Hospital, Zhejiang University, Hangzhou, China
- Jing Yuan
- School of Mathematics and Statistics, Xidian University, China
- Yusheng Liu
- State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
9. Wang R, Zhou H, Fu P, Shen H, Bai Y. A Multiscale Attentional Unet Model for Automatic Segmentation in Medical Ultrasound Images. Ultrason Imaging 2023; 45:159-174. [PMID: 37114669; DOI: 10.1177/01617346231169789]
Abstract
Ultrasonography has become an essential part of clinical diagnosis owing to its noninvasive and real-time nature. To assist diagnosis, automatically segmenting a region of interest (ROI) in ultrasound images is becoming a vital part of computer-aided diagnosis (CAD). However, segmenting ROIs on medical images with relatively low contrast is a challenging task. To better achieve medical ROI segmentation, we propose an efficient module denoted as multiscale attentional convolution (MSAC), utilizing cascaded convolutions and a self-attention approach to concatenate features from various receptive field scales. Then, MSAC-Unet is constructed based on Unet, employing MSAC instead of the standard convolution in each encoder and decoder for segmentation. In this study, two representative types of ultrasound images, one of thyroid nodules and the other of brachial plexus nerves, were used to assess the effectiveness of the proposed approach. The best segmentation results from MSAC-Unet were achieved on two thyroid nodule datasets (TND-PUH3 and DDTI) and a brachial plexus nerve dataset (NSD), with Dice coefficients of 0.822, 0.792, and 0.746, respectively. The analysis of segmentation results shows that our MSAC-Unet greatly improves segmentation accuracy with more reliable ROI edges and boundaries, decreasing the number of erroneously segmented ROIs in ultrasound images.
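One way to picture a multiscale attentional convolution of the kind described is to cascade several 3x3 convolutions (so their outputs carry growing receptive fields), concatenate them, and re-weight the result with an attention gate. The sketch below uses a simple squeeze-and-excitation-style channel gate in place of the paper's self-attention step, so it illustrates the concept only and is not the MSAC module as published.

```python
# Conceptual multiscale convolution with a channel-attention gate over the
# concatenated scale features (illustrative stand-in, not the published MSAC).
import torch
import torch.nn as nn

class MultiScaleAttnConv(nn.Module):
    def __init__(self, in_ch: int, mid_ch: int = 16):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, padding=1), nn.ReLU())
        cat_ch = 3 * mid_ch
        self.attn = nn.Sequential(                 # channel attention over scales
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(cat_ch, cat_ch), nn.Sigmoid(),
        )
        self.out_channels = cat_ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.conv1(x)              # receptive field ~3x3
        f2 = self.conv2(f1)             # ~5x5
        f3 = self.conv3(f2)             # ~7x7
        cat = torch.cat([f1, f2, f3], dim=1)
        w = self.attn(cat).unsqueeze(-1).unsqueeze(-1)
        return cat * w                  # re-weighted multiscale features

if __name__ == "__main__":
    m = MultiScaleAttnConv(in_ch=1)
    print(m(torch.randn(1, 1, 64, 64)).shape)   # torch.Size([1, 48, 64, 64])
```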
Affiliation(s)
- Rui Wang
- Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Institute of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing, China
- Haoyuan Zhou
- Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Institute of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing, China
- Peng Fu
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Hui Shen
- Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Institute of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing, China
- Yang Bai
- Department of General Surgery, Peking University Third Hospital, Beijing, China
10. Xu W, Jia X, Mei Z, Gu X, Lu Y, Fu CC, Zhang R, Gu Y, Chen X, Luo X, Li N, Bai B, Li Q, Yan J, Zhai H, Guan L, Gong B, Zhao K, Fang Q, He C, Zhan W, Luo T, Zhang H, Dong Y, Zhou J. Generalizability and Diagnostic Performance of AI Models for Thyroid US. Radiology 2023; 307:e221157. [PMID: 37338356; DOI: 10.1148/radiol.221157]
Abstract
Background Artificial intelligence (AI) models have improved US assessment of thyroid nodules; however, the lack of generalizability limits the application of these models. Purpose To develop AI models for segmentation and classification of thyroid nodules in US using diverse data sets from nationwide hospitals and multiple vendors, and to measure the impact of the AI models on diagnostic performance. Materials and Methods This retrospective study included consecutive patients with pathologically confirmed thyroid nodules who underwent US using equipment from 12 vendors at 208 hospitals across China from November 2017 to January 2019. The detection, segmentation, and classification models were developed based on the subset or complete set of images. Model performance was evaluated by precision and recall, Dice coefficient, and area under the receiver operating characteristic curve (AUC) analyses. Three scenarios (diagnosis without AI assistance, with freestyle AI assistance, and with rule-based AI assistance) were compared across three senior and three junior radiologists to optimize the incorporation of AI into clinical practice. Results A total of 10 023 patients (median age, 46 years [IQR, 37-55 years]; 7669 female) were included. The detection, segmentation, and classification models had an average precision, Dice coefficient, and AUC of 0.98 (95% CI: 0.96, 0.99), 0.86 (95% CI: 0.86, 0.87), and 0.90 (95% CI: 0.88, 0.92), respectively. The segmentation model trained on the nationwide data and the classification model trained on the mixed-vendor data exhibited the best performance, with a Dice coefficient of 0.91 (95% CI: 0.90, 0.91) and an AUC of 0.98 (95% CI: 0.97, 1.00), respectively. The AI model outperformed all senior and junior radiologists (P < .05 for all comparisons), and the diagnostic accuracies of all radiologists were improved (P < .05 for all comparisons) with rule-based AI assistance. Conclusion Thyroid US AI models developed from diverse data sets had high diagnostic performance among the Chinese population. Rule-based AI assistance improved the performance of radiologists in thyroid cancer diagnosis. © RSNA, 2023. Supplemental material is available for this article.
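The Dice coefficient reported above is a standard overlap score between a predicted and a reference segmentation mask, Dice = 2|A ∩ B| / (|A| + |B|). A minimal reference implementation is sketched below with toy masks; the data and sizes are placeholders, not values from the study.

```python
# Reference implementation of the Dice coefficient for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (truth)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    truth = np.zeros((64, 64), dtype=bool)
    truth[16:48, 16:48] = True                  # toy "nodule" mask
    prediction = np.zeros_like(truth)
    prediction[20:52, 16:48] = True             # shifted prediction
    print(round(dice_coefficient(prediction, truth), 3))   # 0.875
```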
Affiliation(s)
- WenWen Xu
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - XiaoHong Jia
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - ZiHan Mei
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - XiaoLin Gu
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Yang Lu
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Chi-Cheng Fu
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - RuiFang Zhang
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Ying Gu
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Xia Chen
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - XiaoMao Luo
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Ning Li
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - BaoYan Bai
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - QiaoYing Li
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - JiPing Yan
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Hong Zhai
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Ling Guan
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - Bing Gong
- From the Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin Er Road, 200025, Shanghai, China (W.W.X., X.H.J., Z.H.M., W.W.Z., T.L., H.T.Z., Y.J.D., J.Q.Z.); Department of Scientific Research, Shanghai Aitrox Technology Corporation Limited, Shanghai, China (X.L.G., Y.L., C.C.F., K.Y.Z., Q.F., C.H.); Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China (R.F.Z.); Department of Medical Ultrasound, Affiliated Hospital of Guizhou Medical University, Guiyang, China (Y.G., X.C.); Department of Medical Ultrasound, Yunnan Cancer Hospital & The Third Affiliated Hospital of Kunming Medical University, Kunming, China (X.M.L.); Department of Ultrasound, Yunnan Kungang Hospital, The Seventh Affiliated Hospital of Dali University, Anning, China (N.L.); Department of Ultrasound, Affiliated Hospital of Yan'an University, Yan'an, China (B.Y.B.); Department of Ultrasound, Tangdu Hospital, Fourth Military Medical University, Xi'an, China (Q.Y.L.); Department of Ultrasound, Shanxi Provincial People's Hospital, Taiyuan, China (J.P.Y.); Department of Ultrasound, Traditional Chinese Medical Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang Uygur Autonomous Region, China (H.Z.); Department of Ultrasound, Gansu Provincial Cancer Hospital, Lanzhou, China (L.G.); Department of Ultrasound, Jilin Central General Hospital, Jilin, China (B.G.); and College of Health Science and Technology, Shanghai Jiaotong University School of Medicine, Shanghai, China (J.Q.Z.)
| | - KeYang Zhao
| | - Qu Fang
| | - Chuan He
| | - WeiWei Zhan
| | - Ting Luo
| | - HuiTing Zhang
| | - YiJie Dong
| | - JianQiao Zhou
| |
Collapse
|
11
|
Guetari R, Ayari H, Sakly H. Computer-aided diagnosis systems: a comparative study of classical machine learning versus deep learning-based approaches. Knowl Inf Syst 2023; 65:1-41. [PMID: 37361377 PMCID: PMC10205571 DOI: 10.1007/s10115-023-01894-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 04/23/2023] [Accepted: 04/25/2023] [Indexed: 06/28/2023]
Abstract
The diagnostic phase of the treatment process is essential for patient guidance and follow-up. The accuracy and effectiveness of this phase can determine the life or death of a patient. For the same symptoms, different doctors may come up with different diagnoses whose treatments may, instead of curing a patient, be fatal. Machine learning (ML) brings new solutions to healthcare professionals to save time and optimize the appropriate diagnosis. ML is a data analysis method that automates the creation of analytical models and supports predictive analysis. There are several ML models and algorithms that rely on features extracted from, for example, a patient's medical images to indicate whether a tumor is benign or malignant. The models differ in the way they operate and in the method used to extract the discriminative features of the tumor. In this article, we review different ML models for tumor classification and COVID-19 infection and evaluate the different approaches. The computer-aided diagnosis (CAD) systems, which we refer to as classical, are based on accurate feature identification, usually performed manually or with other ML techniques that are not involved in classification. The deep learning-based CAD systems automatically perform the identification and extraction of discriminative features. The results show that the two types of CAD system have quite close performances, but the choice of one or the other depends on the dataset. Indeed, manual feature extraction is necessary when the size of the dataset is small; otherwise, deep learning is used.
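As a rough illustration of the "classical" pipeline contrasted here, the sketch below pairs hand-crafted GLCM/Haralick-style texture features (via scikit-image) with an SVM classifier; the patches and labels are synthetic placeholders rather than study data, and a deep learning CAD system would instead learn such features end to end.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    """Hand-crafted texture features from an 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Synthetic stand-in data: 20 random patches, first half "benign", second half "malignant"
patches = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)

X = np.vstack([glcm_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)   # classical CAD: manual features + separate classifier
print(clf.predict(X[:3]))
```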
Collapse
Affiliation(s)
- Ramzi Guetari
- SERCOM Laboratory, Polytechnic School of Tunisia, University of Carthage, PO Box 743, La Marsa, 2078 Tunisia
| | - Helmi Ayari
- SERCOM Laboratory, Polytechnic School of Tunisia, University of Carthage, PO Box 743, La Marsa, 2078 Tunisia
| | - Houneida Sakly
- RIADI Laboratory, National School of Computer Sciences, University of Manouba, Manouba, 2010 Tunisia
| |
Collapse
|
12
|
Cao CL, Li QL, Tong J, Shi LN, Li WX, Xu Y, Cheng J, Du TT, Li J, Cui XW. Artificial intelligence in thyroid ultrasound. Front Oncol 2023; 13:1060702. [PMID: 37251934 PMCID: PMC10213248 DOI: 10.3389/fonc.2023.1060702] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 04/07/2023] [Indexed: 05/31/2023] Open
Abstract
Artificial intelligence (AI), particularly deep learning (DL) algorithms, has demonstrated remarkable progress in image-recognition tasks, enabling the automatic quantitative assessment of complex medical images with increased accuracy and efficiency. AI is widely used and is becoming increasingly popular in the field of ultrasound. The rising incidence of thyroid cancer and the workload of physicians have driven the need to utilize AI to efficiently process thyroid ultrasound images. Therefore, leveraging AI in thyroid cancer ultrasound screening and diagnosis can not only help radiologists achieve more accurate and efficient imaging diagnosis but also reduce their workload. In this paper, we aim to present a comprehensive overview of the technical knowledge of AI with a focus on traditional machine learning (ML) algorithms and DL algorithms. We will also discuss their clinical applications in the ultrasound imaging of thyroid diseases, particularly in differentiating between benign and malignant nodules and predicting cervical lymph node metastasis in thyroid cancer. Finally, we will conclude that AI technology holds great promise for improving the accuracy of thyroid disease ultrasound diagnosis and discuss the potential prospects of AI in this field.
Collapse
Affiliation(s)
- Chun-Li Cao
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
| | - Qiao-Li Li
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
| | - Jin Tong
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
| | - Li-Nan Shi
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
| | - Wen-Xiao Li
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
| | - Ya Xu
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
| | - Jing Cheng
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
| | - Ting-Ting Du
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
| | - Jun Li
- Department of Ultrasound, The First Affiliated Hospital of Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
| | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
| |
Collapse
|
13
|
Zheng T, Qin H, Cui Y, Wang R, Zhao W, Zhang S, Geng S, Zhao L. Segmentation of thyroid glands and nodules in ultrasound images using the improved U-Net architecture. BMC Med Imaging 2023; 23:56. [PMID: 37060061 PMCID: PMC10105426 DOI: 10.1186/s12880-023-01011-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 04/05/2023] [Indexed: 04/16/2023] Open
Abstract
BACKGROUND Identifying thyroid nodule boundaries is crucial for accurate clinical assessment, but manual segmentation is time-consuming. This paper used U-Net and improved variants to automatically segment thyroid nodules and glands. METHODS The 5822 ultrasound images used in the experiment came from two centers; 4658 images formed the training dataset and 1164 images formed the independent mixed test dataset. Based on U-Net, a deformable-pyramid split-attention residual U-Net (DSRU-Net) was proposed by introducing the ResNeSt block, atrous spatial pyramid pooling, and deformable convolution v3. This method combines context information, extracts features of interest more effectively, and has advantages in segmenting nodules and glands of different shapes and sizes. RESULTS DSRU-Net obtained 85.8% mean Intersection over Union, 92.5% mean Dice coefficient and 94.1% nodule Dice coefficient, improvements of 1.8%, 1.3% and 1.9% over U-Net. CONCLUSIONS Our method is more capable of identifying and segmenting glands and nodules than the original method, as shown by the experimental results.
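For reference, the Dice coefficient and Intersection over Union reported above can be computed from binary masks as in this minimal sketch (not the authors' code; the masks are toy examples):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and Intersection-over-Union for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

# Toy example with two overlapping square masks
pred = np.zeros((128, 128), dtype=np.uint8); pred[20:80, 20:80] = 1
gt = np.zeros((128, 128), dtype=np.uint8); gt[30:90, 30:90] = 1
print(dice_and_iou(pred, gt))
```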
Collapse
Affiliation(s)
- Tianlei Zheng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, 221116, China
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Hang Qin
- Department of Medical Equipment Management, Nanjing First Hospital, Nanjing, 221000, China
| | - Yingying Cui
- Department of Pathology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Rong Wang
- Department of Ultrasound Medicine, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Weiguo Zhao
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Shijin Zhang
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Shi Geng
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China
| | - Lei Zhao
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, 221004, China.
| |
Collapse
|
14
|
Hasan Z, Key S, Habib AR, Wong E, Aweidah L, Kumar A, Sacks R, Singh N. Convolutional Neural Networks in ENT Radiology: Systematic Review of the Literature. Ann Otol Rhinol Laryngol 2023; 132:417-430. [PMID: 35651308 DOI: 10.1177/00034894221095899] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
INTRODUCTION Convolutional neural networks (CNNs) represent a state-of-the-art methodological technique in AI and deep learning, and were specifically created for image classification and computer vision tasks. CNNs have been applied in radiology in a number of different disciplines, mostly outside otolaryngology, potentially due to a lack of familiarity with this technology within the otolaryngology community. CNNs have the potential to revolutionize clinical practice by reducing the time required to perform manual tasks. This literature search aims to present a comprehensive systematic review of the published literature with regard to CNNs and their utility to date in ENT radiology. METHODS Data were extracted from a variety of databases including PubMed, ProQuest, MEDLINE, Open Knowledge Maps, and Gale OneFile Computer Science. Medical subject headings (MeSH) terms and keywords were used to extract related literature from each database's inception to October 2020. Inclusion criteria were studies in which CNNs were the main intervention and which focused on radiology relevant to ENT. Titles and abstracts were reviewed first, followed by the full contents. Once the final list of articles was obtained, their reference lists were also searched to identify further articles. RESULTS Thirty articles were identified for inclusion in this study. Studies utilizing CNNs in most ENT subspecialties were identified. Studies utilized CNNs for a number of tasks including identification of structures, presence of pathology, and segmentation of tumors for radiotherapy planning. All studies reported a high degree of accuracy of CNNs in performing the chosen task. CONCLUSION This study provides a better understanding of CNN methodology used in ENT radiology, demonstrating a myriad of potential uses for this exciting technology including nodule and tumor identification, identification of anatomical variation, and segmentation of tumors. It is anticipated that this field will continue to evolve and these technologies and methodologies will become more entrenched in our everyday practice.
Collapse
Affiliation(s)
- Zubair Hasan
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
| | - Seraphina Key
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, VIC, Australia
| | - Al-Rahim Habib
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Woolloongabba, QLD, Australia
| | - Eugene Wong
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
| | - Layal Aweidah
- Faculty of Medicine, University of Notre Dame, Darlinghurst, NSW, Australia
| | - Ashnil Kumar
- School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Darlington, NSW, Australia
| | - Raymond Sacks
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, Concord, NSW, Australia
| | - Narinder Singh
- Faculty of Medicine and Health, University of Sydney, Camperdown, NSW, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, NSW, Australia
| |
Collapse
|
15
|
Göreke V. A Novel Deep-Learning-Based CADx Architecture for Classification of Thyroid Nodules Using Ultrasound Images. Interdiscip Sci 2023:10.1007/s12539-023-00560-4. [PMID: 36976511 PMCID: PMC10043860 DOI: 10.1007/s12539-023-00560-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 03/03/2023] [Accepted: 03/05/2023] [Indexed: 03/29/2023]
Abstract
Thyroid nodules arise in the cells of the thyroid as benign or malignant types. Thyroid sonographic images are mostly used for the diagnosis of thyroid cancer. The aim of this study is to introduce a computer-aided diagnosis system that can classify thyroid nodules with high accuracy using data gathered from ultrasound images. Acquisition and labeling of sub-images were performed by a specialist physician. The number of these sub-images was then increased using data augmentation methods. Deep features were obtained from the images using a pre-trained deep neural network. The dimensionality of the features was reduced and the features were refined. The refined features were combined with morphological and texture features. This feature group was weighted by a similarity coefficient value obtained from a similarity coefficient generator module. The nodules were classified as benign or malignant using a multi-layer deep neural network with a pre-weighting layer designed with a novel approach. In this study, a novel multi-layer computer-aided diagnosis system was proposed for thyroid cancer detection. In the first layer of the system, a novel feature extraction method based on the class similarity of images was developed. In the second layer, a novel pre-weighting layer was proposed by modifying the genetic algorithm. The proposed system showed superior performance in different metrics compared to the literature.
Collapse
Affiliation(s)
- Volkan Göreke
- Department of Computer Technologies, Sivas Vocational School of Technical Sciences, Sivas Cumhuriyet University, 58140, Sivas, Türkiye.
| |
Collapse
|
16
|
Liu T, Wu C, Wang G, Jia Y, Zhu Y, Nie F. Clinical Value of Artificial Intelligence-Based Computer-Aided Diagnosis System Versus Contrast-Enhanced Ultrasound for Differentiation of Benign From Malignant Thyroid Nodules in Different Backgrounds. JOURNAL OF ULTRASOUND IN MEDICINE : OFFICIAL JOURNAL OF THE AMERICAN INSTITUTE OF ULTRASOUND IN MEDICINE 2023. [PMID: 36794594 DOI: 10.1002/jum.16195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 01/29/2023] [Accepted: 01/30/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVES The aim of this study was to compare the value of the AI-SONIC ultrasound-assisted diagnosis system versus contrast-enhanced ultrasound (CEUS) for the differential diagnosis of thyroid nodules in diffuse and non-diffuse backgrounds. METHODS A total of 555 thyroid nodules with pathologically confirmed diagnoses were included in this retrospective study. The diagnostic efficacies of AI-SONIC and CEUS for differentiating benign from malignant nodules in diffuse and non-diffuse backgrounds were evaluated, with pathological diagnosis as the gold standard. RESULTS The agreement between AI-SONIC diagnosis and pathological diagnosis was moderate in diffuse backgrounds (κ = 0.417) and almost perfect in non-diffuse backgrounds (κ = 0.81). The agreement between CEUS diagnosis and pathological diagnosis was substantial in diffuse backgrounds (κ = 0.684) and moderate in non-diffuse backgrounds (κ = 0.407). In diffuse backgrounds, AI-SONIC had slightly higher sensitivity (95.7 vs 89.4%, P = .375), but CEUS had significantly higher specificity (80.0 vs 40.0%, P = .008). In non-diffuse backgrounds, AI-SONIC had significantly higher sensitivity (96.2 vs 73.4%, P < .001), specificity (82.9 vs 71.2%, P = .007), and negative predictive value (90.3 vs 53.3%, P < .001). CONCLUSION In non-diffuse backgrounds, AI-SONIC is superior to CEUS for differentiating malignant from benign thyroid nodules. In diffuse backgrounds, AI-SONIC could be useful for screening cases to detect suspicious nodules that require further examination by CEUS.
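The agreement and accuracy figures quoted above are standard quantities; a minimal sketch of how Cohen's kappa, sensitivity and specificity are derived from paired diagnoses (hypothetical labels, not the study data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-nodule labels: 1 = malignant, 0 = benign
pathology = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])   # gold standard
ai_call   = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])   # system output

kappa = cohen_kappa_score(pathology, ai_call)           # agreement with the gold standard
tn, fp, fn, tp = confusion_matrix(pathology, ai_call).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(kappa, sensitivity, specificity)
```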
Collapse
Affiliation(s)
- Ting Liu
- Ultrasound Medicine Center, Lanzhou University Second Hospital, Lanzhou, China
| | - Chuang Wu
- Department of Magnetic Resonance, Lanzhou University Second Hospital, Lanzhou, China
| | - Guojuan Wang
- Ultrasound Medicine Center, Lanzhou University Second Hospital, Lanzhou, China
| | - Yingying Jia
- Ultrasound Medicine Center, Lanzhou University Second Hospital, Lanzhou, China
| | - Yangyang Zhu
- Ultrasound Medicine Center, Lanzhou University Second Hospital, Lanzhou, China
| | - Fang Nie
- Ultrasound Medicine Center, Lanzhou University Second Hospital, Lanzhou, China
| |
Collapse
|
17
|
Automatic Detection and Measurement of Renal Cysts in Ultrasound Images: A Deep Learning Approach. Healthcare (Basel) 2023; 11:healthcare11040484. [PMID: 36833018 PMCID: PMC9956133 DOI: 10.3390/healthcare11040484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 01/29/2023] [Accepted: 02/01/2023] [Indexed: 02/10/2023] Open
Abstract
Ultrasonography is widely used for the diagnosis of diseases in internal organs because it is nonradioactive, noninvasive, real-time, and inexpensive. In ultrasonography, a set of measurement markers is placed at two points to measure organs and tumors; the position and size of the target finding are then measured on this basis. Among the measurement targets of abdominal ultrasonography, renal cysts occur in 20-50% of the population regardless of age. Therefore, the frequency of measurement of renal cysts in ultrasound images is high, and the benefit of automating this measurement would be correspondingly large. The aim of this study was to develop a deep learning model that can automatically detect renal cysts in ultrasound images and predict the appropriate positions of a pair of salient anatomical landmarks to measure their size. The deep learning model adopted a fine-tuned YOLOv5 for detection of renal cysts and a fine-tuned UNet++ for prediction of saliency maps representing the positions of the salient landmarks. Ultrasound images were input to YOLOv5, and images cropped inside the bounding box detected by YOLOv5 from the input image were input to UNet++. For comparison with human performance, three sonographers manually placed salient landmarks on 100 unseen images from the test data. The salient landmark positions annotated by a board-certified radiologist were used as the ground truth. We then evaluated and compared the accuracy of the sonographers and the deep learning model. Their performances were evaluated using precision-recall metrics and the measurement error. The evaluation results show that the precision and recall of our deep learning model for detection of renal cysts are comparable to those of radiologists; the positions of the salient landmarks were predicted with an accuracy close to that of the radiologists, and in a shorter time.
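The detect-then-measure cascade described above can be pictured with a small sketch: a detector returns a bounding box, the box is cropped from the frame, and the crop is passed to a second network that predicts landmark saliency maps. The box, frame and saliency maps below are placeholders, not the authors' trained models.

```python
import numpy as np

def crop_bbox(image, bbox, pad=8):
    """Crop a detected region (x1, y1, x2, y2) with a small margin for the second-stage model."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = bbox
    x1, y1 = max(0, x1 - pad), max(0, y1 - pad)
    x2, y2 = min(w, x2 + pad), min(h, y2 + pad)
    return image[y1:y2, x1:x2]

def landmarks_from_saliency(saliency_maps):
    """Take the arg-max of each predicted saliency map as one measurement landmark."""
    return [np.unravel_index(np.argmax(m), m.shape) for m in saliency_maps]

# Toy stand-ins for the detector output and the saliency-map predictions
frame = np.random.rand(480, 640)
cyst_box = (200, 150, 320, 260)                                  # would come from the detector
crop = crop_bbox(frame, cyst_box)
fake_saliency = [np.random.rand(*crop.shape) for _ in range(2)]  # one map per landmark
print(landmarks_from_saliency(fake_saliency))
```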
Collapse
|
18
|
Lai SL, Chen CS, Lin BR, Chang RF. Intraoperative Detection of Surgical Gauze Using Deep Convolutional Neural Network. Ann Biomed Eng 2023; 51:352-362. [PMID: 35972601 DOI: 10.1007/s10439-022-03033-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 07/19/2022] [Indexed: 01/25/2023]
Abstract
During laparoscopic surgery, surgical gauze is usually inserted into the body cavity to help achieve hemostasis. Retention of surgical gauze in the body cavity may necessitate reoperation and increase surgical risk. Using deep learning technology, this study aimed to propose a neural network model for gauze detection from surgical video to record the presence of the gauze. The model was trained on the training set using YOLO (You Only Look Once) v5x6 and then applied to the test set. Positive predictive value (PPV), sensitivity, and mean average precision (mAP) were calculated. Furthermore, timelines of gauze presence in the video were drawn by both the model and human annotation to evaluate accuracy. After the model was well trained, the PPV, sensitivity, and mAP on the test set were 0.920, 0.828, and 0.881, respectively. The inference time was 11.3 ms per image. With an additional marking and filtering process, the average accuracy of the model was 0.899. In conclusion, surgical gauze can be successfully detected in surgical video using deep learning. Our model provides fast detection of surgical gauze, enabling real-time gauze tracing in laparoscopic surgery that may help surgeons recall the location of missing gauze.
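The filtering step that turns per-frame detections into a presence timeline can be approximated by simple median filtering of the frame flags; the flags and frame rate below are illustrative, not the study's pipeline.

```python
import numpy as np
from scipy.signal import medfilt

# Hypothetical per-frame flags from the detector: 1 = gauze visible, 0 = not visible
frame_flags = np.array([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0], dtype=int)

# Median filtering suppresses isolated false positives and missed frames
smoothed = medfilt(frame_flags, kernel_size=5)

def to_intervals(flags, fps=30):
    """Convert a binary timeline into (start_s, end_s) intervals of gauze presence."""
    intervals, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        if not f and start is not None:
            intervals.append((start / fps, i / fps)); start = None
    if start is not None:
        intervals.append((start / fps, len(flags) / fps))
    return intervals

print(to_intervals(smoothed))
```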
Collapse
Affiliation(s)
- Shuo-Lun Lai
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan.,Division of Colorectal Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Chi-Sheng Chen
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan
| | - Been-Ren Lin
- Division of Colorectal Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Ruey-Feng Chang
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, No.1, Sec.4, Roosevelt Road, Taipei, 10617, Taiwan. .,Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan.
| |
Collapse
|
19
|
Wang L, Wang Y, Lu W, Xu D, Yao J, Wang L, Xu L. Differential regional importance mapping for thyroid nodule malignancy prediction with potential to improve needle aspiration biopsy sampling reliability. Front Oncol 2023; 13:1136922. [PMID: 37188203 PMCID: PMC10175814 DOI: 10.3389/fonc.2023.1136922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Accepted: 04/14/2023] [Indexed: 05/17/2023] Open
Abstract
Objective Existing guidelines for ultrasound-guided fine-needle aspiration biopsy lack specifications on sampling sites, although increasing the number of biopsies improves diagnostic reliability. We propose the use of class activation maps (CAMs) and our modified malignancy-specific heat maps to locate the deep representations of thyroid nodules that are important for class predictions. Methods We applied adversarial noise perturbations to segmented concentric "hot" nodular regions of equal size to differentiate their regional importance for the malignancy diagnostic performance of an accurate ultrasound-based artificial intelligence computer-aided diagnosis (AI-CADx) system, using 2,602 retrospectively collected thyroid nodules with known histopathological diagnoses. Results The AI system demonstrated high diagnostic performance, with an area under the curve (AUC) of 0.9302, and good nodule identification capability, with a median Dice coefficient >0.9 compared with radiologists' segmentations. Experiments confirmed that the CAM-based heat maps reflect the differing importance of nodular regions for the AI-CADx system's predictions. No less importantly, for 100 malignant nodules randomly selected from the dataset, the hot regions of the malignancy heat maps received higher summed frequency-weighted feature scores than the inactivated regions (6.04 versus 4.96), as rated by radiologists with more than 15 years of ultrasound examination experience according to the widely used American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) in terms of nodule composition, echogenicity, and echogenic foci; shape and margin attributes were excluded because they can only be evaluated for the whole nodule rather than at the sub-nodular level. In addition, we show examples demonstrating good spatial correspondence of the highlighted regions of the malignancy heat map to malignant tumor cell-rich regions in hematoxylin and eosin-stained histopathological images. Conclusion Our proposed CAM-based ultrasonographic malignancy heat map provides quantitative visualization of malignancy heterogeneity within a tumor, and it is of clinical interest to investigate its future usefulness for improving fine-needle aspiration biopsy (FNAB) sampling reliability by targeting potentially more suspicious sub-nodular regions.
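A class activation map of the kind used above is, in its simplest global-average-pooling form, a weighted sum of the last convolutional feature maps, where the weights come from the fully connected layer for the class of interest. The following is a generic PyTorch-style sketch, not the authors' model; the feature sizes are arbitrary.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    """
    features : (C, H, W) activations from the last conv layer
    fc_weight: (num_classes, C) weights of the classifier after global average pooling
    Returns an (out_H, out_W) heat map normalised to [0, 1].
    """
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = F.interpolate(cam[None, None], size=out_size, mode="bilinear",
                        align_corners=False)[0, 0]
    cam -= cam.min()
    return cam / (cam.max() + 1e-7)

# Toy example: 64 feature maps of size 14x14, 2 output classes, upsampled to 224x224
feats = torch.rand(64, 14, 14)
w_fc = torch.rand(2, 64)
heatmap = class_activation_map(feats, w_fc, class_idx=1, out_size=(224, 224))
print(heatmap.shape)
```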
Collapse
Affiliation(s)
- Liping Wang
- Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
| | - Yuan Wang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| | - Wenliang Lu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China
| | - Dong Xu
- Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
- Department of Ultrasound, Zhejiang Society for Mathematical Medicine, Hangzhou, China
| | - Jincao Yao
- Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
| | - Lijing Wang
- Department of Ultrasonography, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer, Chinese Academy of Sciences, Hangzhou, China
- *Correspondence: Lijing Wang; Lei Xu
| | - Lei Xu
- Department of Ultrasound, Zhejiang Society for Mathematical Medicine, Hangzhou, China
- Group of Computational Imaging and Digital Medicine, Zhejiang Qiushi Institute for Mathematical Medicine, Hangzhou, China
- *Correspondence: Lijing Wang; Lei Xu
| |
Collapse
|
20
|
Feng H, Tang Q, Yu Z, Tang H, Yin M, Wei A. A Machine Learning Applied Diagnosis Method for Subcutaneous Cyst by Ultrasonography. OXIDATIVE MEDICINE AND CELLULAR LONGEVITY 2022; 2022:1526540. [PMID: 36299601 PMCID: PMC9592196 DOI: 10.1155/2022/1526540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 09/19/2022] [Accepted: 09/28/2022] [Indexed: 11/18/2022]
Abstract
For decades, ultrasound images have been widely used in the detection of various diseases due to their high safety and efficiency. However, reading ultrasound images requires years of experience and training. To support clinicians' diagnoses and reduce the workload of doctors, many ultrasonic computer-aided diagnostic systems have been proposed. In recent years, the success of deep learning in image classification and segmentation has made more and more scholars realize the potential performance improvement brought by applying deep learning to ultrasonic computer-aided diagnosis systems. This study aimed to apply several machine learning algorithms and develop a machine learning method to diagnose subcutaneous cysts. Clinical features were extracted from the ultrasonography datasets and images of 132 patients from Hunan Provincial People's Hospital in China. The dataset was split into 70% training and 30% testing. Four kinds of machine learning algorithms, including decision tree (DT), support vector machine (SVM), K-nearest neighbors (KNN), and neural networks (NN), were applied to determine the best performance. Across all features, SVM achieved the best performance, ranging from 91.7% to 100%. The results show that SVM achieved the highest accuracy in the diagnosis of subcutaneous cysts by ultrasonography, which provides a good reference for further application in the clinical practice of ultrasonography of subcutaneous cysts.
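A compact sketch of the kind of comparison described (70/30 split, four classifier families) using scikit-learn; the synthetic features below are stand-ins for the hospital dataset, which is not public.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the extracted clinical/ultrasound features of 132 patients
X, y = make_classification(n_samples=132, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))
```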
Collapse
Affiliation(s)
- Hao Feng
- Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
| | - Qian Tang
- Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
| | - Zhengyu Yu
- Faculty of Engineering and IT, University of Technology, Sydney, Sydney, NSW 2007, Australia
| | - Hua Tang
- Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
| | - Ming Yin
- Department of Dermatology, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
| | - An Wei
- Department of Ultrasound, Hunan Provincial People's Hospital (The First Affiliated Hospital of Hunan Normal University), Changsha 410005, China
| |
Collapse
|
21
|
Analysis of facial ultrasonography images based on deep learning. Sci Rep 2022; 12:16480. [PMID: 36182939 PMCID: PMC9526737 DOI: 10.1038/s41598-022-20969-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 09/21/2022] [Indexed: 11/28/2022] Open
Abstract
Transfer learning using a model pre-trained on the ImageNet database is frequently used when obtaining large datasets in the medical imaging field is challenging. We estimated the value of deep learning for facial US images by assessing classification performance through transfer learning with current representative deep learning models and analyzing the classification criteria. For this clinical study, we recruited 86 individuals from whom we acquired ultrasound images of nine facial regions. To classify these facial regions, 15 deep learning models were trained using augmented or non-augmented datasets and their performance was evaluated. The average F-measure score across all models was about 93% regardless of dataset augmentation, and the best-performing models were the classic VGG networks. The models regarded the contours of skin and bones, rather than muscles and blood vessels, as distinct features for distinguishing regions in the facial US images. The results of this study can be used as reference data for future deep learning research on facial US images and content development.
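Transfer learning of the sort evaluated above typically means loading an ImageNet-pre-trained backbone and replacing its classification head for the target classes (here, nine facial regions). A hedged PyTorch/torchvision sketch, not the authors' training code; the pre-trained weights download on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_REGIONS = 9  # nine facial regions, per the study design

# Load an ImageNet-pre-trained VGG16 and swap the final classifier layer
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_REGIONS)

# Optionally freeze the convolutional features and fine-tune only the classifier
for p in model.features.parameters():
    p.requires_grad = False

dummy = torch.rand(4, 3, 224, 224)      # batch of preprocessed ultrasound images
print(model(dummy).shape)               # -> torch.Size([4, 9])
```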
Collapse
|
22
|
Benabdallah FZ, Djerou L. Active Contour Extension Basing on Haralick Texture Features, Multi-gene Genetic Programming, and Block Matching to Segment Thyroid in 3D Ultrasound Images. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-022-07286-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
23
|
Zhu PS, Zhang YR, Ren JY, Li QL, Chen M, Sang T, Li WX, Li J, Cui XW. Ultrasound-based deep learning using the VGGNet model for the differentiation of benign and malignant thyroid nodules: A meta-analysis. Front Oncol 2022; 12:944859. [PMID: 36249056 PMCID: PMC9554631 DOI: 10.3389/fonc.2022.944859] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 08/19/2022] [Indexed: 12/13/2022] Open
Abstract
Objective The aim of this study was to evaluate the accuracy of deep learning using the convolutional neural network VGGNet model in distinguishing benign and malignant thyroid nodules based on ultrasound images. Methods Relevant studies that used the deep learning-based convolutional neural network VGGNet model to classify benign and malignant thyroid nodules on ultrasound images were selected from the PubMed, Embase, Cochrane Library, China National Knowledge Infrastructure (CNKI), and Wanfang databases. Cytology and pathology were used as gold standards. Furthermore, eligibility and risk of bias were assessed using the QUADAS-2 tool, and the diagnostic accuracy of deep learning VGGNet was analyzed with pooled sensitivity, pooled specificity, diagnostic odds ratio, and the area under the curve. Results A total of 11 studies were included in this meta-analysis. The overall estimates of sensitivity and specificity were 0.87 [95% CI (0.83, 0.91)] and 0.85 [95% CI (0.79, 0.90)], respectively. The diagnostic odds ratio was 38.79 [95% CI (22.49, 66.91)]. The area under the curve was 0.93 [95% CI (0.90, 0.95)]. No obvious publication bias was found. Conclusion Deep learning using the convolutional neural network VGGNet model based on ultrasound images showed good diagnostic efficacy in distinguishing benign and malignant thyroid nodules. Systematic Review Registration https://www.crd.york.ac.uk/prospero, identifier CRD42022336701.
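For context, the diagnostic odds ratio reported above relates to sensitivity and specificity (equivalently, to the likelihood ratios) through a simple identity; a one-screen check using the pooled point estimates from the abstract:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec), i.e. LR+ divided by LR-."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos / lr_neg

# Roughly 37.9 from the point estimates; the pooled DOR of 38.79 differs slightly
# because meta-analytic pooling is done per study rather than from these summaries.
print(round(diagnostic_odds_ratio(0.87, 0.85), 1))
```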
Collapse
Affiliation(s)
- Pei-Shan Zhu
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Yu-Rui Zhang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Jia-Yu Ren
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Qiao-Li Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Ming Chen
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Tian Sang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Wen-Xiao Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
| | - Jun Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- NHC Key Laboratory of Prevention and Treatment of Central Asia High Incidence Diseases, First Affiliated Hospital, School of Medicine, Shihezi University, Shihezi, China
- *Correspondence: Jun Li; Xin-Wu Cui
| | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- *Correspondence: Jun Li; Xin-Wu Cui
| |
Collapse
|
24
|
Manh VT, Zhou J, Jia X, Lin Z, Xu W, Mei Z, Dong Y, Yang X, Huang R, Ni D. Multi-Attribute Attention Network for Interpretable Diagnosis of Thyroid Nodules in Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2611-2620. [PMID: 35820014 DOI: 10.1109/tuffc.2022.3190012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Ultrasound (US) is the primary imaging technique for the diagnosis of thyroid cancer. However, accurate identification of nodule malignancy is a challenging task that can elude less-experienced clinicians. Recently, many computer-aided diagnosis (CAD) systems have been proposed to assist this process. However, most of them do not provide the reasoning behind their classification process, which may jeopardize their credibility in practical use. To overcome this, we propose a novel deep learning (DL) framework called the multi-attribute attention network (MAA-Net), which is designed to mimic the clinical diagnosis process. The proposed model learns to predict nodular attributes and infer malignancy based on these clinically relevant features. A multi-attention scheme is adopted to generate customized attention to improve each task and the malignancy diagnosis. Furthermore, MAA-Net utilizes nodule delineations as spatial priors to guide training, rather than cropping the nodules with additional models or human intervention, thereby preserving context information. Validation experiments were performed on a large and challenging dataset containing 4554 patients. Results show that the proposed method outperformed other state-of-the-art methods and provides interpretable predictions that may better suit clinical needs.
Collapse
|
25
|
Wang G, Nie F, Wang Y, Yang D, Dong T, Liu T, Wang P. Differential diagnosis of thyroid nodules by the Demetics ultrasound-assisted diagnosis system and contrast-enhanced ultrasound combined with thyroid image reporting and data systems. Clin Endocrinol (Oxf) 2022; 97:116-123. [PMID: 35441715 DOI: 10.1111/cen.14741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 04/06/2022] [Accepted: 04/08/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND An increasing number of new ultrasound techniques, each with its own characteristics, are being applied in the differential diagnosis of thyroid nodules. This study aimed to assess and compare the diagnostic value of the Demetics ultrasound-assisted diagnosis system and contrast-enhanced ultrasound (CEUS) combined with the Thyroid Image Reporting and Data Systems (TI-RADS) for thyroid nodules. DESIGN AND PATIENTS A total of 600 thyroid nodules with pathological findings were retrospectively analysed. Demetics and CEUS were performed for all nodules. The diagnostic efficacy of Demetics and CEUS for nodules of different sizes was evaluated and compared in terms of sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (LR+) and negative likelihood ratio (LR-). The characteristics of nodules diagnosed and misdiagnosed by Demetics were compared to analyse the factors affecting the diagnostic accuracy of Demetics. The necessity of CEUS for nodules that are prone to misdiagnosis by Demetics was assessed. RESULTS Both Demetics and CEUS can be used for the differential diagnosis of benign and malignant thyroid nodules of different sizes. The diagnostic agreement between Demetics and CEUS for thyroid nodules of different sizes was moderate, substantial and fair, respectively. The sensitivity and NPV of Demetics were higher than those of CEUS, and the specificity, PPV and LR+ of CEUS were higher than those of Demetics. The LR- of Demetics was lower than that of CEUS. In the analysis of factors affecting Demetics, there were significant differences in age, calcification and margin. CEUS correctly diagnosed 50 of the 101 nodules misdiagnosed by Demetics. CONCLUSIONS Demetics showed high sensitivity in diagnosing thyroid nodules, while CEUS showed high specificity. In clinical practice, CEUS can further improve the diagnostic accuracy for nodules that are easily misdiagnosed by Demetics.
Collapse
Affiliation(s)
- Guojuan Wang
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Fang Nie
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Yanfang Wang
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Dan Yang
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Tiantian Dong
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Ting Liu
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| | - Peihua Wang
- Department of Ultrasound, Lanzhou University Second Hospital, Lanzhou, Gansu, China
- Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
- Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China
| |
Collapse
|
26
|
Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3905998. [PMID: 35795762 PMCID: PMC9252688 DOI: 10.1155/2022/3905998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 05/19/2022] [Accepted: 05/31/2022] [Indexed: 11/25/2022]
Abstract
To achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework combined with residual blocks and an attention mechanism. In this method, residual blocks are introduced into the U-Net network to avoid the degradation of model performance caused by vanishing gradients and to reduce the training difficulty of the deep network. At the same time, a fusion attention mechanism combining spatial and channel attention is introduced into the image analysis model to improve the extraction of feature information from ultrasound images and to enable accurate recognition and delineation of breast tumors. The experimental results show that the Dice index of the proposed method reaches 0.921, demonstrating excellent image segmentation performance.
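The building blocks mentioned above, a residual convolution block and a fused channel/spatial attention gate, might look roughly like this in PyTorch. This is a generic sketch under assumed layer sizes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as used inside a residual U-Net encoder."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class FusedAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by a spatial attention map."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)     # reweight channels
        return x * self.spatial(x)  # reweight spatial locations

x = torch.rand(2, 32, 64, 64)
print(FusedAttention(32)(ResidualBlock(32)(x)).shape)
```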
Collapse
|
27
|
Bonmati E, Hu Y, Grimwood A, Johnson GJ, Goodchild G, Keane MG, Gurusamy K, Davidson B, Clarkson MJ, Pereira SP, Barratt DC. Voice-Assisted Image Labeling for Endoscopic Ultrasound Classification Using Neural Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1311-1319. [PMID: 34962866 DOI: 10.1109/tmi.2021.3139023] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Ultrasound imaging is a commonly used technology for visualising patient anatomy in real-time during diagnostic and therapeutic procedures. High operator dependency and low reproducibility make ultrasound imaging and interpretation challenging with a steep learning curve. Automatic image classification using deep learning has the potential to overcome some of these challenges by supporting ultrasound training in novices, as well as aiding ultrasound image interpretation in patients with complex pathology for more experienced practitioners. However, the use of deep learning methods requires a large amount of data in order to provide accurate results. Labelling large ultrasound datasets is a challenging task because labels are retrospectively assigned to 2D images without the 3D spatial context available in vivo or that would be inferred while visually tracking structures between frames during the procedure. In this work, we propose a multi-modal convolutional neural network (CNN) architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure. We use a CNN composed of two branches, one for voice data and another for image data, which are joined to predict image labels from the spoken names of anatomical landmarks. The network was trained using recorded verbal comments from expert operators. Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels. We conclude that the addition of spoken commentaries can increase the performance of ultrasound image classification, and eliminate the burden of manually labelling large EUS datasets necessary for deep learning applications.
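The two-branch design described above (one branch for the voice signal, one for the image, joined before the classifier) can be sketched generically as follows; the encoder layers, feature sizes and label count are placeholders rather than the published architecture.

```python
import torch
import torch.nn as nn

class TwoBranchClassifier(nn.Module):
    """Joint audio/image classifier: each modality is encoded separately, then fused by concatenation."""
    def __init__(self, num_labels=5):
        super().__init__()
        self.image_branch = nn.Sequential(           # encodes a 1x128x128 ultrasound frame
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.voice_branch = nn.Sequential(            # encodes a 1x64xT spectrogram of the spoken comment
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32 + 32, num_labels)

    def forward(self, image, spectrogram):
        fused = torch.cat([self.image_branch(image), self.voice_branch(spectrogram)], dim=1)
        return self.head(fused)

model = TwoBranchClassifier()
logits = model(torch.rand(2, 1, 128, 128), torch.rand(2, 1, 64, 100))
print(logits.shape)  # -> torch.Size([2, 5])
```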
Collapse
|
28
|
Hong W, Sheng Q, Dong B, Wu L, Chen L, Zhao L, Liu Y, Zhu J, Liu Y, Xie Y, Yu Y, Wang H, Yuan J, Ge T, Zhao L, Liu X, Zhang Y. Automatic Detection of Secundum Atrial Septal Defect in Children Based on Color Doppler Echocardiographic Images Using Convolutional Neural Networks. Front Cardiovasc Med 2022; 9:834285. [PMID: 35463790 PMCID: PMC9019069 DOI: 10.3389/fcvm.2022.834285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 02/24/2022] [Indexed: 11/13/2022] Open
Abstract
Secundum atrial septal defect (ASD) is one of the most common congenital heart diseases (CHDs). This study aims to evaluate the feasibility and accuracy of automatic detection of ASD in children based on color Doppler echocardiographic images using convolutional neural networks. In this study, we propose a fully automatic detection system for ASD, which includes three stages. The first stage identifies four target echocardiographic views (that is, the subcostal view focusing on the atrial septum, the apical four-chamber view, the low parasternal four-chamber view, and the parasternal short-axis view). These four echocardiographic views are the most useful for the clinical diagnosis of ASD. The second stage segments the target cardiac structure and detects candidates for ASD. The third stage infers the final detection by utilizing the segmentation and detection results of the second stage. The proposed ASD detection system was developed and validated using a training set of 4,031 cases containing 370,057 echocardiographic images and an independent test set of 229 cases containing 203,619 images, of which 105 cases had ASD and 124 had an intact atrial septum. Experimental results showed that the proposed ASD detection system achieved accuracy, recall, precision, specificity, and F1 score of 0.8833, 0.8545, 0.8577, 0.9136, and 0.8546, respectively, on the image-level averages of the four most clinically useful echocardiographic views. The proposed system can automatically and accurately identify ASD, laying a good foundation for the subsequent artificial intelligence diagnosis of CHDs.
Collapse
Affiliation(s)
- Wenjing Hong
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Qiuyang Sheng
- Deepwise Artificial Intelligence Laboratory, Beijing, China
| | - Bin Dong
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
| | - Lanping Wu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Lijun Chen
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Leisheng Zhao
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yiqing Liu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Junxue Zhu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yiman Liu
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yixin Xie
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Yizhou Yu
- Deepwise Artificial Intelligence Laboratory, Beijing, China
| | - Hansong Wang
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
| | - Jiajun Yuan
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
| | - Tong Ge
- Pediatric Artificial Intelligence Clinical Application and Research Center, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
| | - Liebin Zhao
- Shanghai Engineering Research Center of Intelligence Pediatrics (SERCIP), Shanghai, China
| | - Xiaoqing Liu
- Deepwise Artificial Intelligence Laboratory, Beijing, China
| | - Yuqi Zhang
- Department of Pediatric Cardiology, Shanghai Children’s Medical Center, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
29
|
Lu J, Ouyang X, Shen X, Liu T, Cui Z, Wang Q, Shen D. GAN-guided Deformable Attention Network for Identifying Thyroid Nodules in Ultrasound Images. IEEE J Biomed Health Inform 2022; 26:1582-1590. [PMID: 35196250 DOI: 10.1109/jbhi.2022.3153559] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Early detection and identification of malignant thyroid nodules, a vital precursor to treatment, is a difficult task even for experienced clinicians. Many computer-aided diagnosis (CAD) systems have been developed to assist clinicians in performing this task on ultrasound images. Learning-based CAD systems for thyroid nodules generally accommodate both nodule detection/segmentation and fine-grained classification of malignancy, and prior studies often treat these tasks in separate stages, leading to additional computational costs. In this paper, we utilize an online class activation mapping (CAM) mechanism to guide the network to learn discriminative features for identifying thyroid nodules in ultrasound images, called the CAM attention network. It takes nodule masks as localization cues for direct spatial attention of the classification module, thereby avoiding isolated training for classification. Meanwhile, we propose a deformable convolution module that adds offsets to the regular grid sampling locations of the standard convolution, guiding the network to capture more discriminative features of nodule areas. Furthermore, we use a generative adversarial network (GAN) [1] to ensure reliable deformations of nodules from the deformable convolution module. Our proposed CAM attention network achieved 2nd place in the classification task of TN-SCUI 2020, a MICCAI 2020 Challenge with, to our knowledge, the largest set of thyroid nodule ultrasound images. The further inclusion of our proposed GAN-guided deformable module allows for capturing more fine-grained features between benign and malignant nodules, and further improves the classification accuracy to a new state-of-the-art level.
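The deformable convolution idea mentioned above, learned offsets added to the regular sampling grid of a standard convolution, can be sketched with torchvision's DeformConv2d as below. The offset-prediction layer and channel sizes are illustrative assumptions, and the paper's CAM attention and GAN guidance are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Sketch of a deformable convolution block: a small conv predicts per-location
    sampling offsets, which shift the 3x3 sampling grid of the deformable conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 = 18 offset channels
        self.offset_pred = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        offsets = self.offset_pred(x)
        return self.deform_conv(x, offsets)

block = DeformableBlock(32, 64)
out = block(torch.randn(1, 32, 56, 56))
print(out.shape)  # torch.Size([1, 64, 56, 56])
```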
Collapse
|
30
|
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. MULTIMEDIA SYSTEMS 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for making the most of this information and using it to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements to the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities to provide a critical review of the recent developments in this direction. We have organized the paper to present the significant traits and concepts of deep learning, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) that are commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes such as basic architecture, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be the core of medical image analysis. We conclude the paper by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Collapse
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh India
| | - Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujrat India
| | - Manju Khari
- Jawaharlal Nehru University, New Delhi, India
| |
Collapse
|
31
|
Artificial Intelligence in Diagnostic Radiology: Where Do We Stand, Challenges, and Opportunities. J Comput Assist Tomogr 2022; 46:78-90. [PMID: 35027520 DOI: 10.1097/rct.0000000000001247] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Artificial intelligence (AI) is the most revolutionary development in the health care industry in the current decade, with diagnostic imaging having the greatest share in such development. Machine learning and deep learning (DL) are subclasses of AI that show breakthrough performance in image analysis. They have become the state of the art in the field of image classification and recognition. Machine learning deals with the extraction of the important characteristic features from images, whereas DL uses neural networks to solve such problems with better performance. In this review, we discuss the current applications of machine learning and DL in the field of diagnostic radiology. Deep learning applications can be divided into medical imaging analysis and applications beyond analysis. In the field of medical imaging analysis, deep convolutional neural networks are used for image classification, lesion detection, and segmentation. Recurrent neural networks are also used when extracting information from electronic medical records and to augment the use of convolutional neural networks in the field of image classification. Generative adversarial networks have been explicitly used in generating high-resolution computed tomography and magnetic resonance images and to map computed tomography images from the corresponding magnetic resonance imaging. Beyond image analysis, DL can be used for quality control, workflow organization, and reporting. In this article, we review the most current AI models used in medical imaging research, providing a brief explanation of the various models described in the literature within the past 5 years. Emphasis is placed on the various DL models, as they are the most state-of-the-art in imaging analysis.
Collapse
|
32
|
Ma J, Bao L, Lou Q, Kong D. Transfer learning for automatic joint segmentation of thyroid and breast lesions from ultrasound images. Int J Comput Assist Radiol Surg 2021; 17:363-372. [PMID: 34881409 DOI: 10.1007/s11548-021-02505-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Accepted: 09/17/2021] [Indexed: 01/03/2023]
Abstract
PURPOSE Accurate and automatic segmentation of lesions from ultrasound (US) images plays a significant role in clinical applications. Nevertheless, it is extremely challenging because distinct components of heterogeneous lesions are similar to the background in US images. In our study, a transfer learning-based method is developed for fully automatic joint segmentation of nodular lesions. METHODS Transfer learning is a widely used method to build high-performing computer vision models. Our transfer learning model is a novel type of densely connected convolutional network (SDenseNet). Specifically, we pre-train SDenseNet on the ImageNet dataset. Our SDenseNet is then designed as a multi-channel model (denoted Mul-DenseNet) for automatically and jointly segmenting lesions. For comparison, our SDenseNet with different transfer learning strategies is applied to segmenting nodules. In our study, we find that more datasets for pre-training and multiple pre-training do not always work in segmentation of nodules, and the performance of transfer learning depends on a judicious choice of dataset and characteristics of targets. RESULTS Experimental results illustrate significantly better performance of the Mul-DenseNet compared to that of the other methods in the study. Specifically, for thyroid nodule segmentation, the overlap metric (OM), Dice ratio (DR), true-positive rate (TPR), false-positive rate (FPR) and modified Hausdorff distance (MHD) are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] mm, respectively; for breast nodule segmentation, OM, DR, TPR, FPR and MHD are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] mm, respectively. CONCLUSIONS The experimental results illustrate that our transfer learning models are very effective in segmentation of lesions, which also demonstrates the potential of our proposed Mul-DenseNet model in clinical applications. This model can reduce the heavy workload of physicians and help avoid misdiagnoses due to excessive fatigue. Moreover, lesion detection becomes easy and reproducible without medical expertise.
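SDenseNet and Mul-DenseNet are the authors' own designs, but the underlying transfer-learning pattern, initializing a densely connected backbone from ImageNet weights and fine-tuning it for lesion segmentation, can be sketched generically with torchvision as below. The segmentation head, channel count, and weights API string are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a DenseNet pre-trained on ImageNet (torchvision >= 0.13 weights API) and
# reuse its convolutional feature extractor as the encoder of a simple segmentation head.
backbone = models.densenet121(weights="DEFAULT").features

class DenseNetSegmenter(nn.Module):
    """Illustrative transfer-learning segmenter: ImageNet-initialized DenseNet
    encoder followed by a 1x1 conv head upsampled back to the input resolution."""
    def __init__(self, encoder, num_classes: int = 1):
        super().__init__()
        self.encoder = encoder                 # densenet121 features -> 1024 channels
        self.head = nn.Conv2d(1024, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.encoder(x)                # (N, 1024, H/32, W/32)
        logits = self.head(feats)
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

model = DenseNetSegmenter(backbone)
mask_logits = model(torch.randn(1, 3, 256, 256))
print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```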
Collapse
Affiliation(s)
- Jinlian Ma
- School of Microelectronics, Shandong University, Jinan, China.,Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, Shenzhen, China.,State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Lingyun Bao
- Department of Ultrasound, Hangzhou First Peoples Hospital, Zhejiang University, Hangzhou, China
| | - Qiong Lou
- School of Science, Zhejiang University of Sciences and Technology, Hangzhou, China
| | - Dexing Kong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, China.
| |
Collapse
|
33
|
Daulatabad R, Vega R, Jaremko JL, Kapur J, Hareendranathan AR, Punithakumar K. Integrating User-Input into Deep Convolutional Neural Networks for Thyroid Nodule Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2637-2640. [PMID: 34891794 DOI: 10.1109/embc46164.2021.9629959] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Delineation of thyroid nodule boundaries is necessary for cancer risk assessment and accurate categorization of nodules. Clinicians often use a manual or bounding-box approach for nodule assessment, which leads to subjective results. Consequently, agreement in thyroid nodule categorization is poor even among experts. Computer-aided diagnosis systems could reduce this variability by minimizing the extent of user interaction and by providing precise nodule segmentations. In this study, we present a novel approach for effective thyroid nodule segmentation and tracking using a single user click on the region of interest. When a user clicks on an ultrasound sweep, our proposed model can predict nodule segmentation over the entire sequence of frames. Quantitative evaluations show that the proposed method outperforms the bounding-box approach in terms of the Dice score on a large dataset of 372 ultrasound images. The proposed approach saves expert time and reduces the potential variability in thyroid nodule assessment. The proposed one-click approach can save clinicians the time required for annotating thyroid nodules within ultrasound images/sweeps. With minimal user interaction, we are able to identify the nodule boundary, which can further be used for volumetric measurement and characterization of the nodule. This approach can also be extended for fast labeling of large thyroid imaging datasets suitable for training machine-learning-based algorithms.
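One common way to feed a single user click into a segmentation CNN is to rasterize the click as a Gaussian guidance map and stack it with the image as an extra input channel; the sketch below illustrates that encoding only. The paper's exact integration mechanism is not described in the abstract, so the Gaussian width, shapes, and channel stacking here are assumptions.

```python
import numpy as np

def click_heatmap(shape, click_xy, sigma: float = 10.0) -> np.ndarray:
    """Encode a single user click as a Gaussian heatmap the same size as the image,
    so it can be stacked with the grayscale frame as a second input channel."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = click_xy
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2)).astype(np.float32)

# Illustrative use: stack an ultrasound frame and the click map into a 2-channel input
frame = np.random.rand(256, 256).astype(np.float32)      # placeholder ultrasound frame
guidance = click_heatmap(frame.shape, click_xy=(120, 90))
network_input = np.stack([frame, guidance], axis=0)       # shape (2, 256, 256)
print(network_input.shape)
```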
Collapse
|
34
|
Bassiouny R, Mohamed A, Umapathy K, Khan N. An Interpretable Object Detection-Based Model For The Diagnosis Of Neonatal Lung Diseases Using Ultrasound Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3029-3034. [PMID: 34891882 DOI: 10.1109/embc46164.2021.9630169] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Over the last few decades, lung ultrasound (LUS) has been increasingly used to diagnose and monitor different lung diseases in neonates. It is a noninvasive tool that allows a fast bedside examination while minimally handling the neonate. Acquiring a LUS scan is easy, but understanding the artifacts associated with each respiratory disease is challenging. Mixed artifact patterns found in different respiratory diseases may limit LUS readability by the operator. While machine learning (ML), and especially deep learning, can assist in automated analysis, simply feeding the ultrasound images to an ML model for diagnosis is not enough to earn the trust of medical professionals; the algorithm should instead output LUS features that are familiar to the operator. Therefore, in this paper we present a unique approach for extracting seven meaningful LUS features that can be easily associated with a specific pathological lung condition: normal pleura, irregular pleura, thick pleura, A-lines, coalescent B-lines, separate B-lines and consolidations. These artifacts can lead to early prediction of infants developing later respiratory distress symptoms. A single multi-class region-proposal-based object detection model, Faster R-CNN (fRCNN), was trained on lower posterior lung ultrasound videos to detect these LUS features, which are further linked to four common neonatal diseases. Our results show that fRCNN surpasses single-stage models such as RetinaNet and can successfully detect the aforementioned LUS features with a mean average precision of 86.4%. Instead of a fully automatic diagnosis from images without any interpretability, detection of such LUS features leaves the ultimate control of diagnosis to the clinician, which can result in a more trustworthy intelligent system.
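As a rough illustration of the region-proposal-based detector described above, a Faster R-CNN with seven LUS feature classes can be set up with torchvision as in the sketch below. The COCO-pretrained backbone, image size, and class index are illustrative assumptions; the paper's fRCNN configuration may differ.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 7  # seven LUS feature classes plus background

# Start from a COCO-pretrained detector and swap in a new box-predictor head
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# A training step expects images plus per-image target dicts with boxes and labels
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 50.0, 300.0, 200.0]]),
            "labels": torch.tensor([3])}]        # e.g. class 3 = "A-lines" (illustrative)
model.train()
loss_dict = model(images, targets)               # dict of classification/box-regression losses
print({k: float(v) for k, v in loss_dict.items()})
```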
Collapse
|
35
|
Liang X, Huang Y, Cai Y, Liao J, Chen Z. A Computer-Aided Diagnosis System and Thyroid Imaging Reporting and Data System for Dual Validation of Ultrasound-Guided Fine-Needle Aspiration of Indeterminate Thyroid Nodules. Front Oncol 2021; 11:611436. [PMID: 34692466 PMCID: PMC8529148 DOI: 10.3389/fonc.2021.611436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 09/16/2021] [Indexed: 12/02/2022] Open
Abstract
Purpose The fully automatic AI-Sonic computer-aided diagnosis (CAD) system was employed for the detection and diagnosis of benign and malignant thyroid nodules. The aim of this study was to investigate the efficiency of the AI-Sonic CAD system, which uses a deep learning algorithm, in improving the diagnostic accuracy of ultrasound-guided fine-needle aspiration (FNA). Methods A total of 138 thyroid nodules were collected from 124 patients and diagnosed by an expert, a novice, and the Thyroid Imaging Reporting and Data System (TI-RADS). Diagnostic efficiency and feasibility were compared among the expert, novice, and CAD system. The ability of the CAD system to enhance the diagnostic efficiency of novices was assessed. Moreover, with the experience of the expert as the gold standard, the value of the features detected by the CAD system was also analyzed. The efficiency of FNA was compared among the expert, novice, and CAD system to determine whether the CAD system is helpful for the management of FNA. Results In total, 56 malignant and 82 benign thyroid nodules were collected from the 124 patients (mean age, 46.4 ± 12.1 years; range, 12–70 years). The diagnostic areas under the curve of the CAD system, expert, and novice were 0.919, 0.891, and 0.877, respectively (p < 0.05). In regard to feature detection, there were no significant differences in margin and composition between the benign and malignant nodules (p > 0.05), while echogenicity and the existence of echogenic foci were of great significance (p < 0.05). For the recommendation of FNA, the results showed that the CAD system had better performance than the expert and novice (p < 0.05). Conclusions Precise diagnosis and recommendation of FNA are continuing hot topics for thyroid nodules. The CAD system based on deep learning had better accuracy and feasibility for the diagnosis of thyroid nodules and was useful for avoiding unnecessary FNA. The CAD system is potentially an effective auxiliary approach for diagnosis and asymptomatic screening, especially in developing areas.
Collapse
Affiliation(s)
- Xiaowen Liang
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Yingmin Huang
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Yongyi Cai
- Department of Ultrasound, Liwan Center Hospital of Guangzhou, Guangzhou, China
| | - Jianyi Liao
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Zhiyi Chen
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China.,The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
| |
Collapse
|
36
|
Cheng CY, Chiu IM, Hsu MY, Pan HY, Tsai CM, Lin CHR. Deep Learning Assisted Detection of Abdominal Free Fluid in Morison's Pouch During Focused Assessment With Sonography in Trauma. Front Med (Lausanne) 2021; 8:707437. [PMID: 34631730 PMCID: PMC8494971 DOI: 10.3389/fmed.2021.707437] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 08/25/2021] [Indexed: 12/03/2022] Open
Abstract
Background: The use of focused assessment with sonography in trauma (FAST) enables clinicians to rapidly screen for injury at the bedside. Pre-hospital FAST improves diagnostic accuracy and streamlines patient care, leading to disposition to appropriate treatment centers. In this study, we determine the accuracy of artificial intelligence model-assisted free-fluid detection in FAST examinations and subsequently establish an automated feedback system, which can help inexperienced sonographers improve their interpretation ability and image acquisition skills. Methods: This is a single-center study of patients admitted to the emergency room from January 2020 to March 2021. We collected 324 patient records for model training, 36 patient records for validation, and another 36 patient records for testing. We balanced the positive and negative Morison's pouch free-fluid detection groups in a 1:1 ratio. The deep learning (DL) model Residual Network 50-Version 2 (ResNet50-V2) was used for training and validation. Results: The accuracy, sensitivity, and specificity of the model for ascites prediction were 0.961, 0.976, and 0.947, respectively, in the validation set and 0.967, 0.985, and 0.913, respectively, in the test set. Regarding feedback prediction, the model correctly classified qualified and non-qualified images with an accuracy of 0.941 in both the validation and test sets. Conclusions: The DL algorithm in ResNet50-V2 is able to detect free fluid in Morison's pouch with high accuracy. The automated feedback and instruction system could help inexperienced sonographers improve their interpretation ability and image acquisition skills.
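A transfer-learning setup of the kind described above, an ImageNet-pretrained ResNet50-V2 backbone with a binary "free fluid present/absent" head, can be sketched as below. The sketch is in Keras because a ResNet50-V2 application model is readily available there; the input size, pooling, optimizer, and head are assumptions, not the study's training configuration.

```python
import tensorflow as tf

# ImageNet-pretrained ResNet50-V2 backbone with a binary head for
# free fluid detection in the Morison's pouch view.
backbone = tf.keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                            input_shape=(224, 224, 3), pooling="avg")
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = backbone(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])
model.summary()
```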
Collapse
Affiliation(s)
- Chi-Yung Cheng
- Department of Emergency Medicine, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan.,Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
| | - I-Min Chiu
- Department of Emergency Medicine, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan.,Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
| | - Ming-Ya Hsu
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
| | - Hsiu-Yung Pan
- Department of Emergency Medicine, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan
| | - Chih-Min Tsai
- Department of Pediatrics, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung, Taiwan
| | - Chun-Hung Richard Lin
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
| |
Collapse
|
37
|
Magrelli S, Valentini P, De Rose C, Morello R, Buonsenso D. Classification of Lung Disease in Children by Using Lung Ultrasound Images and Deep Convolutional Neural Network. Front Physiol 2021; 12:693448. [PMID: 34512375 PMCID: PMC8432935 DOI: 10.3389/fphys.2021.693448] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2021] [Accepted: 08/05/2021] [Indexed: 01/12/2023] Open
Abstract
Bronchiolitis is the most common cause of hospitalization of children in the first year of life, and pneumonia is the leading cause of infant mortality worldwide. Lung ultrasound (LUS) is a novel imaging diagnostic tool for the early detection of respiratory distress and offers several advantages due to its low cost, relative safety, portability, and easy repeatability. More precise and efficient diagnostic and therapeutic strategies are needed. Deep-learning-based computer-aided diagnosis (CADx) systems using chest X-ray images have recently demonstrated their potential as a screening tool for pulmonary disease (such as COVID-19 pneumonia). We present the first computer-aided diagnostic scheme for LUS images of pulmonary diseases in children. In this study, we trained from scratch four state-of-the-art deep-learning models (VGG19, Xception, Inception-v3 and Inception-ResNet-v2) for detecting children with bronchiolitis and pneumonia. In our experiments we used a data set consisting of 5,907 images from 33 healthy infants, 3,286 images from 22 infants with bronchiolitis, and 4,769 images from 7 children suffering from bacterial pneumonia. Using four-fold cross-validation, we implemented one binary classification (healthy vs. bronchiolitis) and one three-class classification (healthy vs. bronchiolitis vs. bacterial pneumonia). Affine transformations were applied for data augmentation. Hyperparameters were optimized for the learning rate, dropout regularization, batch size, and epoch iteration. For healthy vs. bronchiolitis, the Inception-ResNet-v2 model provides the highest classification performance on the test sets, with 97.75% accuracy, 97.75% sensitivity, and 97% specificity, whereas for healthy vs. bronchiolitis vs. bacterial pneumonia, the Inception-v3 model provides the best results, with 91.5% accuracy, 91.5% sensitivity, and 95.86% specificity. We performed gradient-weighted class activation mapping (Grad-CAM) visualization, and the results were qualitatively evaluated by a pediatrician expert in LUS imaging: heatmaps highlight areas containing diagnostically relevant LUS imaging artifacts, e.g., A-lines, B-lines, pleural lines, and consolidations. These complex patterns are automatically learnt from the data, thus avoiding the use of hand-crafted features. The proposed framework might thus aid in the development of an accessible and rapid decision-support method for diagnosing pulmonary diseases in children using LUS imaging.
Collapse
Affiliation(s)
| | - Piero Valentini
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy.,Global Health Research Institute, Istituto di Igiene, Università Cattolica del Sacro Cuore, Rome, Italy
| | - Cristina De Rose
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Rosa Morello
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
| | - Danilo Buonsenso
- Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy.,Global Health Research Institute, Istituto di Igiene, Università Cattolica del Sacro Cuore, Rome, Italy.,Dipartimento di Scienze Biotecnologiche di Base, Cliniche Intensivologiche e Perioperatorie, Università Cattolica del Sacro Cuore, Rome, Italy
| |
Collapse
|
38
|
|
39
|
Kim GR, Lee E, Kim HR, Yoon JH, Park VY, Kwak JY. Convolutional Neural Network to Stratify the Malignancy Risk of Thyroid Nodules: Diagnostic Performance Compared with the American College of Radiology Thyroid Imaging Reporting and Data System Implemented by Experienced Radiologists. AJNR Am J Neuroradiol 2021; 42:1513-1519. [PMID: 33985947 DOI: 10.3174/ajnr.a7149] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Accepted: 03/06/2021] [Indexed: 12/12/2022]
Abstract
BACKGROUND AND PURPOSE Comparison of the diagnostic performance for thyroid cancer on ultrasound between a convolutional neural network and visual assessment by radiologists has been inconsistent. Thus, we aimed to evaluate the diagnostic performance of the convolutional neural network compared with the American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) for the diagnosis of thyroid cancer using ultrasound images. MATERIALS AND METHODS From March 2019 to September 2019, seven hundred sixty thyroid nodules (≥10 mm) in 757 patients were diagnosed as benign or malignant through fine-needle aspiration, core needle biopsy, or an operation. Experienced radiologists assessed the sonographic descriptors of the nodules, and 1 of 5 American College of Radiology TI-RADS categories was assigned. The convolutional neural network provided malignancy risk percentages for nodules based on sonographic images. Sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were calculated with cutoff values using the Youden index and compared between the convolutional neural network and the American College of Radiology TI-RADS. Areas under the receiver operating characteristic curve were also compared. RESULTS Of 760 nodules, 176 (23.2%) were malignant. At an optimal threshold derived from the Youden index, sensitivity and negative predictive values were higher with the convolutional neural network than with the American College of Radiology TI-RADS (81.8% versus 73.9%, P = .009; 94.0% versus 92.2%, P = .046). Specificity, accuracy, and positive predictive values were lower with the convolutional neural network than with the American College of Radiology TI-RADS (86.1% versus 93.7%, P < .001; 85.1% versus 89.1%, P = .003; and 64.0% versus 77.8%, P < .001). The area under the curve of the convolutional neural network was higher than that of the American College of Radiology TI-RADS (0.917 versus 0.891, P = .017). CONCLUSIONS The convolutional neural network provided diagnostic performance comparable with that of the American College of Radiology TI-RADS categories assigned by experienced radiologists.
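The optimal cutoff derived from the Youden index, as used above to dichotomize the CNN's malignancy-risk output before comparing it with ACR TI-RADS, can be computed directly from an ROC curve. The sketch below uses purely illustrative scores and labels, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative CNN malignancy-risk scores (0-100%) and biopsy-proven labels
scores = np.array([12, 85, 40, 97, 5, 63, 22, 78, 55, 91])
labels = np.array([0, 1, 0, 1, 0, 1, 0, 0, 1, 1])

fpr, tpr, thresholds = roc_curve(labels, scores)
youden = tpr - fpr                       # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)
print("AUC:", roc_auc_score(labels, scores))
print("Optimal cutoff:", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```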
Collapse
Affiliation(s)
- G R Kim
- From the Department of Radiology (G.R.K., J.H.Y., V.Y.P., J.Y.K.), Severance Hospital, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - E Lee
- Department of Computational Science and Engineering (E.L.), Yonsei University, Seoul, Korea
| | - H R Kim
- Biostatistics Collaboration Unit (H.R.K.), Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea
| | - J H Yoon
- From the Department of Radiology (G.R.K., J.H.Y., V.Y.P., J.Y.K.), Severance Hospital, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - V Y Park
- From the Department of Radiology (G.R.K., J.H.Y., V.Y.P., J.Y.K.), Severance Hospital, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - J Y Kwak
- From the Department of Radiology (G.R.K., J.H.Y., V.Y.P., J.Y.K.), Severance Hospital, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| |
Collapse
|
40
|
The value of the Demetics ultrasound-assisted diagnosis system in the differential diagnosis of benign from malignant thyroid nodules and analysis of the influencing factors. Eur Radiol 2021; 31:7936-7944. [PMID: 33856523 DOI: 10.1007/s00330-021-07884-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 02/18/2021] [Accepted: 03/15/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVES To evaluate the value of Demetics and to explore whether Demetics can help radiologists with varying years of experience in the differential diagnosis of benign from malignant thyroid nodules. METHODS The clinical application value of Demetics was assessed by comparing the diagnostic accuracy of radiologists before and after applying Demetics. This retrospective analysis included 284 thyroid nodules that underwent pathological examination. Two different combined methods were applied. With method 1, the original TI-RADS classification was forcibly upgraded or downgraded by one level when Demetics classified the thyroid nodule as malignant or benign. With method 2, the TI-RADS classification and the benign or malignant classification of the thyroid nodule were flexibly adjusted after the physician reviewed the Demetics results. RESULTS Demetics exhibited a higher sensitivity than junior radiologist 1 (pD1 = 0.029) and was similar in sensitivity to the two senior radiologists. Demetics had a higher AUC than both junior radiologists (pD1 = 0.042, pD2 = 0.038) and an AUC similar to that of the senior radiologists. The sensitivity (p = 0.035) and AUC (p = 0.031) of junior radiologist 1 and the specificity (p < 0.001) and AUC (p = 0.026) of junior radiologist 2 improved with combined method 1. The AUC of junior radiologist 2 improved with combined method 2 (p = 0.045). Factors influencing the diagnostic results of Demetics include sonographic signs (echogenicity and echogenic foci), contrast of the image, and nodule size. CONCLUSION Demetics exhibited high sensitivity and accuracy in the differential diagnosis of benign from malignant thyroid nodules. Demetics could improve the diagnostic accuracy of junior radiologists. KEY POINTS • Demetics exhibited high sensitivity and accuracy in the differential diagnosis of benign from malignant thyroid nodules. • Demetics could improve the diagnostic accuracy of junior radiologists in the differential diagnosis of benign from malignant thyroid nodules. • Factors influencing the diagnostic results of Demetics include sonographic signs (echogenicity and echogenic foci), contrast of the image, and nodule size.
Collapse
|
41
|
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/28/2021] [Accepted: 04/11/2021] [Indexed: 12/13/2022]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, with the continual emergence of advanced ultrasonic technologies and a well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. Diagnostic performance is inevitably reduced by the intrinsically high operator dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies; we then review the rapidly growing applications of AI, especially DL technology, in the field of US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation; finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Collapse
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
| | - Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
| | - Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
| | - Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
| |
Collapse
|
42
|
Zhu J, Zhang S, Yu R, Liu Z, Gao H, Yue B, Liu X, Zheng X, Gao M, Wei X. An efficient deep convolutional neural network model for visual localization and automatic diagnosis of thyroid nodules on ultrasound images. Quant Imaging Med Surg 2021; 11:1368-1380. [PMID: 33816175 DOI: 10.21037/qims-20-538] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Background The aim of this study was to construct a deep convolutional neural network (CNN) model for the localization and diagnosis of thyroid nodules on ultrasound and to evaluate its diagnostic performance. Methods We developed and trained a deep CNN model called the Brief Efficient Thyroid Network (BETNET) using 16,401 ultrasound images. Based on the model, we developed a computer-aided diagnosis (CAD) system to localize and differentiate thyroid nodules. The validation dataset (1,000 images) was used to compare the diagnostic performance of the model with three state-of-the-art algorithms. We used an internal test set (300 images) to evaluate the BETNET model by comparing it with diagnoses from five radiologists with varying degrees of experience in thyroid nodule diagnosis. Lastly, we demonstrated the general applicability of our artificial intelligence (AI) system for diagnosing thyroid cancer in an external test set (1,032 images). Results The BETNET model accurately detected thyroid nodules in visualization experiments. The model demonstrated higher values for the area under the receiver operating characteristic curve (AUC-ROC) [0.983, 95% confidence interval (CI): 0.973-0.990], sensitivity (99.19%), accuracy (98.30%), and Youden index (0.9663) than the three state-of-the-art algorithms (P<0.05). In the internal test dataset, the diagnostic accuracy of the BETNET model was 91.33%, which was markedly higher than the accuracy of one experienced (85.67%) and two less experienced radiologists (77.67% and 69.33%). The area under the ROC curve of the BETNET model (0.951) was similar to that of the two highly skilled radiologists (0.940 and 0.953) and significantly higher than that of one experienced and two less experienced radiologists (P<0.01). The kappa coefficient between the BETNET model and the pathology results indicated good agreement (0.769). In addition, the BETNET model achieved excellent diagnostic performance (AUC =0.970, 95% CI: 0.958-0.980) when applied to ultrasound images from another independent hospital. Conclusions We developed a deep learning model that can accurately locate and automatically diagnose thyroid nodules on ultrasound images. The BETNET model exhibited better diagnostic performance than three state-of-the-art algorithms and performed similarly in diagnosis to the experienced radiologists. The BETNET model has the potential to be applied to ultrasound images from other hospitals.
Collapse
Affiliation(s)
- Jialin Zhu
- Department of Diagnostic and Therapeutic Ultrasonography, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| | - Sheng Zhang
- Department of Diagnostic and Therapeutic Ultrasonography, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| | - Ruiguo Yu
- College of Intelligence and Computing, Tianjin University, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin Key Laboratory of Advanced Networking, Tianjin, China
| | - Zhiqiang Liu
- College of Intelligence and Computing, Tianjin University, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin Key Laboratory of Advanced Networking, Tianjin, China
| | - Hongyan Gao
- Tianjin Xiqing District Women and Children's Health and Family Planning Service Center, Tianjin, China
| | - Bing Yue
- Department of Diagnostic and Therapeutic Ultrasonography, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| | - Xun Liu
- Department of Ultrasonography, the Fifth Central Hospital of Tianjin, Tianjin, China
| | - Xiangqian Zheng
- Department of Thyroid and Neck Tumor, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| | - Ming Gao
- Department of Thyroid and Neck Tumor, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| | - Xi Wei
- Department of Diagnostic and Therapeutic Ultrasonography, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, China
| |
Collapse
|
43
|
|
44
|
Thyroid nodule recognition using a joint convolutional neural network with information fusion of ultrasound images and radiofrequency data. Eur Radiol 2021; 31:5001-5011. [PMID: 33409774 DOI: 10.1007/s00330-020-07585-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 11/06/2020] [Accepted: 12/01/2020] [Indexed: 01/25/2023]
Abstract
OBJECTIVE To develop a deep learning-based method with information fusion of US images and RF signals for better classification of thyroid nodules (TNs). METHODS One hundred sixty-three pairs of US images and RF signals of TNs from a cohort of adult patients were used for analysis. We developed an information fusion-based joint convolutional neural network (IF-JCNN) for the differential diagnosis of malignant and benign TNs. The IF-JCNN contains two branched CNNs for deep feature extraction: one for US images and the other for RF signals. The extracted features are fused at the backend of IF-JCNN for TN classification. RESULTS Across 5-fold cross-validation, the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) obtained by using the IF-JCNN with both US images and RF signals as inputs for TN classification were 0.896 (95% CI 0.838-0.938), 0.885 (95% CI 0.804-0.941), 0.910 (95% CI 0.815-0.966), and 0.956 (95% CI 0.926-0.987), respectively, which were better than those obtained by using only US images: 0.822 (0.755-0.878; p = 0.0044), 0.792 (0.679-0.868, p = 0.0091), 0.866 (0.760-0.937, p = 0.197), and 0.901 (0.855-0.948, p = .0398), or only RF signals: 0.767 (0.694-0.829, p < 0.001), 0.781 (0.685-0.859, p = 0.0037), 0.746 (0.625-0.845, p < 0.001), 0.845 (0.786-0.903, p < 0.001). CONCLUSIONS The proposed IF-JCNN model fills the gap left by using only US images in CNNs to characterize TNs, and it may serve as a promising tool for assisting the diagnosis of thyroid cancer. KEY POINTS • Raw radiofrequency signals acquired before ultrasound image formation of thyroid nodules provide useful information that is not carried by ultrasound images. • The information carried by raw radiofrequency signals and ultrasound images of thyroid nodules is complementary. • The performance of a deep convolutional neural network for diagnosing thyroid nodules can be significantly improved by fusing US images and RF signals in the model, compared with using US images alone.
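The backend fusion idea in IF-JCNN, extracting deep features separately from the US image and the RF data and concatenating them before classification, can be sketched as below. The branch designs, the RF input length, and the treatment of the RF data as a 1-D signal are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Sketch of feature-level fusion: a 2-D CNN branch for the B-mode image and a
    1-D CNN branch for the raw RF signal, concatenated before a benign/malignant head."""
    def __init__(self):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rf_branch = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(16 + 16, 2)  # two-class (benign/malignant) output

    def forward(self, image, rf):
        return self.head(torch.cat([self.image_branch(image), self.rf_branch(rf)], dim=1))

model = FusionClassifier()
logits = model(torch.randn(2, 1, 224, 224), torch.randn(2, 1, 4096))
print(logits.shape)  # torch.Size([2, 2])
```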
Collapse
|
45
|
CARNet: Automatic Cerebral Aneurysm Classification in Time-of-Flight MR Angiography by Leveraging Recurrent Neural Networks. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
46
|
Pasyar P, Mahmoudi T, Kouzehkanan SZM, Ahmadian A, Arabalibeik H, Soltanian N, Radmard AR. Hybrid classification of diffuse liver diseases in ultrasound images using deep convolutional neural networks. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2020.100496] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
|
47
|
Yang B, Yan M, Yan Z, Zhu C, Xu D, Dong F. Segmentation and classification of thyroid follicular neoplasm using cascaded convolutional neural network. Phys Med Biol 2020; 65:245040. [PMID: 33137800 DOI: 10.1088/1361-6560/abc6f2] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
In this paper, we present a segmentation and classification method for thyroid follicular neoplasms based on a combination of the prior-based level set method and a deep convolutional neural network. The proposed method aims to discriminate thyroid follicular adenoma (TFA) from follicular thyroid carcinoma (FTC) in ultrasound images. In appearance, these two kinds of tumours have similar shapes, sizes and contrasts; therefore, it is difficult even for ultrasound specialists to distinguish them. Because of the complex background in thyroid ultrasound images, before distinguishing TFA and FTC, the lesions must be segmented from the whole image for each patient. The main challenge of segmentation is that the images often have weak edges and heterogeneous regions. The main issue of classification is that the accuracy depends on the features extracted from the segmentation results. To solve these problems, we conduct the two tasks, i.e. segmentation and classification, with a cascaded learning architecture. For segmentation, to obtain more accurate results, we exploit the Res-U-net framework and the prior-based level set method to enhance their respective abilities. The classification network is then trained by sharing the shallow layers of the segmentation network. Testing the proposed method on real patient data shows that it is able to segment the lesion areas in thyroid ultrasound images with a Dice score of 92.65% and to distinguish TFA from FTC with a classification accuracy of 96.00%.
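The Dice score used above to report segmentation quality is twice the overlap between the predicted and reference masks divided by the sum of their areas; a small sketch with a toy 2x2 example:

```python
import numpy as np

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Illustrative 2x2 example: two of the foreground pixels agree -> Dice = 2*2 / (3+3)
print(dice_score(np.array([[1, 1], [1, 0]]), np.array([[1, 1], [0, 1]])))  # ~0.667
```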
Collapse
Affiliation(s)
- Bailin Yang
- School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, People's Republic of China. Bailin Yang and Meiying Yan are co-first authors
| | | | | | | | | | | |
Collapse
|
48
|
Yoon J, Lee E, Koo JS, Yoon JH, Nam KH, Lee J, Jo YS, Moon HJ, Park VY, Kwak JY. Artificial intelligence to predict the BRAFV600E mutation in patients with thyroid cancer. PLoS One 2020; 15:e0242806. [PMID: 33237975 PMCID: PMC7688114 DOI: 10.1371/journal.pone.0242806] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 11/09/2020] [Indexed: 12/22/2022] Open
Abstract
Purpose To investigate whether a computer-aided diagnosis (CAD) program developed using the deep learning convolutional neural network (CNN) on neck US images can predict the BRAFV600E mutation in thyroid cancer. Methods 469 thyroid cancers in 469 patients were included in this retrospective study. A CAD program recently developed using the deep CNN provided risks of malignancy (0–100%) as well as binary results (cancer or not). Using the CAD program, we calculated the risk of malignancy based on a US image of each thyroid nodule (CAD value). Univariate and multivariate logistic regression analyses were performed including patient demographics, the American College of Radiology (ACR) Thyroid Imaging, Reporting and Data System (TIRADS) categories and risks of malignancy calculated through CAD to identify independent predictive factors for the BRAFV600E mutation in thyroid cancer. The predictive power of the CAD value and final multivariable model for the BRAFV600E mutation in thyroid cancer were measured using the area under the receiver operating characteristic (ROC) curves. Results In this study, 380 (81%) patients were positive and 89 (19%) patients were negative for the BRAFV600E mutation. On multivariate analysis, older age (OR = 1.025, p = 0.018), smaller size (OR = 0.963, p = 0.006), and higher CAD value (OR = 1.016, p = 0.004) were significantly associated with the BRAFV600E mutation. The CAD value yielded an AUC of 0.646 (95% CI: 0.576, 0.716) for predicting the BRAFV600E mutation, while the multivariable model yielded an AUC of 0.706 (95% CI: 0.576, 0.716). The multivariable model showed significantly better performance than the CAD value alone (p = 0.004). Conclusion Deep learning-based CAD for thyroid US can help us predict the BRAFV600E mutation in thyroid cancer. More multi-center studies with more cases are needed to further validate our study results.
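The multivariable model above (age, nodule size, and CAD value as predictors of BRAFV600E status) is a standard logistic regression. The sketch below uses synthetic data purely to show how odds ratios and p-values of that kind are obtained; none of the variable effects or numbers reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration: age (years), nodule size (mm), CAD malignancy risk (%),
# and BRAFV600E status (1 = mutation present). The real study used 469 nodules.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(50, 12, 200),
    "size_mm": rng.normal(15, 6, 200),
    "cad_value": rng.uniform(0, 100, 200),
})
logit_p = -2 + 0.03 * df.age - 0.05 * df.size_mm + 0.02 * df.cad_value
df["braf"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["age", "size_mm", "cad_value"]])
fit = sm.Logit(df["braf"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per unit increase in each predictor
print(fit.pvalues)
```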
Collapse
Affiliation(s)
- Jiyoung Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University, College of Medicine, Seoul, South Korea
| | - Eunjung Lee
- Department of Computational Science and Engineering, Yonsei University, Seoul, South Korea
| | - Ja Seung Koo
- Department of Pathology, Severance Hospital, Yonsei University, College of Medicine, Seoul, South Korea
| | - Jung Hyun Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University, College of Medicine, Seoul, South Korea
| | - Kee-Hyun Nam
- Department of Surgery, Severance Hospital, Yonsei University, College of Medicine, Seoul, South Korea
| | - Jandee Lee
- Department of Surgery, Severance Hospital, Yonsei University, College of Medicine, Seoul, South Korea
| | - Young Suk Jo
- Department of Internal Medicine, Severance Hospital, Yonsei University, College of Medicine, Seoul, South Korea
| | - Hee Jung Moon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University, College of Medicine, Seoul, South Korea
| | - Vivian Youngjean Park
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University, College of Medicine, Seoul, South Korea
| | - Jin Young Kwak
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University, College of Medicine, Seoul, South Korea
| |
Collapse
|
49
|
Zhang L, Zhuang Y, Hua Z, Han L, Li C, Chen K, Peng Y, Lin J. Automated location of thyroid nodules in ultrasound images with improved YOLOV3 network. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2020; 29:75-90. [PMID: 33136086 DOI: 10.3233/xst-200775] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
BACKGROUND Thyroid ultrasonography is widely used to diagnose thyroid nodules in clinics. Automatic localization of nodules can promote the development of intelligent thyroid diagnosis and reduce the workload of radiologists. However, ultrasound images have low contrast and high noise, and thyroid nodules are diverse in shape and vary greatly in size, so thyroid nodule detection in ultrasound images remains a challenging task. OBJECTIVE This study proposes an automatic detection algorithm to locate nodules in B-mode ultrasound images and Doppler ultrasound images. This method can be used to screen thyroid nodules and provide a basis for subsequent automatic segmentation and intelligent diagnosis. METHODS We develop and optimize an improved YOLOV3 model for detecting thyroid nodules in ultrasound images in B-mode and Doppler mode. Improvements include (1) using the high-resolution network (HRNet) as the base network to gradually extract high-level semantic features and reduce missed detections and misdetections, (2) optimizing the loss function for single-target detection such as nodules, and (3) obtaining the anchor boxes by clustering the bounding boxes of real nodules in the dataset. RESULTS Experimental results on 8,000 clinical ultrasound images show that the new method developed and tested in this study can effectively detect thyroid nodules. The method achieves 94.53% mean precision and 95.00% mean recall. CONCLUSIONS The study demonstrates a new automated method that achieves high detection accuracy and effectively locates thyroid nodules in various ultrasound images without any user interaction, which indicates its potential clinical application value for thyroid nodule screening.
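The anchor-box step mentioned in improvement (3), clustering the annotated nodule boxes so the cluster centres become the detector's anchors, can be sketched as below. The box sizes are illustrative, and YOLOv3 originally clusters with a 1 - IoU distance rather than the Euclidean k-means used in this simplified sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative nodule bounding boxes as (width, height) in pixels; a real run would
# use every annotated nodule box in the training set.
boxes_wh = np.array([[40, 30], [55, 42], [120, 90], [200, 150], [35, 28],
                     [150, 110], [60, 45], [210, 170], [130, 95]])

# Cluster box sizes into k groups; the cluster centres become the YOLO anchor boxes.
k = 3
anchors = KMeans(n_clusters=k, n_init=10, random_state=0).fit(boxes_wh).cluster_centers_
print(np.round(anchors[np.argsort(anchors[:, 0])]))  # anchors sorted by width
```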
Collapse
Affiliation(s)
- Ling Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | - Yan Zhuang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | - Zhan Hua
- China-Japan Friendship Hospital, Beijing, China
| | - Lin Han
- College of Biomedical Engineering, Sichuan University, Chengdu, China.,Highong Intellimage Medical Technology Tianjin Co., Ltd, Tianjin, China
| | - Cheng Li
- China-Japan Friendship Hospital, Beijing, China
| | - Ke Chen
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| | - Yulan Peng
- West China Hospital of Sichuan University, Chengdu, China
| | - Jiangli Lin
- College of Biomedical Engineering, Sichuan University, Chengdu, China
| |
Collapse
|
50
|
Computer aided diagnosis of thyroid nodules based on the devised small-datasets multi-view ensemble learning. Med Image Anal 2020; 67:101819. [PMID: 33049580 DOI: 10.1016/j.media.2020.101819] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Revised: 07/12/2020] [Accepted: 09/08/2020] [Indexed: 11/22/2022]
Abstract
With the development of deep learning, its application to the diagnosis of benign and malignant thyroid nodules has attracted wide attention. However, medical images are difficult to obtain, resulting in an insufficient amount of data, which conflicts with the large amount of data required to train effective deep learning diagnostic models. A multi-view ensemble learning method based on a voting mechanism is proposed herein to boost the performance of models trained on small-dataset thyroid nodule ultrasound images. The method integrates three kinds of diagnosis results obtained from a three-view dataset composed of thyroid nodule ultrasound images, medical features extracted from the U-Net output, and useful features selected by mRMR from the statistical and texture features. To obtain preliminary diagnosis results, the images are used to train GoogleNet. To improve the results, supplementary methods are proposed based on the medical features and the selected features. To analyze the contribution of these features and acquire two further groups of diagnosis results, the designed Xgboost classifier is applied to the two groups of features, respectively. The final boosted results are then obtained through a majority voting mechanism. Furthermore, the proposed method is applied to diagnose sequence images (images extracted frame by frame from videos) to address the poor results caused by slight frame-to-frame differences. Finally, better final results are obtained for both the normal dataset and the sequence dataset (consisting of sequence images). Compared with the accuracies obtained by training deep learning models on the small datasets alone, the diagnostic accuracies on the above two datasets are improved to 92.11% and 92.54%, respectively, by the proposed method.
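The final majority-voting step over the three groups of diagnosis results can be expressed compactly; the sketch below uses illustrative branch outputs, not the study's predictions.

```python
import numpy as np

def majority_vote(*predictions: np.ndarray) -> np.ndarray:
    """Combine several per-case benign(0)/malignant(1) predictions by majority vote."""
    stacked = np.stack(predictions)                      # shape (n_views, n_cases)
    return (stacked.sum(axis=0) > stacked.shape[0] / 2).astype(int)

# Illustrative outputs of the three branches (image CNN, medical-feature model,
# selected-feature model) for five nodules
p_image    = np.array([1, 0, 1, 1, 0])
p_medical  = np.array([1, 0, 0, 1, 0])
p_selected = np.array([0, 0, 1, 1, 1])
print(majority_vote(p_image, p_medical, p_selected))     # [1 0 1 1 0]
```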
Collapse
|