1
Wang X, Yang YQ, Cai S, Li JC, Wang HY. Deep-learning-based sampling position selection on color Doppler sonography images during renal artery ultrasound scanning. Sci Rep 2024; 14:11768. PMID: 38782971; PMCID: PMC11116437; DOI: 10.1038/s41598-024-60355-5.
Abstract
Accurate selection of sampling positions is critical in renal artery ultrasound examinations, and the potential of utilizing deep learning (DL) for assisting in this selection has not been previously evaluated. This study aimed to evaluate the effectiveness of DL object detection technology applied to color Doppler sonography (CDS) images in assisting sampling position selection. A total of 2004 patients who underwent renal artery ultrasound examinations were included in the study. CDS images from these patients were categorized into four groups based on the scanning position: abdominal aorta (AO), normal renal artery (NRA), renal artery stenosis (RAS), and intrarenal interlobular artery (IRA). Seven object detection models, including three two-stage models (Faster R-CNN, Cascade R-CNN, and Double Head R-CNN) and four one-stage models (RetinaNet, YOLOv3, FoveaBox, and Deformable DETR), were trained to predict the sampling position, and their predictive accuracies were compared. The Double Head R-CNN model exhibited significantly higher average accuracies on both parameter optimization and validation datasets (89.3 ± 0.6% and 88.5 ± 0.3%, respectively) compared to other methods. On clinical validation data, the predictive accuracies of the Double Head R-CNN model for all four types of images were significantly higher than those of the other methods. The DL object detection model shows promise in assisting inexperienced physicians in improving the accuracy of sampling position selection during renal artery ultrasound examinations.
Affiliation(s)
- Xin Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Yu-Qing Yang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Sheng Cai
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Jian-Chu Li
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Hong-Yan Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
2
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. PMID: 37370748; PMCID: PMC10296633; DOI: 10.3390/cancers15123139.
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but its high operator dependency increases the rate of false negatives. Underserved areas lack sufficient US expertise to diagnose breast lesions, resulting in delayed management. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring patient prognosis. This article reviews recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recent original research to analyze which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that models used for lesion classification achieve the highest performance among all applications. We also find that fewer studies have addressed prognosis than diagnosis. Finally, we discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson
- Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
3
Shen W, Sun Y, Song Y, Cui L, Jiang L. Learning curve in ultrasound-guided vacuum-assisted excision of breast lesions for surgeons and ultrasound physicians. Quant Imaging Med Surg 2023; 13:1478-1487. PMID: 36915354; PMCID: PMC10006116; DOI: 10.21037/qims-22-573.
Abstract
Background The varying experience of surgeons and ultrasound physicians, and the collaboration between them, may affect operation time and efficiency. We evaluated the learning curve of ultrasound-guided vacuum-assisted excision (VAE) of breast lesions performed collaboratively by different physicians, and assessed the characteristics associated with operation time. Methods The sample population of this retrospective study was divided into two groups: 49 consecutive patient surgeries completed by skilled surgeons and novice ultrasound physicians (U group), and 30 consecutive patient surgeries completed by skilled ultrasound physicians and novice surgeons (S group). Cumulative summation graphs were used to evaluate operation time and calculate the turning point of the learning curve. Patients in the U and S groups were divided into an exploration stage and a proficiency stage according to the turning point, and the differences in influencing factors were compared. A total of 548 patients who underwent vacuum-assisted breast excision performed by a combination of skilled surgeons and skilled ultrasound physicians were selected as the reference group (R group). The differences among the three groups were compared, and the relationship between operation time and other factors in the different groups was analyzed using linear regression. Results The best-fitting learning curve for the sample population was a quadratic equation, with the turning point at the 19th case in the U group and the 14th case in the S group. The total operation times in the proficiency stage were significantly shorter than those in the exploration stage in both the U and S groups (P=0.012 and P=0.003, respectively). Patient age and the long diameter, short diameter, and depth of the masses were related to operation time. Conclusions Our data suggest that different learning curves exist in ultrasound-guided VAE depending on the experience stages of the collaborating surgeons and ultrasound physicians. With the accumulation of experience, ultrasound-guided VAE of breast lesions can be performed safely.
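The cumulative-summation (CUSUM) analysis described in this abstract can be sketched in a few lines. The operation times below are hypothetical, and the turning point is taken as the peak of the CUSUM curve — a common simplification of the quadratic curve fitting the authors used:

```python
# Sketch of a CUSUM learning curve over consecutive operation times.
# The times below are hypothetical, not the study's data.

def cusum(times):
    """Cumulative sum of each case's deviation from the mean time."""
    mean_t = sum(times) / len(times)
    curve, running = [], 0.0
    for t in times:
        running += t - mean_t
        curve.append(running)
    return curve

# Early cases are slow (exploration stage), later ones fast (proficiency).
times = [40, 38, 39, 36, 35, 30, 28, 27, 26, 25]
curve = cusum(times)

# The peak of the curve separates the two stages.
turning_point = curve.index(max(curve)) + 1  # 1-based case number
```

While the operator is slower than average the curve rises; once case times drop below the mean it falls, so the peak marks the transition between stages.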
Affiliation(s)
- Weiwei Shen
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Yan Sun
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Yantao Song
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Ligang Cui
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Ling Jiang
- Department of Ultrasound, Peking University Third Hospital, Beijing, China
4
Zheng Q, Liu B, Tong X, Liu J, Wang J, Zhang L. Automated measurement of leg length discrepancy from infancy to adolescence based on cascaded LLDNet and comprehensive assessment. Quant Imaging Med Surg 2023; 13:852-864. PMID: 36819275; PMCID: PMC9929401; DOI: 10.21037/qims-22-282.
Abstract
Background Deep learning (DL) has been suggested for the automated measurement of leg length discrepancy (LLD) on radiographs, which could free up time for pediatric radiologists to focus on value-adding duties. The purpose of our study was to develop a unified DL solution for both automated LLD measurement and comprehensive assessment in a large radiographic dataset covering children at all stages, from infancy to adolescence, and with a wide range of diagnoses. Methods The bilateral femurs and tibias were segmented by a cascaded convolutional neural network (CNN), referred to as LLDNet. Each LLDNet was built from residual blocks to learn richer features, a residual convolutional block attention module (Res-CBAM) to integrate both spatial and channel attention mechanisms, and an attention gate structure to alleviate the semantic gap. Leg length was calculated by localizing anatomical landmarks and computing the distances between them. A comprehensive assessment based on 9 indices (5 similarity indices and 4 stability indices) and the paired Wilcoxon signed-rank test was undertaken to demonstrate the superiority of the cascaded LLDNet for segmenting pediatric legs, through comparison with alternative DL models including ResUNet, TransUNet, and the single LLDNet. Furthermore, the consistency between the ground truth and the DL-calculated measurements of leg length was comprehensively evaluated based on 5 indices and a Bland-Altman analysis. The sensitivity and specificity for LLD >5 mm were also calculated. Results A total of 976 children were identified (0-19 years old; male/female 522/454; 520 children between 0 and 2 years, 456 children older than 2 years; 4 children excluded). Experiments demonstrated that the proposed cascaded LLDNet achieved the best pediatric leg segmentation in both similarity indices (0.5-1% increase; P<0.05) and stability indices (13-47% decrease; P<0.05) compared with the alternative DL methods. A high consistency of LLD measurements between DL and the ground truth was also observed using Bland-Altman analysis [Pearson correlation coefficient (PCC) =0.94; mean bias =0.003 cm]. The sensitivity and specificity for LLD >5 mm were 0.792 and 0.962, respectively, while those for LLD >10 mm were 0.938 and 0.992, respectively. Conclusions The cascaded LLDNet achieved promising pediatric leg segmentation and LLD measurement on radiography. A comprehensive assessment in terms of similarity, stability, and measurement consistency is essential in computer-aided LLD measurement for pediatric patients.
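The Bland-Altman consistency check reported above (mean bias and 95% limits of agreement between DL-derived and ground-truth measurements) can be sketched as follows; the measurement pairs are hypothetical, not the study's data:

```python
# Sketch of a Bland-Altman agreement analysis between ground-truth and
# DL-derived leg-length measurements (hypothetical values, in cm).
from statistics import mean, stdev

ground_truth = [80.1, 75.3, 90.2, 85.7, 78.9, 88.4]
dl_measured = [80.3, 75.1, 90.5, 85.6, 79.2, 88.2]

diffs = [d - g for g, d in zip(ground_truth, dl_measured)]
bias = mean(diffs)                 # systematic offset between the two methods
half_width = 1.96 * stdev(diffs)   # assuming approximately normal differences
lower, upper = bias - half_width, bias + half_width  # 95% limits of agreement
```

A bias near zero with narrow limits of agreement, as the study reports (mean bias 0.003 cm), indicates the two measurement methods can be used interchangeably.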
Affiliation(s)
- Qiang Zheng
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Bin Liu
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Jungang Liu
- Department of Radiology, Xiamen Children’s Hospital, Children’s Hospital of Fudan University at Xiamen, Xiamen, China
- Jian Wang
- Department of Radiology, Xiamen Children’s Hospital, Children’s Hospital of Fudan University at Xiamen, Xiamen, China
- Lin Zhang
- Department of Radiology, Xiamen Children’s Hospital, Children’s Hospital of Fudan University at Xiamen, Xiamen, China
5
Artificial Intelligence in Breast Ultrasound: From Diagnosis to Prognosis - A Rapid Review. Diagnostics (Basel) 2022; 13:58. PMID: 36611350; PMCID: PMC9818181; DOI: 10.3390/diagnostics13010058.
Abstract
BACKGROUND Ultrasound (US) is a fundamental diagnostic tool in breast imaging. However, US remains an operator-dependent examination. Research into and application of artificial intelligence (AI) in breast US are increasing. The aim of this rapid review was to assess the current development of US-based AI in the field of breast cancer (BC). METHODS Two investigators with experience in medical research performed literature searching and data extraction on PubMed. The studies included in this rapid review evaluated the role of AI in BC diagnosis, prognosis, molecular subtypes of BC, axillary lymph node status, and the response to neoadjuvant chemotherapy. The mean values of sensitivity, specificity, and AUC were calculated for the main study categories with a meta-analytical approach. RESULTS A total of 58 main studies, all published after 2017, were included. Only 9/58 studies were prospective (15.5%); 13/58 studies (22.4%) used a machine learning (ML) approach, while the vast majority (77.6%) used deep learning (DL) systems. Most studies were conducted for the diagnosis or classification of BC (55.1%). At present, all the included studies show that AI has excellent performance in BC diagnosis, prognosis, and treatment strategy. CONCLUSIONS US-based AI has great potential and research value in the field of BC diagnosis, treatment, and prognosis. More prospective and multicenter studies are needed to assess its potential impact in breast ultrasound.
6
Hayashida T, Odani E, Kikuchi M, Nagayama A, Seki T, Takahashi M, Futatsugi N, Matsumoto A, Murata T, Watanuki R, Yokoe T, Nakashoji A, Maeda H, Onishi T, Asaga S, Hojo T, Jinno H, Sotome K, Matsui A, Suto A, Imoto S, Kitagawa Y. Establishment of a deep-learning system to diagnose BI-RADS4a or higher using breast ultrasound for clinical application. Cancer Sci 2022; 113:3528-3534. PMID: 35880248; PMCID: PMC9530860; DOI: 10.1111/cas.15511.
Abstract
Although the categorization of ultrasound using the Breast Imaging Reporting and Data System (BI-RADS) has become widespread worldwide, the problem of inter-observer variability remains. To maintain uniformity in diagnostic accuracy, we have developed a system in which artificial intelligence (AI) can distinguish whether a static image obtained using a breast ultrasound represents BI-RADS3 or lower or BI-RADS4a or higher, to determine the medical management that should be performed on a patient whose breast ultrasound shows abnormalities. To establish and validate the AI system, a training dataset consisting of 4028 images containing 5014 lesions and a test dataset consisting of 3166 images containing 3656 lesions were collected and annotated. We selected a setting that maximized the area under the curve (AUC) and minimized the difference in sensitivity and specificity by adjusting the internal parameters of the AI system, achieving an AUC, sensitivity, and specificity of 0.95, 91.2%, and 90.7%, respectively. Furthermore, based on 30 images extracted from the test data, the diagnostic accuracy of 20 clinicians and the AI system was compared, and the AI system was found to be significantly superior to the clinicians (McNemar test, p < 0.001). Although deep-learning methods to categorize benign and malignant tumors using breast ultrasound have been extensively reported, our work represents the first attempt to establish an AI system to classify BI-RADS3 or lower and BI-RADS4a or higher successfully, providing important implications for clinical actions. These results suggest that the AI diagnostic system is sufficient to proceed to the next stage of clinical application.
Affiliation(s)
- Tetsu Hayashida
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Erina Odani
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Masayuki Kikuchi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Aiko Nagayama
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Tomoko Seki
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Maiko Takahashi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Akiko Matsumoto
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Takeshi Murata
- Department of Breast Surgery, National Cancer Center Hospital, Tokyo, Japan
- Rurina Watanuki
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
- Takamichi Yokoe
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
- Ayako Nakashoji
- Department of Breast Surgery, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
- Hinako Maeda
- Department of Breast and Thyroid Surgery, Kitasato University Kitasato Institute Hospital, Tokyo, Japan
- Tatsuya Onishi
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
- Sota Asaga
- Department of Breast Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Takashi Hojo
- Department of Breast Oncology, Saitama Medical University International Medical Center, Saitama, Japan
- Hiromitsu Jinno
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Keiichi Sotome
- Department of Breast and Thyroid Surgery, Kitasato University Kitasato Institute Hospital, Tokyo, Japan
- Akira Matsui
- Department of Breast Surgery, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
- Akihiko Suto
- Department of Breast Surgery, National Cancer Center Hospital, Tokyo, Japan
- Shigeru Imoto
- Department of Breast Surgery, Kyorin University School of Medicine, Tokyo, Japan
- Yuko Kitagawa
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
7
Intelligent Research Based on Deep Learning Recognition Method in Vehicle-Road Cooperative Information Interaction System. Comput Intell Neurosci 2022; 2022:4921211. PMID: 35814543; PMCID: PMC9259250; DOI: 10.1155/2022/4921211.
Abstract
The vehicle-road collaborative information interaction system is an emerging technology that enables the sharing of traffic road information and driving vehicle information between vehicles, and between vehicles and the roadside. It is of positive significance for improving urban transportation systems and promoting urban economic development. This paper studies deep learning recognition methods for the vehicle-road collaborative information interaction system. First, the paper explains the concept of the system and introduces the specific components, functions, and applications of its structure. It then examines three deep learning recognition methods: the background extraction method, the YOLOv2 method, and the DeepSORT method. Finally, simulation experiments compare the deep learning algorithms with traditional algorithms, evaluating their feasibility in the system in three aspects: vehicle target detection, vehicle flow identification, and emergency decision-making. The experimental results show that, for vehicle target detection, the intersection-over-union (intersection ratio) of the deep learning method is 8.66% higher than that of the traditional algorithm, its recall rate is 7% higher, and its vehicle flow recognition accuracy is 1.8% higher. Its early-warning time in emergency decision-making is also shorter than that of the traditional algorithms, demonstrating the superiority and feasibility of deep learning algorithms in the vehicle-road collaborative information interaction system.
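The intersection ratio used above to score vehicle detections is the standard intersection-over-union (IoU) metric for bounding boxes; a minimal sketch (coordinates below are illustrative, not from the study):

```python
# Sketch of intersection-over-union (IoU) for axis-aligned boxes
# given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Hypothetical detection vs. ground-truth box: overlap 25, union 175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how detection and recall rates like those above are computed.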
8
Abstract
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging began several years ago, but scientific interest in the topic has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques in two of the most popular ultrasound imaging fields: medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, is analyzed by classifying studies according to the human organ investigated and the methodology adopted (e.g., detection, segmentation, and/or classification), while for the latter, solutions for the detection and classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the analysis are summarized and discussed.
9
Li M, Wan C. The use of deep learning technology for the detection of optic neuropathy. Quant Imaging Med Surg 2022; 12:2129-2143. PMID: 35284277; PMCID: PMC8899937; DOI: 10.21037/qims-21-728.
Abstract
The emergence of graphics processing units (GPUs), improvements in mathematical models, and the availability of big data have allowed artificial intelligence (AI) to use machine learning and deep learning (DL) technology to achieve robust performance in various fields of medicine. DL systems provide improved capabilities, especially in image recognition and image processing. Recent progress in the curation of AI datasets has stimulated great interest in the development of DL algorithms. Compared with subjective evaluation and other traditional methods, DL algorithms can identify diseases faster and more accurately in diagnostic tests. Medical imaging is of great significance in the clinical diagnosis and individualized treatment of ophthalmic diseases. Based on morphological datasets of millions of data points, various image-related diagnostic techniques can now impart high-resolution information on anatomical and functional changes, thereby providing unprecedented insights in ophthalmic clinical practice. As ophthalmology relies heavily on imaging examinations, it is one of the first medical fields to apply DL algorithms in clinical practice. Such algorithms can assist in the analysis of the large amounts of data acquired from auxiliary imaging examinations. In recent years, rapid advancements in imaging technology have facilitated the application of DL in the automatic identification and classification of pathologies characteristic of ophthalmic diseases, thereby providing high-quality diagnostic information. This paper reviews the origins, development, and application of DL technology. The technical and clinical problems associated with building DL systems to meet clinical needs and the potential challenges of clinical application are discussed, especially in relation to the field of optic nerve diseases.
Affiliation(s)
- Mei Li
- Department of Ophthalmology, Yanan People’s Hospital, Yanan, China
- Chao Wan
- Department of Ophthalmology, the First Hospital of China Medical University, Shenyang, China
10
Wang B, Wan Z, Li C, Zhang M, Shi Y, Miao X, Jian Y, Luo Y, Yao J, Tian W. Identification of benign and malignant thyroid nodules based on dynamic AI ultrasound intelligent auxiliary diagnosis system. Front Endocrinol (Lausanne) 2022; 13:1018321. PMID: 36237194; PMCID: PMC9551607; DOI: 10.3389/fendo.2022.1018321.
Abstract
BACKGROUND The dynamic artificial intelligence (AI) ultrasound intelligent auxiliary diagnosis system (dynamic AI) is a joint application of AI technology and medical imaging data that can perform real-time, synchronous dynamic analysis of nodules. The aim of this study was to investigate the value of dynamic AI in differentiating benign and malignant thyroid nodules and its significance for guiding treatment strategies. METHODS The data of 607 patients with 1007 thyroid nodules who underwent surgical treatment were reviewed and analyzed retrospectively. Dynamic AI was used to differentiate benign and malignant nodules. The diagnostic efficacy of dynamic AI was evaluated by comparing its results with preoperative fine needle aspiration cytology (FNAC) and postoperative pathology for nodules of different sizes and properties in patients of different sexes and ages. RESULTS The sensitivity, specificity, and accuracy of dynamic AI in the diagnosis of thyroid nodules were 92.21%, 83.20%, and 89.97%, respectively, in high agreement with the postoperative pathological results (kappa = 0.737, p < 0.001). There was no statistically significant difference in accuracy across patient age, sex, or nodule size, indicating good stability. The accuracy of dynamic AI for malignant nodules (92.21%) was significantly higher than that for benign nodules (83.20%) (p < 0.001). The specificity and positive predictive value of dynamic AI were significantly higher, and its misdiagnosis rate significantly lower, than those of preoperative ultrasound ACR TI-RADS (p < 0.001). The accuracy of dynamic AI for nodules with diameter ≤ 0.50 cm was significantly higher than that of preoperative ultrasound (p = 0.044). The sensitivity (96.58%) and accuracy (94.06%) of dynamic AI were similar to those of FNAC. CONCLUSIONS Dynamic AI has high diagnostic value for benign and malignant thyroid nodules and can effectively assist surgeons in formulating scientific, reasonable, and individualized diagnosis and treatment strategies for patients.
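The diagnostic metrics and kappa agreement reported above follow directly from a 2x2 confusion table; the counts below are hypothetical, chosen only to be of a scale similar to the study's 1007 nodules:

```python
# Diagnostic metrics and Cohen's kappa from a hypothetical 2x2 table
# (dynamic AI prediction vs. postoperative pathology).
tp, fn = 640, 54   # malignant nodules: detected / missed
tn, fp = 260, 53   # benign nodules: correctly ruled out / falsely flagged
n = tp + fn + tn + fp

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / n

# Cohen's kappa compares observed agreement with chance agreement:
# p_e sums, for each class, P(true class) * P(predicted class).
p_o = accuracy
p_e = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
kappa = (p_o - p_e) / (1 - p_e)
```

A kappa in the 0.61-0.80 range is conventionally read as substantial agreement, which is how a value such as the study's 0.737 is interpreted.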
Affiliation(s)
- Bing Wang
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Zheng Wan
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Chen Li
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Mingbo Zhang
- Department of Ultrasound, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- YiLei Shi
- MedAI Technology (Wuxi) Co. Ltd, Wuxi, China
- Xin Miao
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Yanbing Jian
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Yukun Luo
- Department of Ultrasound, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Jing Yao
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Correspondence: Jing Yao; Wen Tian
- Wen Tian
- Senior Department of General Surgery, The First Medical Center of Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Correspondence: Jing Yao; Wen Tian