1
Sabeghi P, Kinkar KK, Castaneda GDR, Eibschutz LS, Fields BKK, Varghese BA, Patel DB, Gholamrezanezhad A. Artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors. Front Radiol 2024; 4:1332535. [PMID: 39301168] [PMCID: PMC11410694] [DOI: 10.3389/fradi.2024.1332535] [Received: 11/03/2023] [Accepted: 08/01/2024] [Indexed: 09/22/2024]
Abstract
Recent advancements in artificial intelligence (AI) and machine learning offer numerous opportunities in musculoskeletal radiology to potentially bolster diagnostic accuracy, workflow efficiency, and predictive modeling. AI tools can assist radiologists in many tasks, ranging from image segmentation to lesion detection and beyond. In bone and soft tissue tumor imaging, radiomics and deep learning show promise for malignancy stratification, grading, prognostication, and treatment planning. However, challenges such as standardization, data integration, and ethical concerns regarding patient data need to be addressed ahead of clinical translation. In the realm of musculoskeletal oncology, AI also faces obstacles to robust algorithm development because of limited disease incidence. While many initiatives aim to develop multitasking AI systems, multidisciplinary collaboration is crucial for successful AI integration into clinical practice. Robust approaches that address these challenges and embody ethical practices are warranted to fully realize AI's potential for enhancing diagnostic accuracy and advancing patient care.
Affiliation(s)
- Paniz Sabeghi
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Ketki K Kinkar
- Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Liesl S Eibschutz
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Brandon K K Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Bino A Varghese
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Dakshesh B Patel
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Ali Gholamrezanezhad
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
2
Hasei J, Nakahara R, Otsuka Y, Nakamura Y, Hironari T, Kahara N, Miwa S, Ohshika S, Nishimura S, Ikuta K, Osaki S, Yoshida A, Fujiwara T, Nakata E, Kunisada T, Ozaki T. High-quality expert annotations enhance artificial intelligence model accuracy for osteosarcoma X-ray diagnosis. Cancer Sci 2024. [PMID: 39223070] [DOI: 10.1111/cas.16330] [Received: 05/16/2024] [Revised: 08/08/2024] [Accepted: 08/18/2024] [Indexed: 09/04/2024]
Abstract
Primary malignant bone tumors, such as osteosarcoma, significantly affect the pediatric and young adult populations, necessitating early diagnosis for effective treatment. This study developed a high-performance artificial intelligence (AI) model to detect osteosarcoma from X-ray images using highly accurate annotated data to improve diagnostic accuracy at initial consultations. Traditional models trained on unannotated data have shown limited success, with sensitivities of approximately 60%-70%. In contrast, our model used a data-centric approach with annotations from an experienced oncologist, achieving a sensitivity of 95.52%, specificity of 96.21%, and an area under the curve of 0.989. The model was trained using 468 X-ray images from 31 osteosarcoma cases and 378 normal knee images with a strategy to maximize diversity in the training and validation sets. It was evaluated using an independent dataset of 268 osteosarcoma and 554 normal knee images to ensure generalizability. By applying the U-net architecture and advanced image processing techniques such as renormalization and affine transformations, our AI model outperforms existing models, reducing missed diagnoses and enhancing patient outcomes by facilitating earlier treatment. This study highlights the importance of high-quality training data and advocates a shift towards data-centric AI development in medical imaging. These insights can be extended to other rare cancers and diseases, underscoring the potential of AI in transforming diagnostic processes in oncology. The integration of this AI model into clinical workflows could support physicians in early osteosarcoma detection, thereby improving diagnostic accuracy and patient care.
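The sensitivity and specificity reported above come directly from test-set confusion counts. As a minimal sketch, the per-class counts below (256 of 268 osteosarcoma images flagged, 533 of 554 normal images cleared) are back-calculated assumptions consistent with the reported 95.52% and 96.21%, not figures taken from the paper itself:

```python
# Sensitivity/specificity from confusion counts on the independent test set
# (268 osteosarcoma and 554 normal knee images, per the abstract).
# The counts 256 and 533 are illustrative back-calculations.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

sens = sensitivity(tp=256, fn=268 - 256)  # assumed: 256/268 detected
spec = specificity(tn=533, fp=554 - 533)  # assumed: 533/554 correctly cleared
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```

Note that sensitivity is computed only over the tumor cases and specificity only over the normals, which is why both can be high even with an imbalanced test set.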
Affiliation(s)
- Joe Hasei
- Department of Medical Information and Assistive Technology Development, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Ryuichi Nakahara
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Yujiro Otsuka
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Milliman, Inc., Tokyo, Japan
- Plusman LCC, Tokyo, Japan
- Yusuke Nakamura
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Tamiya Hironari
- Department of Musculoskeletal Oncology Service, Osaka International Cancer Institute, Osaka, Japan
- Naoaki Kahara
- Department of Orthopedic Surgery, Mizushima Central Hospital, Okayama, Japan
- Shinji Miwa
- Department of Orthopedic Surgery, Kanazawa University Graduate School of Medical Sciences, Kanazawa, Japan
- Shusa Ohshika
- Department of Orthopedic Surgery, Hirosaki University Graduate School of Medicine, Aomori, Japan
- Shunji Nishimura
- Department of Orthopedic Surgery, Kindai University Hospital, Osaka, Japan
- Kunihiro Ikuta
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Shuhei Osaki
- Department of Musculoskeletal Oncology, National Cancer Center Hospital, Tokyo, Japan
- Aki Yoshida
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Tomohiro Fujiwara
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Eiji Nakata
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Toshiyuki Kunisada
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- Toshifumi Ozaki
- Department of Orthopedic Surgery, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
3
Crim J. Bone radiographs: sometimes overlooked, often difficult to read, and still important. Skeletal Radiol 2024; 53:1687-1698. [PMID: 37914896] [DOI: 10.1007/s00256-023-04498-y] [Received: 09/28/2023] [Revised: 10/21/2023] [Accepted: 10/22/2023] [Indexed: 11/03/2023]
Affiliation(s)
- Julia Crim
- University of Missouri at Columbia, Columbia, MO, USA.
4
Aydin Şimşek Ş, Aydin A, Say F, Cengiz T, Özcan C, Öztürk M, Okay E, Özkan K. Enhanced enchondroma detection from x-ray images using deep learning: A step towards accurate and cost-effective diagnosis. J Orthop Res 2024. [PMID: 39007705] [DOI: 10.1002/jor.25938] [Received: 05/05/2024] [Revised: 06/27/2024] [Accepted: 07/03/2024] [Indexed: 07/16/2024]
Abstract
This study investigates the automated detection of enchondromas, benign cartilage tumors, from x-ray images using deep learning techniques. Enchondromas pose diagnostic challenges due to their potential for malignant transformation and their overlapping radiographic features with other conditions. Leveraging a dataset comprising 1645 x-ray images from 1173 patients, a deep learning model implemented with Detectron2 achieved an accuracy of 0.9899 in detecting enchondromas. The study employed rigorous validation processes and compared its findings with the existing literature, highlighting the superior performance of the deep learning approach. The results indicate the potential of machine learning to improve diagnostic accuracy and reduce healthcare costs associated with advanced imaging modalities. The study underscores the significance of early and accurate detection of enchondromas for effective patient management and suggests avenues for further research in musculoskeletal tumor detection.
Affiliation(s)
- Şafak Aydin Şimşek
- Department of Orthopedics and Traumatology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey
- Ayhan Aydin
- Department of Computer Engineering, Karabuk University, Karabük, Turkey
- Ferhat Say
- Department of Orthopedics and Traumatology, Faculty of Medicine, Ondokuz Mayis University, Samsun, Turkey
- Tolgahan Cengiz
- Clinic of Orthopedics and Traumatology, Inebolu State Hospital, Kastamonu, Turkey
- Caner Özcan
- Department of Software Engineering, Karabuk University, Karabük, Turkey
- Mesut Öztürk
- Department of Radiology, Faculty of Medicine, Samsun University, Samsun, Turkey
- Erhan Okay
- Department of Orthopedics and Traumatology, Istanbul Medeniyet University Goztepe Education and Research Hospital, İstanbul, Turkey
- Korhan Özkan
- Department of Orthopedics and Traumatology, Acıbadem Atasehir Hospital, Istanbul, Turkey
5
Xie Z, Zhao H, Song L, Ye Q, Zhong L, Li S, Zhang R, Wang M, Chen X, Lu Z, Yang W, Zhao Y. A radiograph-based deep learning model improves radiologists' performance for classification of histological types of primary bone tumors: A multicenter study. Eur J Radiol 2024; 176:111496. [PMID: 38733705] [DOI: 10.1016/j.ejrad.2024.111496] [Received: 09/25/2023] [Revised: 04/03/2024] [Accepted: 05/02/2024] [Indexed: 05/13/2024]
Abstract
PURPOSE: To develop a deep learning (DL) model for classifying histological types of primary bone tumors (PBTs) using radiographs and evaluate its clinical utility in assisting radiologists. METHODS: This retrospective study included 878 patients with pathologically confirmed PBTs from two centers (638, 77, 80, and 83 for the training, validation, internal test, and external test sets, respectively). We classified PBTs into five categories by histological type: chondrogenic tumors, osteogenic tumors, osteoclastic giant cell-rich tumors, other mesenchymal tumors of bone, or other histological types of PBTs. A DL model combining radiographs and clinical features, based on EfficientNet-B3, was developed for five-category classification. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate model performance. The clinical utility of the model was evaluated in an observer study with four radiologists. RESULTS: The combined model achieved a macro average AUC of 0.904/0.873, with an accuracy of 67.5%/68.7%, a macro average sensitivity of 66.9%/57.2%, and a macro average specificity of 92.1%/91.6% on the internal/external test set, respectively. Model-assisted analysis improved accuracy, interpretation time, and confidence for junior (50.6% vs. 72.3%, 53.07 s vs. 18.55 s, and 3.10 vs. 3.73 on a 5-point Likert scale [P < 0.05 for each], respectively) and senior radiologists (68.7% vs. 75.3%, 32.50 s vs. 21.42 s, and 4.19 vs. 4.37 [P < 0.05 for each], respectively). CONCLUSION: The combined DL model effectively classified histological types of PBTs and assisted radiologists in achieving better classification results than their independent visual assessment.
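The macro-averaged metrics above weight all five tumor categories equally regardless of class size. A minimal sketch of that averaging from a one-vs-rest confusion matrix; the 5x5 matrix below is illustrative, not data from the study:

```python
# Macro-averaged sensitivity and specificity for multi-class classification:
# compute each class's one-vs-rest metric, then take the unweighted mean.

def macro_sensitivity(conf):
    """Mean over classes of TP_k / (true class-k total), i.e., recall."""
    per_class = [conf[k][k] / sum(conf[k]) for k in range(len(conf))]
    return sum(per_class) / len(per_class)

def macro_specificity(conf):
    """Mean over classes of TN_k / (TN_k + FP_k), one-vs-rest."""
    n = len(conf)
    total = sum(sum(row) for row in conf)
    per_class = []
    for k in range(n):
        actual_k = sum(conf[k])                    # true class-k cases
        predicted_k = sum(row[k] for row in conf)  # class-k predictions
        tp = conf[k][k]
        tn = total - actual_k - predicted_k + tp   # neither true nor predicted k
        fp = predicted_k - tp
        per_class.append(tn / (tn + fp))
    return sum(per_class) / n

conf = [  # rows = true class, columns = predicted class (made-up counts)
    [12, 1, 1, 0, 1],
    [2, 10, 1, 1, 1],
    [1, 1, 9, 2, 2],
    [0, 1, 2, 11, 1],
    [1, 1, 1, 1, 12],
]
print(macro_sensitivity(conf), macro_specificity(conf))
```

Because every misclassified case counts against only one class's sensitivity but inflates another class's false positives, macro specificity typically sits well above macro sensitivity, as in the abstract's 92.1% vs. 66.9%.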
Affiliation(s)
- Zhuoyao Xie
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Huanmiao Zhao
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong, China.
- Liwen Song
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Qiang Ye
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Liming Zhong
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong, China.
- Shisi Li
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Rui Zhang
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Menghong Wang
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Xiaqing Chen
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Zixiao Lu
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
- Wei Yang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong, China.
- Yinghua Zhao
- Department of Radiology, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics, Guangdong Province), 183 Zhongshan Da Dao Xi, Guangzhou, Guangdong, 510630, China.
6
Zhong J. Deep learning-based diagnostic models for bone lesions: is current research ready for clinical translation? Eur Radiol 2024; 34:4284-4286. [PMID: 38189983] [PMCID: PMC11213795] [DOI: 10.1007/s00330-023-10555-w] [Received: 11/05/2023] [Revised: 11/05/2023] [Accepted: 11/08/2023] [Indexed: 01/09/2024]
Affiliation(s)
- Jingyu Zhong
- Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China.
7
Xu C, Liu X, Bao B, Liu C, Li R, Yang T, Wu Y, Zhang Y, Tang J. Two-Stage Deep Learning Model for Diagnosis of Lumbar Spondylolisthesis Based on Lateral X-Ray Images. World Neurosurg 2024; 186:e652-e661. [PMID: 38608811] [DOI: 10.1016/j.wneu.2024.04.025] [Received: 04/03/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024]
Abstract
BACKGROUND: Diagnosing early lumbar spondylolisthesis is challenging for many doctors because of the lack of obvious symptoms. Using deep learning (DL) models to improve the accuracy of X-ray diagnosis can effectively reduce missed and misdiagnoses in clinical practice. This study aimed to use a two-stage DL model, the Res-SE-Net model with the YOLOv8 algorithm, to enable efficient and reliable diagnosis of early lumbar spondylolisthesis from lateral X-ray images. METHODS: A total of 2424 lumbar lateral radiographs of patients treated at Beijing Tongren Hospital between January 2021 and September 2023 were obtained. The data were labeled and cross-checked by 3 orthopedic surgeons, reshuffled in random order, and divided into training, validation, and test sets in a ratio of 7:2:1. We trained 2 models for automatic detection of spondylolisthesis: a YOLOv8 model to detect the position of lumbar spondylolisthesis, and the Res-SE-Net classification method to classify the clipped region and determine whether it shows lumbar spondylolisthesis. Model performance was evaluated on the test set and on an external dataset from Beijing Haidian Hospital. Finally, we compared the model's results with professional clinicians' evaluations. RESULTS: The model achieved promising results, with a diagnostic accuracy of 92.3%, precision of 93.5%, and recall of 93.1% for spondylolisthesis detection on the test set; the area under the curve (AUC) was 0.934. CONCLUSIONS: Our two-stage deep learning model provides doctors with a reference for better diagnosis and treatment of early lumbar spondylolisthesis.
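The two-stage design above (a detector that localizes a region, then a classifier that labels the crop) can be sketched as a simple pipeline. The `detect_regions` and `classify_patch` functions here are hypothetical stand-ins for the YOLOv8 detector and Res-SE-Net classifier, not the authors' code:

```python
# Sketch of a detect-then-classify pipeline on a toy "radiograph"
# represented as a 2D list of pixel intensities in [0, 1].

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates

def detect_regions(image) -> List[Box]:
    """Stage 1 stand-in: return candidate regions of interest."""
    h, w = len(image), len(image[0])
    return [(0, 0, w // 2, h // 2)]  # dummy single box

def classify_patch(patch) -> str:
    """Stage 2 stand-in: label a cropped region by mean intensity."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return "spondylolisthesis" if mean > 0.5 else "normal"

def crop(image, box: Box):
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]

def two_stage_diagnose(image) -> List[str]:
    """Run detection, crop each box, classify each crop."""
    return [classify_patch(crop(image, box)) for box in detect_regions(image)]

image = [[0.9] * 8 for _ in range(8)]  # toy input
print(two_stage_diagnose(image))
```

Splitting localization from classification lets each stage train on a simpler task, which is one common motivation for such two-stage designs.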
Affiliation(s)
- Chunyang Xu
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingyu Liu
- School of Life Sciences, Tsinghua University, Beijing, China; Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Beixi Bao
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chang Liu
- Department of Minimally Invasive Spine Surgery, Beijing Haidian Hospital, Peking University, China
- Runchao Li
- Longwood Valley Medical Technology Co Ltd, Beijing, China
- Tianci Yang
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yukan Wu
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yiling Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Jiaguang Tang
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China.
8
Wang Y, Yang C, Yang Q, Zhong R, Wang K, Shen H. Diagnosis of cervical lymphoma using a YOLO-v7-based model with transfer learning. Sci Rep 2024; 14:11073. [PMID: 38744888] [PMCID: PMC11094110] [DOI: 10.1038/s41598-024-61955-x] [Received: 11/20/2023] [Accepted: 05/12/2024] [Indexed: 05/16/2024]
Abstract
To investigate the ability of an auxiliary diagnostic model based on YOLO-v7 to classify cervical lymphadenopathy images and compare its performance against qualitative visual evaluation by experienced radiologists. Three types of lymph nodes were sampled randomly but not uniformly. The dataset was randomly divided into training, validation, and testing sets. The model was constructed with PyTorch; it was trained on the training set and its weighting parameters were tuned on the validation set. Diagnostic performance was compared with that of the radiologists on the testing set. The mAP of the model was 96.4% at the 50% intersection-over-union threshold. Its accuracy values were 0.962 for benign lymph nodes, 0.982 for lymphomas, and 0.960 for metastatic lymph nodes; its precision values were 0.928, 0.975, and 0.927, respectively. The radiologists' accuracy values were 0.659 for benign lymph nodes, 0.836 for lymphomas, and 0.580 for metastatic lymph nodes; their precision values were 0.478, 0.329, and 0.596, respectively. The model effectively classifies lymphadenopathies from ultrasound images and outperforms qualitative visual evaluation by experienced radiologists in differential diagnosis.
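The mAP figure above is computed at a 50% intersection-over-union (IoU) threshold: a predicted box counts as a true positive only if its IoU with a ground-truth box is at least 0.5. A minimal IoU implementation for axis-aligned boxes; the sample boxes are illustrative, not from the study:

```python
# Intersection-over-union of two (x1, y1, x2, y2) boxes, the overlap
# criterion behind mAP@0.5 in object detection.

def iou(a, b):
    """Return intersection area divided by union area of boxes a and b."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
print(iou(pred, truth))  # overlap 50 / union 150
```

Here the boxes overlap by half their width, giving IoU = 1/3, which would be rejected at the 0.5 threshold.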
Affiliation(s)
- Yuegui Wang
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China
- Caiyun Yang
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China
- Qiuting Yang
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China
- Rong Zhong
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China
- Kangjian Wang
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China
- Haolin Shen
- Department of Ultrasound, Zhangzhou Affiliated Hospital to Fujian Medical University, No. 59 North Shengli Road, Zhangzhou, 363000, Fujian, China.
9
Salehi MA, Mohammadi S, Harandi H, Zakavi SS, Jahanshahi A, Shahrabi Farahani M, Wu JS. Diagnostic Performance of Artificial Intelligence in Detection of Primary Malignant Bone Tumors: a Meta-Analysis. J Imaging Inform Med 2024; 37:766-777. [PMID: 38343243] [PMCID: PMC11031503] [DOI: 10.1007/s10278-023-00945-3] [Received: 08/01/2023] [Revised: 10/04/2023] [Accepted: 10/12/2023] [Indexed: 04/20/2024]
Abstract
We aim to conduct a meta-analysis of studies that evaluated the diagnostic performance of artificial intelligence (AI) algorithms in the detection of primary bone tumors, distinguishing them from other bone lesions, and comparing them with clinician assessment. A systematic search was conducted using a combination of keywords related to bone tumors and AI. After extracting contingency tables from all included studies, we performed a meta-analysis using a random-effects model to determine the pooled sensitivity and specificity, accompanied by their respective 95% confidence intervals (CI). Quality was assessed using a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) checklist and the Prediction Model Study Risk of Bias Assessment Tool (PROBAST). The pooled sensitivities for AI algorithms and clinicians on internal validation test sets for detecting bone neoplasms were 84% (95% CI: 79-88) and 76% (95% CI: 64-85), and the pooled specificities were 86% (95% CI: 81-90) and 64% (95% CI: 55-72), respectively. At external validation, the pooled sensitivity and specificity for AI algorithms were 84% (95% CI: 75-90) and 91% (95% CI: 83-96), respectively; for clinicians, the corresponding figures were 85% (95% CI: 73-92) and 94% (95% CI: 89-97). The sensitivity and specificity for clinicians with AI assistance were 95% (95% CI: 86-98) and 57% (95% CI: 48-66). Caution is needed when interpreting these findings due to potential limitations. Further research is needed to bridge this gap in scientific understanding and promote effective implementation in medical practice.
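The pooling step above combines per-study sensitivities extracted from contingency tables. As a simplified illustration, the sketch below pools logit-transformed proportions with inverse-variance (fixed-effect) weights rather than the full random-effects model the authors used; the per-study counts are made up for the example:

```python
# Inverse-variance pooling of proportions on the logit scale, a simplified
# (fixed-effect) version of meta-analytic pooling of sensitivities.

import math

def pool_proportions(counts):
    """counts: list of (events, total); returns the pooled proportion.

    Each proportion is logit-transformed, weighted by the inverse of its
    approximate variance (1/events + 1/non-events), then back-transformed.
    """
    num = den = 0.0
    for events, total in counts:
        p = events / total
        logit = math.log(p / (1 - p))
        weight = 1.0 / (1.0 / events + 1.0 / (total - events))
        num += weight * logit
        den += weight
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# hypothetical per-study (true positives, condition-positive cases)
pooled_sens = pool_proportions([(85, 100), (160, 200)])
print(f"pooled sensitivity ~ {pooled_sens:.3f}")
```

A random-effects model would additionally estimate between-study variance and add it to each study's weight denominator, widening the interval when studies disagree.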
Affiliation(s)
- Mohammad Amin Salehi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Soheil Mohammadi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran.
- Hamid Harandi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Seyed Sina Zakavi
- School of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jahanshahi
- School of Medicine, Guilan University of Medical Sciences, Rasht, Iran
- Jim S Wu
- Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
10
Tassoker M, Öziç MÜ, Yuce F. Performance evaluation of a deep learning model for automatic detection and localization of idiopathic osteosclerosis on dental panoramic radiographs. Sci Rep 2024; 14:4437. [PMID: 38396289] [PMCID: PMC10891049] [DOI: 10.1038/s41598-024-55109-2] [Received: 08/17/2023] [Accepted: 02/20/2024] [Indexed: 02/25/2024]
Abstract
Idiopathic osteosclerosis (IO) lesions are focal radiopacities of unknown etiology observed in the jaws. These radiopacities are detected incidentally on dental panoramic radiographs taken for other reasons. In this study, we investigated the performance of a deep learning model in detecting IO using a small dataset of dental panoramic radiographs with varying contrasts and features. Two radiologists collected 175 IO-diagnosed dental panoramic radiographs from the dental school database. The dataset size is limited owing to the rarity of IO, whose incidence in the Turkish population has been reported as 2.7%. To overcome this limitation, data augmentation was performed by horizontally flipping the images, resulting in an augmented dataset of 350 panoramic radiographs. The images were annotated by two radiologists and divided into approximately 70% for training (245 radiographs), 15% for validation (53 radiographs), and 15% for testing (52 radiographs). The study, employing the YOLOv5 deep learning model, evaluated the results using precision, recall, F1-score, mAP (mean average precision), and average inference time. Training and testing were conducted on a Google Colab Pro virtual machine. On the test set, the model achieved a precision of 0.981, a recall of 0.929, an F1-score of 0.954, and an average inference time of 25.4 ms. Although radiographs diagnosed with IO form a small dataset and exhibit different contrasts and features, the deep learning model provided high detection speed, accuracy, and localization performance. The automatic identification of IO lesions using artificial intelligence algorithms, with high success rates, can contribute to the clinical workflow of dentists by preventing unnecessary biopsy procedures.
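The reported F1-score is the harmonic mean of precision and recall, F1 = 2PR / (P + R), and the abstract's numbers are internally consistent. A quick check using the reported precision and recall:

```python
# Verify that the reported F1 of 0.954 follows from the reported
# precision (0.981) and recall (0.929).

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(precision=0.981, recall=0.929)
print(round(f1, 3))  # → 0.954
```

The harmonic mean penalizes imbalance between precision and recall, so F1 always lies at or below the arithmetic mean of the two.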
Affiliation(s)
- Melek Tassoker
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Necmettin Erbakan University, Bağlarbaşı Street, 42090, Meram, Konya, Turkey.
- Muhammet Üsame Öziç
- Faculty of Technology, Department of Biomedical Engineering, Pamukkale University, Denizli, Turkey
- Fatma Yuce
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Istanbul Okan University, Istanbul, Turkey
11
Shao J, Lin H, Ding L, Li B, Xu D, Sun Y, Guan T, Dai H, Liu R, Deng D, Huang B, Feng S, Diao X, Gao Z. Deep learning for differentiation of osteolytic osteosarcoma and giant cell tumor around the knee joint on radiographs: a multicenter study. Insights Imaging 2024; 15:35. [PMID: 38321327] [PMCID: PMC10847082] [DOI: 10.1186/s13244-024-01610-1] [Received: 09/27/2023] [Accepted: 12/21/2023] [Indexed: 02/08/2024]
Abstract
OBJECTIVES: To develop a deep learning (DL) model for differentiating between osteolytic osteosarcoma (OS) and giant cell tumor (GCT) on radiographs. METHODS: Patients with osteolytic OS and GCT proven by postoperative pathology were retrospectively recruited from four centers (center A for training and internal testing; centers B, C, and D for external testing). Sixteen radiologists with different levels of experience in musculoskeletal imaging diagnosis were divided into three groups and participated with or without the DL model's assistance. The DL model was generated using the EfficientNet-B6 architecture, and a clinical model was trained using clinical variables. The performance of the various models was compared using McNemar's test. RESULTS: Three hundred thirty-three patients were included (mean age, 27 years ± 12 [SD]; 186 men). Compared to the clinical model, the DL model achieved a higher area under the curve (AUC) in both the internal (0.97 vs. 0.77, p = 0.008) and external test sets (0.97 vs. 0.64, p < 0.001). In the total test set (internal plus external), the DL model achieved higher accuracy than the junior expert committee (93.1% vs. 72.4%; p < 0.001) and was comparable to the intermediate and senior expert committees (93.1% vs. 88.8%, p = 0.25; vs. 87.1%, p = 0.35). With DL model assistance, the accuracy of the junior expert committee improved from 72.4% to 91.4% (p = 0.051). CONCLUSION: The DL model accurately distinguished osteolytic OS and GCT with better performance than the junior radiologists, whose diagnostic performance significantly improved with the aid of the model, indicating potential for the differential diagnosis of these two bone tumors on radiographs. CRITICAL RELEVANCE STATEMENT: The deep learning model can accurately distinguish osteolytic osteosarcoma and giant cell tumor on radiographs, which may help radiologists improve their diagnostic accuracy for the two tumor types.
KEY POINTS
• The DL model shows robust performance in distinguishing osteolytic osteosarcoma and giant cell tumor.
• The diagnostic performance of the DL model is better than that of junior radiologists.
• The DL model shows potential for differentiating osteolytic osteosarcoma and giant cell tumor.
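The model comparisons in this study rely on McNemar's test, which compares two classifiers on the same cases using only the discordant pairs. A minimal sketch with the continuity-corrected chi-square statistic; the counts below are illustrative only:

```python
# McNemar's test statistic on paired predictions: b = cases classifier A
# got right and B got wrong, c = the reverse. Compare the statistic to the
# chi-square critical value with 1 degree of freedom (3.84 at p = 0.05).

def mcnemar(b: int, c: int) -> float:
    """Continuity-corrected McNemar chi-square statistic."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar(b=25, c=8)
print(stat, "significant at 0.05" if stat > 3.84 else "not significant")
```

Because concordant pairs cancel out, the test is sensitive only to how often the two classifiers disagree and in which direction, which suits paired designs like reading the same radiographs with and without model assistance.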
Affiliation(s)
- Jingjing Shao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Hongxin Lin
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Lei Ding
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Bing Li
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Danyang Xu
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yang Sun
- Department of Radiology, Foshan Hospital of Traditional Chinese Medicine, Foshan, Guangdong, China
- Tianming Guan
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China
- Haiyang Dai
- Department of Radiology, People's Hospital of Huizhou City Center, Huizhou, Guangdong, China
- Ruihao Liu
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Demao Deng
- Department of Radiology, The People's Hospital of Guangxi Zhuang Autonomous Region, Guangxi Academy of Medical Science, Nanning, Guangxi, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
- Shiting Feng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China.
- Xianfen Diao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Medicine, Shenzhen University, Shenzhen, Guangdong, China.
- Zhenhua Gao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China.
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China.