1. Debs P, Ahlawat S, Fayad LM. Bone tumors: state-of-the-art imaging. Skeletal Radiol 2024;53:1783-1798. [PMID: 38409548] [DOI: 10.1007/s00256-024-04621-7]
Abstract
Imaging plays a central role in the management of patients with bone tumors. A number of imaging modalities are available, each with unique applications that make it advantageous for particular clinical purposes. Coupled with detailed clinical assessment, radiological imaging can help clinicians reach a proper diagnosis, determine appropriate management, evaluate response to treatment, and monitor for tumor recurrence. Although radiography remains the initial imaging test of choice for a patient with a suspected bone tumor, technological innovations in recent decades have expanded the role of other modalities for assessing bone tumors, including computed tomography, magnetic resonance imaging, scintigraphy, and hybrid techniques that combine two existing modalities, providing clinicians with diverse tools for bone tumor imaging. Determining the most suitable modality for a particular application requires familiarity with the modality in question, its advancements, and its limitations. This review highlights the imaging techniques currently available, emphasizes the latest developments, and offers a framework to help guide the imaging of patients with bone tumors.
Affiliation(s)
- Patrick Debs
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins Medical Institutions, 600 North Wolfe Street, Baltimore, MD, 21287, USA
- Shivani Ahlawat
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins Medical Institutions, 600 North Wolfe Street, Baltimore, MD, 21287, USA
- Laura M Fayad
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins Medical Institutions, 600 North Wolfe Street, Baltimore, MD, 21287, USA.
- Division of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University, 601 North Caroline Street, JHOC 3014, Baltimore, MD, 21287, USA.
2. Xu D, Li B, Liu W, Wei D, Long X, Huang T, Lin H, Cao K, Zhong S, Shao J, Huang B, Diao XF, Gao Z. Deep learning-based detection of primary bone tumors around the knee joint on radiographs: a multicenter study. Quant Imaging Med Surg 2024;14:5420-5433. [PMID: 39144039] [PMCID: PMC11320541] [DOI: 10.21037/qims-23-1743]
Abstract
Background Primary bone tumors most often arise in the bones around the knee joint, yet their detection on radiographs can be challenging for inexperienced or junior radiologists. This study aimed to develop a deep learning (DL) model for the detection of primary bone tumors around the knee joint on radiographs. Methods From four tertiary referral centers, we recruited 687 patients diagnosed with bone tumors (including osteosarcoma, chondrosarcoma, giant cell tumor of bone, bone cyst, enchondroma, fibrous dysplasia, etc.; 417 males, 270 females; mean age 22.8±13.2 years) by postoperative pathology or clinical imaging/follow-up, and 1,988 participants with normal bone radiographs (1,152 males, 836 females; mean age 27.9±12.2 years). The dataset was split into a training set for model development and internal and external test sets for model validation. The trained model first localized bone tumor lesions and then classified patients as tumor or normal. Receiver operating characteristic (ROC) curves and Cohen's kappa coefficient were used to evaluate detection performance, and the model was compared with two junior radiologists in the internal test set using permutation tests. Results The DL model correctly localized 94.5% and 92.9% of bone tumors on radiographs in the internal and external test sets, respectively. For detection of tumor patients, the model achieved an accuracy of 0.964/0.920 and an area under the ROC curve (AUC) of 0.981/0.990 in the internal/external test sets, respectively. Cohen's kappa coefficient of the model in the internal test set was significantly higher than that of the two junior radiologists, who had 4 and 3 years of experience in musculoskeletal radiology (model vs. reader A, 0.927 vs. 0.777, P<0.001; model vs. reader B, 0.927 vs. 0.841, P=0.033). Conclusions The DL model achieved good performance in detecting primary bone tumors around the knee joint and outperformed the junior radiologists, indicating its potential for bone tumor detection on radiographs.
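The reader comparison above hinges on Cohen's kappa, which measures agreement beyond chance between a set of calls and the ground truth. A minimal sketch of the statistic; the toy labels below are hypothetical, not the study's data:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sets beyond chance."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)  # observed agreement
    classes = np.unique(np.concatenate([y_true, y_pred]))
    # expected chance agreement from the two sets' marginal label frequencies
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return float((po - pe) / (1 - pe))

truth  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # toy ground truth, 10 radiographs
reader = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]  # toy reader calls
print(round(cohens_kappa(truth, reader), 3))  # → 0.583
```

With 80% raw agreement but imbalanced classes, kappa drops to about 0.58, which is why the study reports kappa rather than raw accuracy for the reader comparison.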
Affiliation(s)
- Danyang Xu
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bing Li
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Weixiang Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Medical School, Shenzhen University, Shenzhen, China
- Dan Wei
- Department of Radiology, Huiya Hospital of The First Affiliated Hospital, Sun Yat-sen University, Huizhou, China
- Xiaowu Long
- Department of Radiology, Yunfu People’s Hospital, Yunfu, China
- Tanyu Huang
- Department of Radiology, The Second People’s Hospital of Huizhou, Huizhou, China
- Hongxin Lin
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Kangyang Cao
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Shaonan Zhong
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Jingjing Shao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Xian-Fen Diao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Medical School, Shenzhen University, Shenzhen, China
- Zhenhua Gao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Radiology, Huiya Hospital of The First Affiliated Hospital, Sun Yat-sen University, Huizhou, China
3
|
Hamilton A. The Future of Artificial Intelligence in Surgery. Cureus 2024; 16:e63699. [PMID: 39092371 PMCID: PMC11293880 DOI: 10.7759/cureus.63699] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/01/2024] [Indexed: 08/04/2024] Open
Abstract
Until recently, innovations in surgery largely consisted of extensions or augmentations of the surgeon's perception, such as the operating microscope, tumor fluorescence, intraoperative ultrasound, and minimally invasive surgical instrumentation. The introduction of artificial intelligence (AI) into the surgical disciplines, however, represents a transformational event. AI not only contributes substantively to enhancing a surgeon's perception, with methodologies such as three-dimensional anatomic overlays with augmented reality, AI-improved visualization for tumor resection, and AI-formatted endoscopic and robotic surgery guidance; what truly sets AI apart is that it also provides ways to augment the surgeon's cognition. By analyzing enormous databases, AI can offer new insights that transform the operative environment in several ways. It can enable preoperative risk assessment and better selection of candidates for procedures such as organ transplantation. It can increase the efficiency and throughput of operating rooms and staff and coordinate the utilization of critical resources such as intensive care unit beds and ventilators. AI is also revolutionizing intraoperative guidance: improving the detection of cancers, permitting endovascular navigation, and reducing collateral damage to adjacent tissues during surgery (e.g., identification of parathyroid glands during thyroidectomy). AI is likewise transforming how surgical proficiency and trainees are evaluated in postgraduate programs, offering the potential for multiple serial evaluations using various scoring systems while remaining free from the biases that can plague human supervisors. The future of AI-driven surgery holds promising trends, including the globalization of surgical education, the miniaturization of instrumentation, and the increasing success of autonomous surgical robots. These advancements raise the prospect of deploying fully autonomous surgical robots in the near future into challenging environments such as the battlefield, disaster areas, and even extraplanetary exploration. In light of these transformative developments, it is clear that the future of surgery will belong to those who can most readily embrace and harness the power of AI.
Affiliation(s)
- Allan Hamilton
- Artificial Intelligence Division for Simulation, Education, and Training, University of Arizona Health Sciences, Tucson, USA
4. Rizk PA, Gonzalez MR, Galoaa BM, Girgis AG, Van Der Linden L, Chang CY, Lozano-Calderon SA. Machine Learning-Assisted Decision Making in Orthopaedic Oncology. JBJS Rev 2024;12:01874474-202407000-00005. [PMID: 38991098] [DOI: 10.2106/jbjs.rvw.24.00057]
Abstract
» Artificial intelligence is an umbrella term for computational methods designed to mimic human intelligence and problem-solving capabilities, although in the future this may become an incomplete definition. Machine learning (ML) encompasses the development of algorithms or predictive models that generate outputs without explicit instructions, assisting in clinical predictions based on large data sets. Deep learning is a subset of ML that utilizes layers of networks with various inter-relational connections to define and generalize data.
» ML algorithms can enhance radiomics techniques for improved image evaluation and diagnosis. While ML shows promise with the advent of radiomics, there are still obstacles to overcome.
» Several calculators leveraging ML algorithms have been developed to predict survival in primary sarcomas and metastatic bone disease utilizing patient-specific data. While these models often report exceptionally accurate performance, it is crucial to evaluate their robustness using standardized guidelines.
» While increased computing power suggests continuous improvement of ML algorithms, these advancements must be balanced against challenges such as diversifying data, addressing ethical concerns, and enhancing model interpretability.
Affiliation(s)
- Paul A Rizk
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Marcos R Gonzalez
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Bishoy M Galoaa
- Interdisciplinary Science & Engineering Complex (ISEC), Northeastern University, Boston, Massachusetts
- Andrew G Girgis
- Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts
- Lotte Van Der Linden
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Connie Y Chang
- Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Santiago A Lozano-Calderon
- Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
5. Kim H, Kim K, Oh SJ, Lee S, Woo JH, Kim JH, Cha YK, Kim K, Chung MJ. AI-assisted Analysis to Facilitate Detection of Humeral Lesions on Chest Radiographs. Radiol Artif Intell 2024;6:e230094. [PMID: 38446041] [PMCID: PMC11140509] [DOI: 10.1148/ryai.230094]
Abstract
Purpose To develop an artificial intelligence (AI) system for humeral tumor detection on chest radiographs (CRs) and evaluate the impact on reader performance. Materials and Methods In this retrospective study, 14 709 CRs (January 2000 to December 2021) were collected from 13 468 patients, including CT-proven normal (n = 13 116) and humeral tumor (n = 1593) cases. The data were divided into training and test groups. A novel training method called false-positive activation area reduction (FPAR) was introduced to enhance the diagnostic performance by focusing on the humeral region. The AI program and 10 radiologists were assessed using holdout test set 1, wherein the radiologists were tested twice (with and without AI test results). The performance of the AI system was evaluated using holdout test set 2, comprising 10 497 normal images. Receiver operating characteristic analyses were conducted for evaluating model performance. Results FPAR application in the AI program improved its performance compared with a conventional model based on the area under the receiver operating characteristic curve (0.87 vs 0.82, P = .04). The proposed AI system also demonstrated improved tumor localization accuracy (80% vs 57%, P < .001). In holdout test set 2, the proposed AI system exhibited a false-positive rate of 2%. AI assistance improved the radiologists' sensitivity, specificity, and accuracy by 8.9%, 1.2%, and 3.5%, respectively (P < .05 for all). Conclusion The proposed AI tool incorporating FPAR improved humeral tumor detection on CRs and reduced false-positive results in tumor visualization. It may serve as a supportive diagnostic tool to alert radiologists about humeral abnormalities. Keywords: Artificial Intelligence, Conventional Radiography, Humerus, Machine Learning, Shoulder, Tumor Supplemental material is available for this article. © RSNA, 2024.
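The reader-improvement figures above (gains in sensitivity, specificity, and accuracy) come from standard confusion-matrix metrics. A small illustrative helper; the counts in the example are made up, not the study's:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall fraction correct
    return sensitivity, specificity, accuracy

# hypothetical reader on 1,000 chest radiographs: 90 tumors, 910 normals
sens, spec, acc = diagnostic_metrics(tp=72, fn=18, tn=892, fp=18)
print(f"sens={sens:.3f} spec={spec:.3f} acc={acc:.3f}")
```

Re-running the same computation on with-AI and without-AI reading sessions and differencing the results gives exactly the kind of percentage-point deltas the abstract reports.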
Affiliation(s)
- Harim Kim, Kyungsu Kim, Seong Je Oh, Sungjoo Lee, Jung Han Woo, Jong Hee Kim, Yoon Ki Cha, Kyunga Kim, Myung Jin Chung
- From the Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul 06351, South Korea (H.K., J.H.W., J.H.K., Y.K.C., M.J.C.); Medical AI Research Center, Samsung Medical Center, Seoul, South Korea (Kyungsu Kim, M.J.C.); Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, South Korea (Kyungsu Kim, Kyunga Kim, M.J.C.); and Department of Health Sciences and Technology (S.J.O.) and Department of Digital Health (S.L., Kyunga Kim), Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, South Korea
6. Salehi MA, Mohammadi S, Harandi H, Zakavi SS, Jahanshahi A, Shahrabi Farahani M, Wu JS. Diagnostic Performance of Artificial Intelligence in Detection of Primary Malignant Bone Tumors: a Meta-Analysis. J Imaging Inform Med 2024;37:766-777. [PMID: 38343243] [PMCID: PMC11031503] [DOI: 10.1007/s10278-023-00945-3]
Abstract
We aim to conduct a meta-analysis of studies that evaluated the diagnostic performance of artificial intelligence (AI) algorithms in the detection of primary bone tumors, distinguishing them from other bone lesions, and comparing them with clinician assessment. A systematic search was conducted using a combination of keywords related to bone tumors and AI. After extracting contingency tables from all included studies, we performed a meta-analysis using a random-effects model to determine the pooled sensitivity and specificity, accompanied by their respective 95% confidence intervals (CI). Quality assessment was performed using a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) checklist and the Prediction Model Study Risk of Bias Assessment Tool (PROBAST). The pooled sensitivities for AI algorithms and clinicians on internal validation test sets for detecting bone neoplasms were 84% (95% CI: 79-88%) and 76% (95% CI: 64-85%), and the pooled specificities were 86% (95% CI: 81-90%) and 64% (95% CI: 55-72%), respectively. At external validation, the pooled sensitivity and specificity for AI algorithms were 84% (95% CI: 75-90%) and 91% (95% CI: 83-96%), respectively; the corresponding values for clinicians were 85% (95% CI: 73-92%) and 94% (95% CI: 89-97%). The sensitivity and specificity for clinicians with AI assistance were 95% (95% CI: 86-98%) and 57% (95% CI: 48-66%). Caution is needed when interpreting these findings owing to the limitations of the included studies. Further research is needed to close this gap in scientific understanding and to support effective implementation in medical practice.
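Pooled sensitivities and specificities of this kind are typically obtained by combining logit-transformed study proportions under a random-effects model. A simplified DerSimonian-Laird sketch, illustrative only (the paper's exact bivariate method may differ):

```python
import math

def pool_proportions(counts):
    """Random-effects (DerSimonian-Laird) pooling of proportions.

    counts: list of (successes, failures) per study, e.g. (tp, fn)
    to pool sensitivity. Proportions are combined on the logit scale
    with a 0.5 continuity correction.
    """
    y = [math.log((a + 0.5) / (b + 0.5)) for a, b in counts]   # logits
    v = [1 / (a + 0.5) + 1 / (b + 0.5) for a, b in counts]     # within-study var
    w = [1 / vi for vi in v]                                   # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))     # heterogeneity stat
    k = len(counts)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0       # between-study var
    ws = [1 / (vi + tau2) for vi in v]                         # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    return 1 / (1 + math.exp(-pooled_logit))                   # back-transform

# three hypothetical studies' (tp, fn) counts
print(round(pool_proportions([(84, 16), (70, 30), (92, 8)]), 3))
```

With identical studies the heterogeneity term vanishes and the pooled estimate reduces to the common study proportion, which is a useful sanity check on the implementation.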
Affiliation(s)
- Mohammad Amin Salehi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Soheil Mohammadi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran.
- Hamid Harandi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Seyed Sina Zakavi
- School of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jahanshahi
- School of Medicine, Guilan University of Medical Sciences, Rasht, Iran
- Jim S Wu
- Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA
7. Wang Y, Lin W, Zhuang X, Wang X, He Y, Li L, Lyu G. Advances in artificial intelligence for the diagnosis and treatment of ovarian cancer (Review). Oncol Rep 2024;51:46. [PMID: 38240090] [PMCID: PMC10828921] [DOI: 10.3892/or.2024.8705]
Abstract
Artificial intelligence (AI) has emerged as a crucial technique for extracting high-throughput information from various sources, including medical images, pathological images, and genomics, transcriptomics, proteomics and metabolomics data. AI has been widely used in the field of diagnosis, for the differentiation of benign and malignant ovarian cancer (OC), and for prognostic assessment, with favorable results. Notably, AI-based radiomics has proven to be a non-invasive, convenient and economical approach, making it an essential asset in a gynecological setting. The present study reviews the application of AI in the diagnosis, differentiation and prognostic assessment of OC. It is suggested that AI-based multi-omics studies have the potential to improve the diagnostic and prognostic predictive ability in patients with OC, thereby facilitating the realization of precision medicine.
Affiliation(s)
- Yanli Wang
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Weihong Lin
- Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiaoling Zhuang
- Department of Pathology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiali Wang
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
- Yifang He
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Luhong Li
- Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Guorong Lyu
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
8. Tassoker M, Öziç MÜ, Yuce F. Performance evaluation of a deep learning model for automatic detection and localization of idiopathic osteosclerosis on dental panoramic radiographs. Sci Rep 2024;14:4437. [PMID: 38396289] [PMCID: PMC10891049] [DOI: 10.1038/s41598-024-55109-2]
Abstract
Idiopathic osteosclerosis (IO) appears as focal radiopacities of unknown etiology in the jaws, detected incidentally on dental panoramic radiographs taken for other reasons. In this study, we investigated the performance of a deep learning model in detecting IO using a small dataset of dental panoramic radiographs with varying contrasts and features. Two radiologists collected 175 IO-diagnosed dental panoramic radiographs from the dental school database. The dataset size is limited by the rarity of IO, which has a reported incidence of 2.7% in the Turkish population. To overcome this limitation, data augmentation was performed by horizontally flipping the images, resulting in an augmented dataset of 350 panoramic radiographs. The images were annotated by two radiologists and divided into approximately 70% for training (245 radiographs), 15% for validation (53 radiographs), and 15% for testing (52 radiographs). The YOLOv5 deep learning model was evaluated using precision, recall, F1-score, mAP (mean average precision), and average inference time. Training and testing were conducted on a Google Colab Pro virtual machine. On the test set, the model achieved a precision of 0.981, a recall of 0.929, an F1-score of 0.954, and an average inference time of 25.4 ms. Despite the small dataset and the varying contrasts and features of IO radiographs, the deep learning model provided fast, accurate detection and localization. The automatic identification of IO lesions using artificial intelligence algorithms, with high success rates, can contribute to the clinical workflow of dentists by preventing unnecessary biopsy procedures.
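The augmentation step above (horizontal flipping) must also flip the YOLO-format bounding-box labels, where only the normalized x-center mirrors; the reported F1-score likewise follows directly from precision and recall. A minimal sketch (the box values below are hypothetical):

```python
def hflip_yolo_labels(labels):
    """Horizontally flip YOLO-format boxes (class, x_c, y_c, w, h),
    all coordinates normalized to [0, 1]: only x_c mirrors to 1 - x_c."""
    return [(c, 1.0 - x, y, w, h) for (c, x, y, w, h) in labels]

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# hypothetical IO lesion box on a panoramic radiograph
print(hflip_yolo_labels([(0, 0.25, 0.50, 0.20, 0.30)]))
# F1 recomputed from the reported precision/recall
print(round(f1_score(0.981, 0.929), 3))  # → 0.954
```

Recomputing F1 from the paper's precision (0.981) and recall (0.929) reproduces the reported 0.954, a quick consistency check on the metrics.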
Affiliation(s)
- Melek Tassoker
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Necmettin Erbakan University, Bağlarbaşı Street, 42090, Meram, Konya, Turkey.
- Muhammet Üsame Öziç
- Faculty of Technology, Department of Biomedical Engineering, Pamukkale University, Denizli, Turkey
- Fatma Yuce
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Istanbul Okan University, Istanbul, Turkey
9. Shao J, Lin H, Ding L, Li B, Xu D, Sun Y, Guan T, Dai H, Liu R, Deng D, Huang B, Feng S, Diao X, Gao Z. Deep learning for differentiation of osteolytic osteosarcoma and giant cell tumor around the knee joint on radiographs: a multicenter study. Insights Imaging 2024;15:35. [PMID: 38321327] [PMCID: PMC10847082] [DOI: 10.1186/s13244-024-01610-1]
Abstract
OBJECTIVES To develop a deep learning (DL) model for differentiating between osteolytic osteosarcoma (OS) and giant cell tumor (GCT) on radiographs. METHODS Patients with osteolytic OS and GCT proven by postoperative pathology were retrospectively recruited from four centers (center A, training and internal testing; centers B, C, and D, external testing). Sixteen radiologists with different experiences in musculoskeletal imaging diagnosis were divided into three groups and participated with or without the DL model's assistance. DL model was generated using EfficientNet-B6 architecture, and the clinical model was trained using clinical variables. The performance of various models was compared using McNemar's test. RESULTS Three hundred thirty-three patients were included (mean age, 27 years ± 12 [SD]; 186 men). Compared to the clinical model, the DL model achieved a higher area under the curve (AUC) in both the internal (0.97 vs. 0.77, p = 0.008) and external test set (0.97 vs. 0.64, p < 0.001). In the total test set (including the internal and external test sets), the DL model achieved higher accuracy than the junior expert committee (93.1% vs. 72.4%; p < 0.001) and was comparable to the intermediate and senior expert committee (93.1% vs. 88.8%, p = 0.25; 87.1%, p = 0.35). With DL model assistance, the accuracy of the junior expert committee was improved from 72.4% to 91.4% (p = 0.051). CONCLUSION The DL model accurately distinguished osteolytic OS and GCT with better performance than the junior radiologists, whose own diagnostic performances were significantly improved with the aid of the model, indicating the potential for the differential diagnosis of the two bone tumors on radiographs. CRITICAL RELEVANCE STATEMENT The deep learning model can accurately distinguish osteolytic osteosarcoma and giant cell tumor on radiographs, which may help radiologists improve the diagnostic accuracy of two types of tumors. 
KEY POINTS • The DL model shows robust performance in distinguishing osteolytic osteosarcoma and giant cell tumor. • The diagnostic performance of the DL model is better than that of junior radiologists. • The DL model shows potential for differentiating osteolytic osteosarcoma and giant cell tumor.
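The study above compares paired classifier decisions with McNemar's test. As an illustration only (not the authors' code), a minimal pure-Python sketch of the McNemar chi-squared statistic with continuity correction, computed from each model's per-case correctness:

```python
def mcnemar_statistic(model_a_correct, model_b_correct):
    """McNemar's chi-squared statistic (with continuity correction) for
    comparing two classifiers evaluated on the same test cases.

    model_a_correct, model_b_correct: lists of booleans, one per case,
    indicating whether each model classified that case correctly.
    """
    # b: cases only model A got right; c: cases only model B got right
    b = sum(a and not bb for a, bb in zip(model_a_correct, model_b_correct))
    c = sum(bb and not a for a, bb in zip(model_a_correct, model_b_correct))
    if b + c == 0:
        return 0.0  # the models agree on every case; no evidence of a difference
    return (abs(b - c) - 1) ** 2 / (b + c)

# Toy example: model A is correct on 3 cases where model B is wrong (b=3, c=0)
a = [True, True, True, False, True, False, True, True]
b = [True, False, True, False, False, False, True, False]
stat = mcnemar_statistic(a, b)  # (|3-0|-1)^2 / 3 = 4/3
```

The statistic is then compared against a chi-squared distribution with one degree of freedom; in practice a statistics package (e.g., statsmodels) would supply the p-value.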
Collapse
Affiliation(s)
- Jingjing Shao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
| | - Hongxin Lin
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
| | - Lei Ding
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
| | - Bing Li
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
| | - Danyang Xu
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China
| | - Yang Sun
- Department of Radiology, Foshan Hospital of Traditional Chinese Medicine, Foshan, Guangdong, China
| | - Tianming Guan
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China
| | - Haiyang Dai
- Department of Radiology, People's Hospital of Huizhou City Center, Huizhou, Guangdong, China
| | - Ruihao Liu
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
| | - Demao Deng
- Department of Radiology, The People's Hospital of Guangxi Zhuang Autonomous Region, Guangxi Academy of Medical Sciences, Nanning, Guangxi, China
| | - Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, Guangdong, China
| | - Shiting Feng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China.
| | - Xianfen Diao
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Medicine, Shenzhen University, Shenzhen, Guangdong, China.
| | - Zhenhua Gao
- Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China.
- Department of Radiology, Hui Ya Hospital of The First Affiliated Hospital, Sun Yat-Sen University, Huizhou, Guangdong, China.
| |
Collapse
|
10
|
Paul P. The Rise of Artificial Intelligence: Implications in Orthopedic Surgery. J Orthop Case Rep 2024; 14:1-4. [PMID: 38420225 PMCID: PMC10898706 DOI: 10.13107/jocr.2024.v14.i02.4194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/23/2023] [Indexed: 03/02/2024] Open
Abstract
Artificial intelligence (AI) is slowly making its way into all domains, and medicine is no exception. AI is already proving to be a promising tool in the health-care field. With respect to orthopedics, AI is already in use in diagnostics, such as fracture and tumor detection; in predictive algorithms that estimate mortality risk, duration of hospital stay, and complications such as implant loosening; and in real-time assessment of post-operative rehabilitation. AI could also be of use in surgical training, utilizing technologies such as virtual reality and augmented reality. However, clinicians should also be aware of the limitations of AI, as validation is necessary to avoid errors. This article aims to provide a description of AI and its subfields, its current applications in orthopedics, its limitations, and its future prospects.
Collapse
Affiliation(s)
- Prannoy Paul
- Institute of Advanced Orthopedics, M.O.S.C Medical College Hospital, Kolenchery, Ernakulam, Kerala, India
| |
Collapse
|
11
|
Coghlan S, Gyngell C, Vears DF. Ethics of artificial intelligence in prenatal and pediatric genomic medicine. J Community Genet 2024; 15:13-24. [PMID: 37796364 PMCID: PMC10857992 DOI: 10.1007/s12687-023-00678-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023] Open
Abstract
This paper examines the ethics of introducing emerging forms of artificial intelligence (AI) into prenatal and pediatric genomic medicine. Application of genomic AI to these early life settings has not received much attention in the ethics literature. We focus on three contexts: (1) prenatal genomic sequencing for possible fetal abnormalities, (2) rapid genomic sequencing for critically ill children, and (3) reanalysis of genomic data obtained from children for diagnostic purposes. The paper identifies and discusses various ethical issues in the possible application of genomic AI in these settings, especially as they relate to concepts of beneficence, nonmaleficence, respect for autonomy, justice, transparency, accountability, privacy, and trust. The examination will inform the ethically sound introduction of genomic AI in early human life.
Collapse
Affiliation(s)
- Simon Coghlan
- School of Computing and Information Systems (CIS), Centre for AI and Digital Ethics (CAIDE), The University of Melbourne, Grattan St, Melbourne, Victoria, 3010, Australia.
- Australian Research Council Centre of Excellence for Automated Decision Making and Society (ADM+S), Melbourne, Victoria, Australia.
| | - Christopher Gyngell
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
| | - Danya F Vears
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, The Royal Children's Hospital, 50 Flemington Rd, Parkville, Victoria, 3052, Australia
- University of Melbourne, Parkville, Victoria, 3052, Australia
- Centre for Biomedical Ethics and Law, KU Leuven, Kapucijnenvoer 35, 3000, Leuven, Belgium
| |
Collapse
|
12
|
Sampath K, Rajagopal S, Chintanpalli A. A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images. Sci Rep 2024; 14:2144. [PMID: 38273131 PMCID: PMC10811327 DOI: 10.1038/s41598-024-52719-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 01/23/2024] [Indexed: 01/27/2024] Open
Abstract
Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. A benign bone tumor is harmless and does not spread to other body parts, whereas a malignant one can spread to other body parts and be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by enabling treatment at the initial stages. Early detection of these lumps or masses can reduce the risk of death and allow bone cancer to be treated promptly. The goal of the current study is to utilize image processing techniques and a deep learning-based convolutional neural network (CNN) to classify normal and cancerous bone images. Medical image processing techniques, such as pre-processing (e.g., median filtering), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in computed tomography (CT) images for the parosteal osteosarcoma, enchondroma, and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model showed the best performance, with a training accuracy of 98%, a validation accuracy of 98%, and a testing accuracy of 100%.
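The first two stages of the pipeline described above (median filtering and K-means intensity clustering) can be sketched in pure Python. This is a toy illustration, not the study's code; a real pipeline would use OpenCV or scikit-image, and Canny edge detection is omitted here:

```python
import statistics

def median_filter3x3(img):
    """3x3 median filter on a grayscale image (list of lists of ints).
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

def kmeans_intensity(pixels, k=2, iters=20):
    """1-D k-means (k >= 2) on pixel intensities; returns sorted cluster
    centers, which can serve as segmentation thresholds."""
    lo, hi = min(pixels), max(pixels)
    centers = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

# Usage: the isolated bright pixel (speckle noise) is removed by the filter,
# and the intensities separate into a dark and a bright cluster.
img = [[0, 0, 0, 0],
       [0, 255, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
smoothed = median_filter3x3(img)
centers = kmeans_intensity([0, 0, 0, 10, 200, 210, 190])  # roughly [2.5, 200.0]
```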
Collapse
Affiliation(s)
- Kanimozhi Sampath
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
| | - Sivakumar Rajagopal
- Department of Sensor and Biomedical Technology, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India.
| | - Ananthakrishna Chintanpalli
- Department of Communication Engineering, School of Electronics Engineering, Vellore Institute of Technology, Vellore, 632014, India
| |
Collapse
|
13
|
Beyaz S, Betul Yayli S, Kılıç E, Kılıç K. Comparison of artificial intelligence algorithm for the diagnosis of hip fracture on plain radiography with decision-making physicians: a validation study. ACTA ORTHOPAEDICA ET TRAUMATOLOGICA TURCICA 2024; 58:4-9. [PMID: 38525504 PMCID: PMC11059475 DOI: 10.5152/j.aott.2024.23065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 12/11/2023] [Indexed: 03/26/2024]
Abstract
OBJECTIVE This study aimed to compare an algorithm developed for diagnosing hip fractures on plain radiographs with the physicians involved in diagnosing hip fractures. METHODS Radiographs labeled as fractured (n=182) and non-fractured (n=542) by an expert on proximal femur fractures were included in the study. General practitioners in the emergency department (n=3), emergency medicine physicians (n=3), radiologists (n=3), orthopedic residents (n=3), and orthopedic surgeons (n=3) were included in the study as the labelers, who labeled the presence of fractures on the right and left sides of the proximal femoral region on each anteroposterior (AP) plain pelvis radiograph as fractured or non-fractured. In addition, all the radiographs were evaluated using an artificial intelligence (AI) algorithm consisting of 3 AI models and a majority voting technique. Each AI model evaluated each radiograph separately, and majority voting determined the final decision as the majority of the outputs of the 3 AI models. The results of the AI algorithm and the labeling physicians were compared with the reference evaluation. RESULTS Based on F1 scores, the average performance ranking of the groups was: majority voting (0.942) > orthopedic surgeons (0.938) > AI models (0.917) > orthopedic residents (0.858) > emergency medicine (0.758) > general practitioners (0.689) > radiologists (0.677). CONCLUSION The AI algorithm developed in our previous study may help non-orthopedist physicians in the emergency department recognize fractures on AP pelvis plain radiographs. LEVEL OF EVIDENCE Level IV, Diagnostic Study.
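The majority-voting step and the F1 metric used for the ranking above can be sketched as follows (function names and the label strings are illustrative, not from the study):

```python
from collections import Counter

def majority_vote(predictions):
    """Final decision from an odd number of model outputs (here, 3 AI models):
    the label predicted by the majority."""
    return Counter(predictions).most_common(1)[0][0]

def f1_score(y_true, y_pred, positive="fractured"):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Usage: two of three models call the radiograph fractured -> final label "fractured"
final = majority_vote(["fractured", "fractured", "non-fractured"])
score = f1_score(
    ["fractured", "fractured", "non-fractured", "non-fractured"],
    ["fractured", "non-fractured", "fractured", "non-fractured"],
)  # tp=1, fp=1, fn=1 -> precision=recall=0.5 -> F1=0.5
```

With an odd number of binary voters, no tie is possible, which is presumably why three models were used.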
Collapse
Affiliation(s)
- Salih Beyaz
- Department of Orthopedics and Traumatology, Başkent University, Adana Turgut Noyan Research and Training Centre, Adana, Turkey
| | - Sahika Betul Yayli
- Turkcell Technology, Artificial Intelligence & Digital Analytic Solutions, İstanbul, Turkey
| | - Ersin Kılıç
- Turkcell Technology, Artificial Intelligence & Digital Analytic Solutions, İstanbul, Turkey
| | - Kutay Kılıç
- Turkcell Technology, Artificial Intelligence & Digital Analytic Solutions, İstanbul, Turkey
| |
Collapse
|
14
|
Ong W, Liu RW, Makmur A, Low XZ, Sng WJ, Tan JH, Kumar N, Hallinan JTPD. Artificial Intelligence Applications for Osteoporosis Classification Using Computed Tomography. Bioengineering (Basel) 2023; 10:1364. [PMID: 38135954 PMCID: PMC10741220 DOI: 10.3390/bioengineering10121364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 11/21/2023] [Accepted: 11/23/2023] [Indexed: 12/24/2023] Open
Abstract
Osteoporosis, marked by low bone mineral density (BMD) and a high fracture risk, is a major health issue. Recent progress in medical imaging, especially CT, offers new ways of diagnosing and assessing osteoporosis. This review examines the use of AI analysis of CT scans to stratify BMD and diagnose osteoporosis. By summarizing the relevant studies, we aimed to assess the effectiveness, constraints, and potential impact of AI-based osteoporosis classification (severity) via CT. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, ClinicalTrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 39 articles were retrieved from the databases, and the key findings were compiled and summarized, including the regions analyzed, the type of CT imaging, and their efficacy in predicting BMD compared with conventional DXA studies. Important considerations and limitations are also discussed. The overall reported accuracy, sensitivity, and specificity of AI in classifying osteoporosis using CT images ranged from 61.8% to 99.4%, 41.0% to 100.0%, and 31.0% to 100.0%, respectively, with areas under the curve (AUCs) ranging from 0.582 to 0.994. While additional research is necessary to validate the clinical efficacy and reproducibility of these AI tools before incorporating them into routine clinical practice, these studies demonstrate the promising potential of using CT to opportunistically predict and classify osteoporosis without the need for DXA.
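The AUC values reported across these studies have a simple probabilistic reading: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch of that Mann-Whitney formulation (illustrative only, not from any of the reviewed studies):

```python
def auc(scores_pos, scores_neg):
    """ROC area via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs where the positive case scores higher,
    with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives AUC 1.0; one misordered pair out of four gives 0.75
perfect = auc([0.9, 0.8], [0.1, 0.2])
partial = auc([0.4, 0.9], [0.6, 0.1])
```

Library implementations (e.g., scikit-learn's `roc_auc_score`) use a rank-based computation that avoids the quadratic pair loop, but the result is the same quantity.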
Collapse
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
| | - Ren Wei Liu
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Weizhong Jonathan Sng
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore (A.M.); (X.Z.L.); (W.J.S.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| |
Collapse
|
15
|
Anttila TT, Aspinen S, Pierides G, Haapamäki V, Laitinen MK, Ryhänen J. Enchondroma Detection from Hand Radiographs with an Interactive Deep Learning Segmentation Tool-A Feasibility Study. J Clin Med 2023; 12:7129. [PMID: 38002741 PMCID: PMC10672653 DOI: 10.3390/jcm12227129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2023] [Revised: 11/01/2023] [Accepted: 11/13/2023] [Indexed: 11/26/2023] Open
Abstract
Enchondromas are common benign bone tumors, usually presenting in the hand. They can cause symptoms such as swelling and pain but often go unnoticed. If the tumor expands, it can thin the bone cortices and predispose the bone to fracture. Diagnosis is based on clinical investigation and radiographic imaging. Despite their typical appearance on radiographs, they may initially be misdiagnosed or go entirely unrecognized in the acute trauma setting. Earlier applications of deep learning models to image classification and pattern recognition suggest that this technique may also be utilized to detect enchondromas in hand radiographs. We trained a deep learning model with 414 enchondroma radiographs to detect enchondroma in hand radiographs. A separate test set of 131 radiographs (47% with an enchondroma) was used to assess the performance of the trained deep learning model. Enchondroma annotation by three clinical experts served as our ground truth in assessing the deep learning model's performance. Our deep learning model detected 56 enchondromas from the 62 enchondroma radiographs. The area under the receiver operating characteristic curve was 0.95. The F1 score for area overlap was 69.5%. Our deep learning model may be a useful tool for radiograph screening and raising suspicion of enchondroma.
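The "F1 score for area overlap" reported above is, for pixel masks, equivalent to the Dice coefficient between the predicted and expert-annotated lesion regions. A minimal sketch under that interpretation (not the authors' code):

```python
def dice_overlap(pred_pixels, true_pixels):
    """F1/Dice overlap between a predicted and a ground-truth lesion mask,
    each given as a set of (row, col) pixel coordinates:
    2*|intersection| / (|pred| + |true|)."""
    pred, true = set(pred_pixels), set(true_pixels)
    if not pred and not true:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2 * len(pred & true) / (len(pred) + len(true))

# Usage: masks share 2 of their 3 pixels each -> Dice = 2*2 / (3+3) = 2/3
overlap = dice_overlap({(0, 0), (0, 1), (1, 0)}, {(0, 0), (0, 1), (1, 1)})
```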
Collapse
Affiliation(s)
- Turkka Tapio Anttila
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Samuli Aspinen
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Georgios Pierides
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Ville Haapamäki
- Department of Radiology, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Minna Katariina Laitinen
- Musculoskeletal and Plastic Surgery, Department of Orthopedic Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| | - Jorma Ryhänen
- Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, 00029 Helsinki, Finland
| |
Collapse
|
16
|
Joo MW, Ko T, Kim MS, Lee YS, Shin SH, Chung YG, Lee HK. Development and Validation of a Convolutional Neural Network Model to Predict a Pathologic Fracture in the Proximal Femur Using Abdomen and Pelvis CT Images of Patients With Advanced Cancer. Clin Orthop Relat Res 2023; 481:2247-2256. [PMID: 37615504 PMCID: PMC10566917 DOI: 10.1097/corr.0000000000002771] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 06/14/2023] [Indexed: 08/25/2023]
Abstract
BACKGROUND Improvement in survival in patients with advanced cancer is accompanied by an increased probability of bone metastasis and related pathologic fractures (especially in the proximal femur). The few systems proposed and used to diagnose impending fractures owing to metastasis and to ultimately prevent future fractures have practical limitations; thus, novel screening tools are essential. A CT scan of the abdomen and pelvis is a standard modality for staging and follow-up in patients with cancer, and radiologic assessments of the proximal femur are possible with CT-based digitally reconstructed radiographs. Deep-learning models, such as convolutional neural networks (CNNs), may be able to predict pathologic fractures from digitally reconstructed radiographs, but to our knowledge, they have not been tested for this application. QUESTIONS/PURPOSES (1) How accurate is a CNN model for predicting a pathologic fracture in a proximal femur with metastasis using digitally reconstructed radiographs of the abdomen and pelvis CT images in patients with advanced cancer? (2) Do CNN models perform better than clinicians with varying backgrounds and experience levels in predicting a pathologic fracture on abdomen and pelvis CT images without any knowledge of the patients' histories, except for metastasis in the proximal femur? METHODS A total of 392 patients received radiation treatment of the proximal femur at three hospitals from January 2011 to December 2021. The patients had 2945 CT scans of the abdomen and pelvis for systemic evaluation and follow-up in relation to their primary cancer. In 33% of the CT scans (974), it was impossible to identify whether a pathologic fracture developed within 3 months after each CT image was acquired, and these were excluded. Finally, 1971 cases with a mean age of 59 ± 12 years were included in this study. Pathologic fractures developed within 3 months after CT in 3% (60 of 1971) of cases. A total of 47% (936 of 1971) were women. 
Sixty cases had an established pathologic fracture within 3 months after each CT scan, and another group of 1911 cases had no established pathologic fracture within 3 months after CT scan. The mean age of the cases in the former and latter groups was 64 ± 11 years and 59 ± 12 years, respectively, and 32% (19 of 60) and 53% (1016 of 1911) of cases, respectively, were female. Digitally reconstructed radiographs were generated with perspective projections of three-dimensional CT volumes onto two-dimensional planes. Then, 1557 images from one hospital were used for a training set. To verify that the deep-learning models could operate consistently even in hospitals with different medical environments, 414 images from other hospitals were used for external validation. The number of images in the groups with and without a pathologic fracture within 3 months after each CT scan increased from 60 to 720 and from 1911 to 22,932, respectively, using data augmentation methods that are known to be an effective way to boost the performance of deep-learning models. Three CNNs (VGG16, ResNet50, and DenseNet121) were fine-tuned using digitally reconstructed radiographs. For performance measures, the area under the receiver operating characteristic curve, accuracy, sensitivity, specificity, precision, and F1 score were determined. The area under the receiver operating characteristic curve was used mainly to evaluate the three CNN models, and the optimal accuracy, sensitivity, and specificity were calculated using the Youden J statistic. Accuracy refers to the proportion of cases in the groups with and without a pathologic fracture within 3 months after each CT scan that were accurately predicted by the CNN model. Sensitivity and specificity represent the proportion of accurately predicted cases among those with and without a pathologic fracture within 3 months after each CT scan, respectively. Precision is a measure of how few false positives the model produces. 
The F1 score is the harmonic mean of sensitivity and precision, which have a tradeoff relationship. Gradient-weighted class activation mapping images were created to check whether the CNN model correctly focused on potential pathologic fracture regions. The CNN model with the best performance was compared with the performance of clinicians. RESULTS DenseNet121 showed the best performance in identifying pathologic fractures; the area under the receiver operating characteristic curve for DenseNet121 was larger than those for VGG16 (0.77 ± 0.07 [95% CI 0.75 to 0.79] versus 0.71 ± 0.08 [95% CI 0.69 to 0.73]; p = 0.001) and ResNet50 (0.77 ± 0.07 [95% CI 0.75 to 0.79] versus 0.72 ± 0.09 [95% CI 0.69 to 0.74]; p = 0.001). Specifically, DenseNet121 scored the highest in sensitivity (0.22 ± 0.07 [95% CI 0.20 to 0.24]), precision (0.72 ± 0.19 [95% CI 0.67 to 0.77]), and F1 score (0.34 ± 0.10 [95% CI 0.31 to 0.37]), and it focused accurately on the region with the expected pathologic fracture. Further, DenseNet121 was less likely than clinicians to mispredict cases in which there was no pathologic fracture; its performance was better than clinician performance in terms of specificity (0.98 ± 0.01 [95% CI 0.98 to 0.99] versus 0.86 ± 0.09 [95% CI 0.81 to 0.91]; p = 0.01), precision (0.72 ± 0.19 [95% CI 0.67 to 0.77] versus 0.11 ± 0.10 [95% CI 0.05 to 0.17]; p = 0.0001), and F1 score (0.34 ± 0.10 [95% CI 0.31 to 0.37] versus 0.17 ± 0.15 [95% CI 0.08 to 0.26]; p = 0.0001). CONCLUSION CNN models may be able to accurately predict impending pathologic fractures from digitally reconstructed radiographs of abdomen and pelvis CT images that clinicians may not anticipate; this can assist medical, radiation, and orthopaedic oncologists clinically. To achieve better performance, ensemble-learning models using knowledge of the patients' histories should be developed and validated. 
The code for our model is publicly available online at https://github.com/taehoonko/CNN_path_fx_prediction . LEVEL OF EVIDENCE Level III, diagnostic study.
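The study above selects operating points with the Youden J statistic. As an illustration (not the authors' released code, which is linked above), a minimal sketch that scans candidate thresholds and returns the one maximizing J = sensitivity + specificity - 1, assuming higher model scores indicate fracture:

```python
def youden_optimal_threshold(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sensitivity + specificity - 1.

    scores: model outputs, higher meaning more likely fracture.
    labels: 1 for a pathologic fracture, 0 otherwise.
    A case is predicted positive when its score >= threshold.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):  # each observed score is a candidate cutoff
        tp = sum(s >= t and y == 1 for s, y in zip(scores, labels))
        tn = sum(s < t and y == 0 for s, y in zip(scores, labels))
        j = tp / pos + tn / neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Usage: the classes separate cleanly at 0.8, where J reaches its maximum of 1.0
t, j = youden_optimal_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1])
```

The same scan underlies the "optimal accuracy, sensitivity, and specificity" figures reported from ROC analyses.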
Collapse
Affiliation(s)
- Min Wook Joo
- Department of Orthopedic Surgery, St. Vincent’s Hospital, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| | - Taehoon Ko
- Department of Medical Informatics, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| | - Min Seob Kim
- The City Hall Station St. Mary’s Psychiatric Clinic, Seoul, Republic of Korea
| | - Yong-Suk Lee
- Department of Orthopedic Surgery, Incheon St. Mary’s Hospital, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| | - Seung Han Shin
- Department of Orthopedic Surgery, Seoul St. Mary’s Hospital, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| | - Yang-Guk Chung
- Department of Orthopedic Surgery, Seoul St. Mary’s Hospital, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| | - Hong Kwon Lee
- Department of Orthopedic Surgery, St. Vincent’s Hospital, College of Medicine, the Catholic University of Korea, Seoul, Republic of Korea
| |
Collapse
|
17
|
Li Y, Dong B, Yuan P. The diagnostic value of machine learning for the classification of malignant bone tumor: a systematic evaluation and meta-analysis. Front Oncol 2023; 13:1207175. [PMID: 37746301 PMCID: PMC10513372 DOI: 10.3389/fonc.2023.1207175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 08/23/2023] [Indexed: 09/26/2023] Open
Abstract
Background Malignant bone tumors are cancers with varying malignancy and prognosis. Accurate diagnosis and classification are crucial for treatment and prognosis assessment. Machine learning has been introduced for early differential diagnosis of malignant bone tumors, but its performance is controversial. This systematic review and meta-analysis aims to explore the diagnostic value of machine learning for malignant bone tumors. Methods PubMed, Embase, Cochrane Library, and Web of Science were searched for literature on machine learning in the differential diagnosis of malignant bone tumors up to October 31, 2022. Risk of bias was assessed using QUADAS-2. A bivariate mixed-effects model was used for meta-analysis, with subgroup analyses by machine learning method and modeling approach. Results Thirty-one publications with 382,371 patients, including 141,315 with malignant bone tumors, were included. Meta-analysis showed a machine learning sensitivity and specificity of 0.87 [95% CI: 0.81, 0.91] and 0.91 [95% CI: 0.86, 0.94] in the training set, and 0.83 [95% CI: 0.74, 0.89] and 0.87 [95% CI: 0.79, 0.92] in the validation set. Subgroup analysis revealed that MRI-based radiomics was the most common approach, with sensitivity and specificity of 0.85 [95% CI: 0.74, 0.91] and 0.87 [95% CI: 0.81, 0.91] in the training set, and 0.79 [95% CI: 0.70, 0.86] and 0.79 [95% CI: 0.70, 0.86] in the validation set. Convolutional neural networks were the most common model type, with sensitivity and specificity of 0.86 [95% CI: 0.72, 0.94] and 0.92 [95% CI: 0.82, 0.97] in the training set, and 0.87 [95% CI: 0.51, 0.98] and 0.87 [95% CI: 0.69, 0.96] in the validation set. Conclusion Machine learning is mainly applied in radiomics for diagnosing malignant bone tumors, showing desirable diagnostic performance. 
Machine learning can be an early adjunctive diagnostic method but requires further research and validation to determine its practical efficiency and clinical application prospects. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023387057.
Collapse
Affiliation(s)
| | - Bo Dong
- Department of Orthopedics, Xi’an Honghui Hospital, Xi’an Jiaotong University, Xi’an Shaanxi, China
| | | |
Collapse
|
18
|
Lisacek-Kiosoglous AB, Powling AS, Fontalis A, Gabr A, Mazomenos E, Haddad FS. Artificial intelligence in orthopaedic surgery. Bone Joint Res 2023; 12:447-454. [PMID: 37423607 DOI: 10.1302/2046-3758.127.bjr-2023-0111.r1] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 07/11/2023] Open
Abstract
The use of artificial intelligence (AI) is rapidly growing across many domains, of which the medical field is no exception. AI is an umbrella term defining the practical application of algorithms to generate useful output without the need for human cognition. Owing to the expanding volume of patient information collected, known as 'big data', AI is showing promise as a useful tool in healthcare research and across all aspects of patient care pathways. Practical applications in orthopaedic surgery include: diagnostics, such as fracture recognition and tumour detection; predictive models of clinical and patient-reported outcome measures, such as calculating mortality rates and length of hospital stay; and real-time rehabilitation monitoring and surgical training. However, clinicians should remain cognizant of AI's limitations, as the development of robust reporting and validation frameworks is of paramount importance to prevent avoidable errors and biases. The aim of this review article is to provide a comprehensive understanding of AI and its subfields, as well as to delineate its existing clinical applications in trauma and orthopaedic surgery. Furthermore, this narrative review expands upon the limitations of AI and future directions.
Collapse
Affiliation(s)
- Anthony B Lisacek-Kiosoglous
  - Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
- Amber S Powling
  - Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
  - Barts and The London School of Medicine and Dentistry, School of Medicine London, London, UK
- Andreas Fontalis
  - Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
  - Division of Surgery and Interventional Science, University College London, London, UK
  - Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Ayman Gabr
  - Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
- Evangelos Mazomenos
  - Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Fares S Haddad
  - Department of Trauma and Orthopaedic Surgery, University College London Hospitals NHS Foundation Trust, London, UK
  - Division of Surgery and Interventional Science, University College London, London, UK
19
Ong W, Zhu L, Tan YL, Teo EC, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A, Hallinan JTPD. Application of Machine Learning for Differentiating Bone Malignancy on Imaging: A Systematic Review. Cancers (Basel) 2023; 15:1837. [PMID: 36980722] [PMCID: PMC10047175] [DOI: 10.3390/cancers15061837] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Received: 02/09/2023] [Revised: 03/07/2023] [Accepted: 03/16/2023] [Indexed: 03/22/2023]
Abstract
An accurate diagnosis of bone tumours on imaging is crucial for appropriate and successful treatment. The advent of artificial intelligence (AI) and machine learning methods to characterize and assess bone tumours on various imaging modalities may assist in the diagnostic workflow. The purpose of this review article is to summarise the most recent evidence for AI techniques using imaging to differentiate benign from malignant lesions and to characterize various malignant bone lesions, as well as their potential clinical application. A systematic search through electronic databases (PubMed, MEDLINE, Web of Science, and clinicaltrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 34 articles reporting the use of AI techniques to distinguish between benign and malignant bone lesions were retrieved, and their key findings were compiled and summarised: 12 (35.3%) focused on radiographs, 12 (35.3%) on MRI, 5 (14.7%) on CT, and 5 (14.7%) on PET/CT. The reported accuracy, sensitivity, and specificity of AI in distinguishing between benign and malignant bone lesions ranged from 0.44–0.99, 0.63–1.00, and 0.73–0.96, respectively, with AUCs of 0.73–0.96. In conclusion, the use of AI to discriminate bone lesions on imaging has achieved relatively good performance across imaging modalities, with high sensitivity, specificity, and accuracy for distinguishing between benign and malignant lesions in several cohort studies. However, further research is necessary to test the clinical performance of these algorithms before they can be integrated into routine clinical practice.
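For reference, the performance figures this review pools (sensitivity, specificity, accuracy, AUC) are derived from a classifier's predictions as sketched below; the labels and scores here are made up purely for illustration.

```python
# How sensitivity, specificity, accuracy, and AUC are computed from a
# binary classifier's output. Labels and scores below are hypothetical.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])               # 1 = malignant, 0 = benign
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1])  # model confidence
y_pred = (scores >= 0.5).astype(int)                       # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # true-positive rate (malignant caught)
specificity = tn / (tn + fp)        # true-negative rate (benign cleared)
accuracy = (tp + tn) / len(y_true)
auc = roc_auc_score(y_true, scores)  # threshold-free ranking quality
print(sensitivity, specificity, accuracy, auc)  # → 0.75 0.75 0.75 0.9375
```

Unlike the thresholded metrics, the AUC uses the raw scores, which is why studies often report it alongside a single operating point.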
Affiliation(s)
- Wilson Ong
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  - Correspondence: ; Tel.: +65-67725207
- Lei Zhu
  - Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Yi Liang Tan
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Ee Chin Teo
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Jiong Hao Tan
  - University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
  - University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Balamurugan A. Vellayappan
  - Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, 5 Lower Kent Ridge Road, Singapore 119074, Singapore
- Beng Chin Ooi
  - Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore 117417, Singapore
- Swee Tian Quek
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  - Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  - Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- James Thomas Patrick Decourcy Hallinan
  - Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  - Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
20
DTBV: A Deep Transfer-Based Bone Cancer Diagnosis System Using VGG16 Feature Extraction. Diagnostics (Basel) 2023; 13:757. [PMID: 36832245] [PMCID: PMC9955441] [DOI: 10.3390/diagnostics13040757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/14/2022] [Revised: 01/23/2023] [Accepted: 01/25/2023] [Indexed: 02/19/2023]
Abstract
Among the many different types of cancer, bone cancer is among the most lethal and least prevalent, and more cases are reported each year. Early diagnosis of bone cancer is crucial, since it helps limit the spread of malignant cells and reduce mortality. Manual detection of bone cancer is cumbersome and requires specialized knowledge. A deep transfer-based bone cancer diagnosis (DTBV) system using VGG16 feature extraction is proposed to address these issues. The proposed DTBV system uses a transfer learning (TL) approach in which a pre-trained convolutional neural network (CNN) extracts features from the pre-processed input image and a support vector machine (SVM) is trained on these features to distinguish between cancerous and healthy bone. A CNN suits image datasets because recognition accuracy improves as the number of feature-extraction layers in the network increases. In the proposed DTBV system, the VGG16 model extracts the features from the input X-ray image. A mutual information statistic that measures the dependency between the different features is then used to select the best features; this is the first time this method has been used for detecting bone cancer. Once the best features are selected, they are fed into the SVM classifier, which classifies the given testing dataset into malignant and benign categories. A comprehensive performance evaluation demonstrated that the proposed DTBV system detects bone cancer with an accuracy of 93.9%, higher than that of other existing systems.
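The pipeline this abstract describes (pretrained-CNN features, mutual-information feature selection, SVM classification) can be sketched as below. This is a minimal illustration, not the paper's implementation: random vectors stand in for VGG16 activations of X-ray images, and all dimensions and labels are hypothetical.

```python
# Sketch of a DTBV-style pipeline: CNN features -> mutual-information
# feature selection -> SVM. Random vectors stand in for VGG16 activations.
from functools import partial

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 512                        # 512-d vectors, like VGG16 avg-pooled output
y = rng.integers(0, 2, size=n)         # 0 = healthy, 1 = cancerous (toy labels)
X = rng.normal(size=(n, d))
X[:, :10] += y[:, None] * 1.5          # make the first 10 features informative

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = make_pipeline(
    # keep the features with the highest mutual information with the label
    SelectKBest(partial(mutual_info_classif, random_state=0), k=64),
    StandardScaler(),
    SVC(kernel="rbf"),                 # final cancerous/healthy classifier
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In the real system the first stage would be VGG16 run over pre-processed radiographs; the selection and classification stages are unchanged.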
21
Manganelli Conforti P, D’Acunto M, Russo P. Deep Learning for Chondrogenic Tumor Classification through Wavelet Transform of Raman Spectra. Sensors (Basel) 2022; 22:7492. [PMID: 36236597] [PMCID: PMC9571786] [DOI: 10.3390/s22197492] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 08/22/2022] [Revised: 09/16/2022] [Accepted: 09/23/2022] [Indexed: 05/22/2023]
Abstract
The grading of cancer tissues is still one of the main challenges for pathologists, so developing enhanced analysis strategies is crucial to accurately identify and manage each individual case. Raman spectroscopy (RS) is a promising tool for the classification of tumor tissues, as it yields biochemical maps of the tissues under analysis and allows their evolution to be observed in terms of biomolecules, proteins, lipid structures, DNA, vitamins, and so on. However, its potential could be further improved by a classification system able to recognize the tumor category of a sample from the raw Raman signal; this could provide more reliable responses on shorter time scales and could reduce or eliminate false-positive or false-negative diagnoses. Deep learning techniques have become ubiquitous in recent years, with models able to perform highly accurate classification in the most diverse fields of research, e.g., natural language processing, computer vision, and medical imaging. However, deep models often rely on huge labeled datasets to achieve reasonable accuracy, and otherwise run into overfitting when the training data are insufficient. In this paper, we propose CLARA (chondrogenic tumor CLAssification through wavelet transform of RAman spectra), which classifies Raman spectra obtained from bone tissues with high accuracy. CLARA recognizes and grades the tumors in the evaluated dataset with 97% accuracy by splitting the original task into two binary classification steps: the first is performed on the original RS signals, while the second is accomplished through a hybrid temporal-frequency 2D transform.
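The two-step cascade described above can be sketched as follows. Everything here is an illustrative assumption, not the paper's method or data: the "spectra" are synthetic, the class signatures are invented sinusoids, and a plain Haar transform stands in for the paper's hybrid temporal-frequency 2D transform.

```python
# Toy sketch of a CLARA-style cascade: step 1 classifies raw signals as
# healthy vs tumor; step 2 grades the tumors on wavelet-domain features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def haar_levels(x, levels=3):
    """Concatenate detail coefficients (plus final approximation) of a
    multi-level Haar wavelet transform along the last axis."""
    feats, a = [], x
    for _ in range(levels):
        even, odd = a[..., ::2], a[..., 1::2]
        a = (even + odd) / np.sqrt(2)          # approximation (low-pass)
        feats.append((even - odd) / np.sqrt(2))  # detail (high-pass)
    feats.append(a)
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(1)
n, length = 300, 256
grade = rng.integers(0, 3, size=n)             # 0 healthy, 1 low-grade, 2 high-grade
X = rng.normal(size=(n, length))
t = np.linspace(0, 1, length)
X += np.sin(2 * np.pi * 8 * t) * (grade[:, None] > 0)    # invented tumor signature
X += np.sin(2 * np.pi * 32 * t) * (grade[:, None] == 2)  # invented high-grade signature

X_tr, X_te, g_tr, g_te = train_test_split(
    X, grade, test_size=0.3, random_state=0, stratify=grade
)

step1 = SVC().fit(X_tr, g_tr > 0)              # healthy vs tumor on raw signals
tumor_tr = g_tr > 0                            # grade only the tumor samples
step2 = SVC().fit(haar_levels(X_tr[tumor_tr]), g_tr[tumor_tr] == 2)

is_tumor = step1.predict(X_te)
tumor_te = g_te > 0
grade_acc = np.mean(step2.predict(haar_levels(X_te[tumor_te])) == (g_te[tumor_te] == 2))
print("step-1 accuracy:", round(np.mean(is_tumor == tumor_te), 2))
print("step-2 grading accuracy:", round(grade_acc, 2))
```

Splitting the three-way problem into two binary steps lets each classifier specialize: the detection step never has to separate grades, and the grading step only ever sees tumor spectra.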
Affiliation(s)
- Mario D’Acunto
  - CNR-IBF, Istituto di Biofisica, Via Moruzzi 1, 56124 Pisa, Italy
- Paolo Russo
  - DIAG Department, Sapienza University of Rome, Via Ariosto 25, 00185 Roma, Italy
  - Correspondence: