1
Liu W, Wang J, Lei Y, Liu P, Han Z, Wang S, Liu B. Deep Learning for Discrimination of Early Spinal Tuberculosis from Acute Osteoporotic Vertebral Fracture on CT. Infect Drug Resist 2025; 18:31-42. [PMID: 39776757] [PMCID: PMC11706012] [DOI: 10.2147/idr.s482584] [Received: 06/13/2024] [Accepted: 12/19/2024]
Abstract
Background: Early differentiation between spinal tuberculosis (STB) and acute osteoporotic vertebral compression fracture (OVCF) is crucial for determining the appropriate clinical management and treatment pathway, and therefore significantly impacts patient outcomes.
Objective: To evaluate the efficacy of deep learning (DL) models using reconstructed sagittal CT images in differentiating early STB from acute OVCF, with the aim of enhancing diagnostic precision, reducing reliance on MRI and biopsy, and minimizing the risk of misdiagnosis.
Methods: Data were collected from 373 patients: 302 patients recruited from a university-affiliated hospital served as the training and internal validation sets, and 71 patients from another university-affiliated hospital served as the external validation set. MVITV2, EfficientNet-B5, ResNet101, and ResNet50 were used as the backbone networks for DL model development, training, and validation. Models were evaluated on accuracy, precision, sensitivity, F1 score, and area under the curve (AUC), and their performance was compared with the diagnostic accuracy of two spine surgeons who performed a blinded review.
Results: The MVITV2 model outperformed the other architectures on the internal validation set, achieving an accuracy of 98.98%, precision of 100%, sensitivity of 97.97%, F1 score of 98.98%, and AUC of 0.997. The DL models notably exceeded the spine surgeons, who achieved accuracy rates of 77.38% and 93.56%. External validation confirmed the models' robustness and generalizability.
Conclusion: The DL models significantly improved the differentiation between STB and OVCF, surpassing experienced spine surgeons in diagnostic accuracy. They offer a promising alternative to traditional imaging and invasive procedures, potentially promoting early and accurate diagnosis, reducing healthcare costs, and improving patient outcomes. The findings underscore the potential of artificial intelligence in spinal disease diagnostics and have substantial clinical implications.
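As a point of reference for the metrics reported above, here is a minimal illustrative sketch (toy labels, not the study's data or code) showing how accuracy, precision, sensitivity, and F1 score follow from the confusion-matrix counts; AUC additionally requires the model's predicted probabilities:

```python
# Illustrative only: standard binary-classification metrics from
# confusion-matrix counts (1 = STB, 0 = acute OVCF in this toy example).
def binary_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0  # a.k.a. recall
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, precision, sensitivity, f1

# Toy ground truth and predictions, purely for illustration.
acc, prec, sens, f1 = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                                     [1, 1, 0, 0, 0, 0, 1, 1])
```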
Affiliation(s)
- Wenjun Liu
  Department of Orthopedics, First Affiliated Hospital, Chongqing Medical University, Chongqing, People’s Republic of China
- Jin Wang
  College of Medical Informatics, Chongqing Medical University, Chongqing, People’s Republic of China
- Yiting Lei
  Department of Orthopedics, First Affiliated Hospital, Chongqing Medical University, Chongqing, People’s Republic of China
- Peng Liu
  Department of Orthopedics, Daping Hospital, Army Medical University, Chongqing, People’s Republic of China
- Zhenghan Han
  Department of Orthopedics, First Affiliated Hospital, Chongqing Medical University, Chongqing, People’s Republic of China
- Shichu Wang
  Department of Orthopedics, First Affiliated Hospital, Chongqing Medical University, Chongqing, People’s Republic of China
- Bo Liu
  Department of Orthopedics, First Affiliated Hospital, Chongqing Medical University, Chongqing, People’s Republic of China
2
Ong W, Lee A, Tan WC, Fong KTD, Lai DD, Tan YL, Low XZ, Ge S, Makmur A, Ong SJ, Ting YH, Tan JH, Kumar N, Hallinan JTPD. Oncologic Applications of Artificial Intelligence and Deep Learning Methods in CT Spine Imaging-A Systematic Review. Cancers (Basel) 2024; 16:2988. [PMID: 39272846] [PMCID: PMC11394591] [DOI: 10.3390/cancers16172988] [Received: 07/10/2024] [Revised: 08/14/2024] [Accepted: 08/26/2024]
Abstract
In spinal oncology, integrating deep learning with computed tomography (CT) imaging has shown promise in enhancing diagnostic accuracy, treatment planning, and patient outcomes. This systematic review synthesizes evidence on artificial intelligence (AI) applications in CT imaging for spinal tumors. A PRISMA-guided search identified 33 studies: 12 (36.4%) focused on detecting spinal malignancies, 11 (33.3%) on classification, 6 (18.2%) on prognostication, 3 (9.1%) on treatment planning, and 1 (3.0%) on both detection and classification. Of the classification studies, 7 (21.2%) used machine learning to distinguish between benign and malignant lesions, 3 (9.1%) evaluated tumor stage or grade, and 2 (6.1%) employed radiomics for biomarker classification. Prognostic studies included three (9.1%) that predicted complications such as pathological fractures and three (9.1%) that predicted treatment outcomes. AI's potential for improving workflow efficiency, aiding decision-making, and reducing complications is discussed, along with its limitations in generalizability, interpretability, and clinical integration. Future directions for AI in spinal oncology are also explored. In conclusion, while AI technologies in CT imaging are promising, further research is necessary to validate their clinical effectiveness and optimize their integration into routine practice.
Affiliation(s)
- Wilson Ong
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Aric Lee
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Wei Chuan Tan
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Kuan Ting Dominic Fong
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Daoyong David Lai
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Yi Liang Tan
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Xi Zhen Low
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shuliang Ge
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shao Jin Ong
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Yong Han Ting
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jiong Hao Tan
  National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
  National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- James Thomas Patrick Decourcy Hallinan
  Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
  Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
3
Zhang YF, Zhou C, Guo S, Wang C, Yang J, Yang ZJ, Wang R, Zhang X, Zhou FH. Deep learning algorithm-based multimodal MRI radiomics and pathomics data improve prediction of bone metastases in primary prostate cancer. J Cancer Res Clin Oncol 2024; 150:78. [PMID: 38316655] [PMCID: PMC10844393] [DOI: 10.1007/s00432-023-05574-5] [Received: 09/09/2023] [Accepted: 12/04/2023]
Abstract
PURPOSE: Bone metastasis is a significant contributor to morbidity and mortality in advanced prostate cancer, and early diagnosis is challenging due to its insidious onset. Machine learning can extract prognostic information from pathological images, but the potential of combining features from multiple sources for early prediction of bone metastasis remains poorly understood. This study presents a method of integrating multimodal data to improve the feasibility of early diagnosis of bone metastasis in prostate cancer.
METHODS AND MATERIALS: Overall, 211 patients diagnosed with prostate cancer (PCa) at Gansu Provincial Hospital between January 2017 and February 2023 were included and randomized (8:2) into a training group (n = 169) and a validation group (n = 42). Regions of interest (ROIs) were segmented from three magnetic resonance imaging (MRI) sequences (T2WI, DWI, and ADC), and pathological features were extracted from tissue sections (hematoxylin and eosin [H&E] staining, 10 × 20). A deep learning (DL) model based on ResNet-50 was employed to extract deep transfer learning (DTL) features. The least absolute shrinkage and selection operator (LASSO) regression method was used for feature selection and dimensionality reduction, and different machine learning classifiers were used to build predictive models. Model performance was evaluated using receiver operating characteristic (ROC) curves, net clinical benefit was assessed using decision curve analysis (DCA), and goodness of fit was evaluated using calibration curves. A joint nomogram was then developed by combining clinically independent risk factors.
RESULTS: The best prediction models based on DTL and pathomics features achieved area under the curve (AUC) values of 0.89 (95% confidence interval [CI], 0.799-0.989) and 0.85 (95% CI, 0.714-0.989), respectively. The best models based on radiomics features alone and on combined radiomics, DTL, and pathomics features achieved AUCs of 0.86 (95% CI, 0.735-0.979) and 0.93 (95% CI, 0.854-1.000), respectively. Based on DCA and calibration curves, the model demonstrated good net clinical benefit and fit.
CONCLUSION: Multimodal radiomics and pathomics serve as valuable predictors of the risk of bone metastases in patients with primary PCa.
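The LASSO selection step described in this abstract can be sketched as follows; this is a hypothetical illustration on synthetic data (the feature matrix, penalty strength, and seed are invented, not taken from the study):

```python
# Hypothetical sketch of LASSO-based feature selection on synthetic data.
# The L1 penalty drives the coefficients of uninformative features to
# exactly zero; the surviving (nonzero) features are the selected ones.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(169, 20))   # e.g. 169 training patients, 20 features
# Synthetic outcome driven only by the first three features; the rest are noise.
y = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=169)

X_std = StandardScaler().fit_transform(X)  # LASSO is scale-sensitive
lasso = Lasso(alpha=0.1).fit(X_std, y)     # alpha sets the L1 penalty strength
selected = np.flatnonzero(lasso.coef_)     # indices of the retained features
```

In the study's pipeline the retained features would then feed the downstream machine learning classifiers; in practice `alpha` would typically be chosen by cross-validation (e.g. with `LassoCV`).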
Affiliation(s)
- Yun-Feng Zhang
  The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, China
- Chuan Zhou
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
- Sheng Guo
  The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, China
- Chao Wang
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
- Jin Yang
  The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, China
- Zhi-Jun Yang
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
- Rong Wang
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
  Department of Nuclear Medicine, Gansu Provincial Hospital, Lanzhou, 730000, China
- Xu Zhang
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
- Feng-Hai Zhou
  The First Clinical Medical College of Gansu University of Chinese Medicine, Lanzhou, 730000, China
  The First Clinical Medical College of Lanzhou University, Lanzhou, 730000, China
  Department of Urology, Gansu Provincial Hospital, Lanzhou, 730000, China
4
Haq I, Mazhar T, Asif RN, Ghadi YY, Ullah N, Khan MA, Al-Rasheed A. YOLO and residual network for colorectal cancer cell detection and counting. Heliyon 2024; 10:e24403. [PMID: 38304780] [PMCID: PMC10831604] [DOI: 10.1016/j.heliyon.2024.e24403] [Received: 08/05/2023] [Revised: 12/30/2023] [Accepted: 01/08/2024]
Abstract
The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research. Early detection is crucial for improving the chances of survival, and researchers continue to introduce new techniques for accurate cancer diagnosis. This study presents an efficient deep learning-based method for detecting and counting colorectal cancer (HT-29) cells. A commercially sourced HT-29 cell line was cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting is performed by integrating the YOLOv2 detector with ResNet-50 and ResNet-18 backbone architectures: the ResNet-18 backbone achieves an accuracy of 98.70% and ResNet-50 achieves 96.66%. The study's primary contribution is the detection and quantification of congested and overlapping colorectal cancer cells within the images, an advance for overlapping cancer cell detection and counting. Researchers can extend this work by exploring variations in ResNet and YOLO architectures to optimize detection performance, and further investigation into real-time deployment strategies will enhance the practical applicability of these models.
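The 80/20 partition described in this abstract can be sketched as follows; the file names and seed are invented placeholders, not the authors' pipeline:

```python
# Illustrative 80/20 train/test split of a 566-image dataset.
import random

image_ids = [f"ht29_{i:04d}.png" for i in range(566)]  # hypothetical file names
random.seed(42)                 # fixed seed so the split is reproducible
random.shuffle(image_ids)

n_train = int(0.8 * len(image_ids))   # 452 images for training
train_set = image_ids[:n_train]
test_set = image_ids[n_train:]        # remaining 114 images for testing
```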
Affiliation(s)
- Inayatul Haq
  School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Tehseen Mazhar
  Department of Computer Science, Virtual University of Pakistan, Lahore, 55150, Pakistan
- Rizwana Naz Asif
  School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Yazeed Yasin Ghadi
  Department of Computer Science and Software Engineering, Al Ain University, Abu Dhabi, 12555, United Arab Emirates
- Najib Ullah
  Faculty of Pharmacy and Health Sciences, Department of Pharmacy, University of Balochistan, Quetta, 08770, Pakistan
- Muhammad Amir Khan
  School of Computing Sciences, College of Computing, Informatics and Mathematics, Universiti Teknologi MARA, 40450, Shah Alam, Selangor, Malaysia
- Amal Al-Rasheed
  Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia