1
Wilson SB, Ward J, Munjal V, Lam CSA, Patel M, Zhang P, Xu DS, Chakravarthy VB. Machine Learning in Spine Oncology: A Narrative Review. Global Spine J 2024:21925682241261342. [PMID: 38860699 DOI: 10.1177/21925682241261342] [Indexed: 06/12/2024]
Abstract
STUDY DESIGN Narrative Review. OBJECTIVE Machine learning (ML) is one of the latest advancements in artificial intelligence used in medicine and surgery, with the potential to significantly change the way physicians diagnose, prognosticate, and treat spine tumors. In spine oncology, ML is used to analyze and interpret medical imaging and to classify tumors with high accuracy. The authors present a narrative review that specifically addresses the use of machine learning in spine oncology. METHODS This study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. A systematic review of the literature in the PubMed, EMBASE, Web of Science, Scopus, and Cochrane Library databases from inception was performed to identify all clinical studies matching the search terms '[[Machine Learning] OR [Artificial Intelligence]] AND [[Spine Oncology] OR [Spine Cancer]]'. Extracted data included the algorithms used, training and test set sizes, and reported outcomes. Studies were separated by the type of tumor investigated with the machine learning algorithms: primary, metastatic, both, or intradural. A minimum of 2 independent reviewers conducted the study appraisal, data abstraction, and quality assessments. RESULTS Forty-five studies met inclusion criteria out of 480 references screened from the initial search results. Studies were grouped by metastatic, primary, and intradural tumors. The majority of ML studies relevant to spine oncology focused on using a mixture of clinical and imaging features to risk stratify mortality and frailty. Overall, these studies showed that ML is a helpful tool for tumor detection, differentiation, and segmentation, and for predicting survival and readmission rates in patients with primary, metastatic, or intradural spine tumors.
CONCLUSION Specialized neural networks and deep learning algorithms have been shown to be highly effective at predicting malignancy probability and aiding in diagnosis. ML algorithms can predict the risk of tumor recurrence or progression based on imaging and clinical features. Additionally, ML can optimize treatment planning, for example by predicting radiotherapy dose distribution to the tumor and surrounding normal tissue or by informing surgical resection planning. It has the potential to significantly enhance the accuracy and efficiency of health care delivery, leading to improved patient outcomes.
Affiliation(s)
- Seth B Wilson
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Jacob Ward
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Vikas Munjal
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Mayur Patel
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
- Ping Zhang
- Department of Computer Science and Engineering, The Ohio State University College of Engineering, Columbus, OH, USA
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, USA
- David S Xu
- Department of Neurosurgery, The Ohio State University, Columbus, OH, USA
2
Yuan Y, Pan B, Mo H, Wu X, Long Z, Yang Z, Zhu J, Ming J, Qiu L, Sun Y, Yin S, Zhang F. Deep learning-based computer-aided diagnosis system for the automatic detection and classification of lateral cervical lymph nodes on original ultrasound images of papillary thyroid carcinoma: a prospective diagnostic study. Endocrine 2024:10.1007/s12020-024-03808-1. [PMID: 38570388 DOI: 10.1007/s12020-024-03808-1] [Received: 11/25/2023] [Accepted: 03/26/2024] [Indexed: 04/05/2024]
Abstract
PURPOSE This study aims to develop a deep learning-based computer-aided diagnosis (CAD) system for the automatic detection and classification of lateral cervical lymph nodes (LNs) on original ultrasound images of papillary thyroid carcinoma (PTC) patients. METHODS A retrospective data set of 1801 cervical LN ultrasound images from 1675 patients with PTC and a prospective test set including 185 images from 160 patients were collected. Four different deep learning models were trained and validated in the retrospective data set. The best model was selected for CAD system development and compared with three sonographers in the retrospective and prospective test sets. RESULTS The Deformable Detection Transformer (DETR) model showed the highest diagnostic efficacy, with a mean average precision score of 86.3% in the retrospective test set, and was therefore used in constructing the CAD system. The detection performance of the CAD system was superior to that of the junior and intermediate sonographers, with accuracies of 86.3% and 92.4% in the retrospective and prospective test sets, respectively. The classification performance of the CAD system was better than that of all sonographers, with areas under the curve (AUCs) of 94.4% and 95.2% in the retrospective and prospective test sets, respectively. CONCLUSIONS This study developed a Deformable DETR model-based CAD system for automatically detecting and classifying lateral cervical LNs on original ultrasound images, which showed excellent diagnostic efficacy and clinical utility. It can be an important tool for assisting sonographers in the diagnosis process.
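The detection half of a CAD system like the one above is typically scored by matching predicted boxes to ground-truth annotations at an intersection-over-union (IoU) threshold. The sketch below is a generic illustration of that matching step, not the authors' code; the boxes, ordering assumption, and threshold are invented:

```python
# Sketch: IoU-threshold matching used to score box-level detections.
# Boxes are (x1, y1, x2, y2); all values here are illustrative.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(preds, gts, thr=0.5):
    """Greedily match predictions (assumed sorted by confidence, highest
    first) to ground truth; returns (true pos, false pos, false neg)."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp, len(preds) - tp, len(unmatched)

gts = [(10, 10, 50, 50), (60, 60, 90, 90)]     # annotated lymph nodes
preds = [(12, 11, 49, 52), (100, 100, 120, 120)]  # model outputs
tp, fp, fn = match_detections(preds, gts)
```

Averaging the precision derived from such counts over confidence thresholds and images yields the mean average precision figure reported in the abstract.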
Affiliation(s)
- Yuquan Yuan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Bin Pan
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Hongbiao Mo
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Xing Wu
- College of Computer Science, Chongqing University, Chongqing, China
- Zhaoxin Long
- College of Computer Science, Chongqing University, Chongqing, China
- Zeyu Yang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Junping Zhu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Jing Ming
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Lin Qiu
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Yiceng Sun
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Supeng Yin
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
- Fan Zhang
- Department of Breast and Thyroid Surgery, Chongqing General Hospital, Chongqing, China
- Graduate School of Medicine, Chongqing Medical University, Chongqing, China
- Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
3
Gu Z, Dai W, Chen J, Jiang Q, Lin W, Wang Q, Chen J, Gu C, Li J, Ying G, Zhu Y. Convolutional neural network-based magnetic resonance image differentiation of filum terminale ependymomas from schwannomas. BMC Cancer 2024; 24:350. [PMID: 38504164 PMCID: PMC10949807 DOI: 10.1186/s12885-024-12023-0] [Received: 06/18/2023] [Accepted: 02/20/2024] [Indexed: 03/21/2024]
Abstract
PURPOSE Preoperative diagnosis of filum terminale ependymomas (FTEs) versus schwannomas is difficult but essential for surgical planning and prognostic assessment. With the advancement of deep-learning approaches based on convolutional neural networks (CNNs), the aim of this study was to determine whether CNN-based interpretation of magnetic resonance (MR) images of these two tumours could be achieved. METHODS Contrast-enhanced MRI data from 50 patients with primary FTEs and 50 patients with schwannomas in the lumbosacral spinal canal were retrospectively collected and used as training and internal validation datasets. The diagnostic accuracy of MRI was determined by consistency with postoperative histopathological examination. T1-weighted (T1-WI), T2-weighted (T2-WI) and contrast-enhanced T1-weighted (CE-T1) MR images of the sagittal plane containing the tumour mass were selected for analysis. For each sequence, patient MRI data were randomly allocated to 5 groups that further underwent fivefold cross-validation to evaluate the diagnostic efficacy of the CNN models. An additional 34 pairs of cases were used as an external test dataset to validate the CNN classifiers. RESULTS After comparing multiple backbone CNN models, we developed a diagnostic system using Inception-v3. In the external test dataset, the per-examination combined sensitivities were 0.78 (95% CI 0.71-0.84) for T1-WI, 0.79 (95% CI 0.72-0.84) for T2-WI, 0.88 (95% CI 0.83-0.92) for CE-T1 images, and 0.88 (95% CI 0.83-0.92) for all sequences combined. The combined specificities were 0.72 (95% CI 0.66-0.78) for T1-WI, 0.84 (95% CI 0.78-0.89) for T2-WI, 0.74 (95% CI 0.67-0.80) for CE-T1, and 0.81 (95% CI 0.76-0.86) for all sequences combined. After all three MRI modalities were merged, the receiver operating characteristic (ROC) curve was calculated; the area under the curve (AUC) was 0.93, with an accuracy of 0.87.
CONCLUSIONS CNN-based MRI analysis has the potential to accurately differentiate ependymomas from schwannomas in the lumbar segment.
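The fivefold cross-validation scheme described in the METHODS can be sketched as follows. The patient IDs, seed, and round-robin partitioning are illustrative assumptions, not the authors' implementation; the essential property is that each patient appears in the validation set exactly once:

```python
# Sketch: fivefold cross-validation over patients. Every patient serves
# in validation exactly once; the remaining folds form the training set.
import random

def five_fold_splits(patient_ids, k=5, seed=42):
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)          # shuffle once, reproducibly
    folds = [ids[i::k] for i in range(k)]     # round-robin partition
    for i in range(k):
        val = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        yield train, val

# e.g. 50 FTE + 50 schwannoma cases, as in the study's internal cohort
patients = [f"case_{n:03d}" for n in range(100)]
splits = list(five_fold_splits(patients))
```

Per-fold metrics from such splits are then averaged to estimate the model's diagnostic efficacy before touching the external test set.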
Affiliation(s)
- Zhaowen Gu
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Wenli Dai
- Zhejiang University School of Mathematical Sciences, Hangzhou, Zhejiang, China
- Jiarui Chen
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Qixuan Jiang
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Weiwei Lin
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Qiangwei Wang
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Jingyin Chen
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Chi Gu
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Jia Li
- Ningbo Medical Center Lihuili Hospital, Department of Neurosurgery, Ningbo University, 1111, Jiangnan Road, Ningbo, Zhejiang, China
- Guangyu Ying
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Yongjian Zhu
- Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, 88, Jiefang Road, Hangzhou, China
- Clinical Research Center for Neurological Diseases of Zhejiang Province, Hangzhou, China
4
Charters JA, Luximon D, Petragallo R, Neylon J, Low DA, Lamb JM. Automated detection of vertebral body misalignments in orthogonal kV and MV guided radiotherapy: application to a comprehensive retrospective dataset. Biomed Phys Eng Express 2024; 10:025039. [PMID: 38382110 DOI: 10.1088/2057-1976/ad2baa] [Received: 09/29/2023] [Accepted: 02/21/2024] [Indexed: 02/23/2024]
Abstract
Objective. In image-guided radiotherapy (IGRT), off-by-one vertebral body misalignments are rare but potentially catastrophic. In this study, a novel method for detecting such misalignments in IGRT was investigated using densely-connected convolutional networks (DenseNets), for application to real-time error prevention and retrospective error auditing. Approach. A total of 4213 images acquired from 527 radiotherapy patients aligned with planar kV or MV radiographs were used to develop and test error-detection software modules. Digitally reconstructed radiographs (DRRs) and setup images were retrieved and co-registered according to the clinically applied alignment contained in the DICOM REG files. A semi-automated algorithm was developed to simulate patient positioning errors on the anterior-posterior (AP) and lateral (LAT) images shifted by one vertebral body. A DenseNet architecture was designed to classify either AP images individually or AP and LAT image pairs. Receiver operating characteristic (ROC) curves and areas under the curves (AUC) were computed to evaluate the classifiers on test subsets. Subsequently, the algorithm was applied to the entire dataset to retrospectively determine the absolute off-by-one vertebral body error rate for planar radiograph guided RT at our institution from 2011-2021. Main results. The AUCs for the kV models were 0.98 for unpaired AP and 0.99 for paired AP-LAT. The AUC for the MV AP model was 0.92. For a specificity of 95%, the paired kV model achieved a sensitivity of 99%. Application of the model to the entire dataset yielded a per-fraction off-by-one vertebral body error rate of 0.044% [0.0022%, 0.21%] for paired kV IGRT, including one previously unreported error. Significance. Our error detection algorithm successfully classified vertebral body positioning errors with sufficient accuracy for retrospective quality control and real-time error prevention.
The reported positioning error rate for planar radiograph IGRT is unique in being determined independently of an error reporting system.
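Two of the quantities this abstract reports, AUC and sensitivity at a fixed specificity, can be computed directly from raw classifier scores. The sketch below uses toy scores (not the study's data) and the Mann-Whitney formulation of AUC:

```python
# Sketch: AUC and sensitivity-at-fixed-specificity from raw scores.
# Scores are illustrative stand-ins, not the study's outputs.

def auc(pos_scores, neg_scores):
    """AUC = probability a random positive outscores a random negative
    (Mann-Whitney U formulation); ties count one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_at_specificity(pos_scores, neg_scores, target_spec=0.95):
    """Among thresholds achieving at least the target specificity,
    report the best sensitivity."""
    thresholds = sorted(set(pos_scores) | set(neg_scores), reverse=True)
    best = 0.0
    for t in thresholds:
        spec = sum(n < t for n in neg_scores) / len(neg_scores)
        if spec >= target_spec:
            sens = sum(p >= t for p in pos_scores) / len(pos_scores)
            best = max(best, sens)
    return best

pos = [0.9, 0.8, 0.75, 0.3]   # scores for images with a simulated shift
neg = [0.1, 0.2, 0.4, 0.85]   # scores for correctly aligned images
```

The specificity constraint matters operationally: an interlock run on every fraction must keep false alarms rare, which is why the study quotes sensitivity at 95% specificity rather than raw accuracy.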
Affiliation(s)
- John A Charters
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Dishane Luximon
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Rachel Petragallo
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Jack Neylon
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- Daniel A Low
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
- James M Lamb
- Department of Radiation Oncology, University of California, Los Angeles, CA 90095, United States of America
5
Ito S, Nakashima H, Segi N, Ouchida J, Oda M, Yamauchi I, Oishi R, Miyairi Y, Mori K, Imagama S. Automated Detection of the Thoracic Ossification of the Posterior Longitudinal Ligament Using Deep Learning and Plain Radiographs. Biomed Res Int 2023; 2023:8495937. [PMID: 38054045 PMCID: PMC10695689 DOI: 10.1155/2023/8495937] [Received: 06/23/2023] [Revised: 11/05/2023] [Accepted: 11/08/2023] [Indexed: 12/07/2023]
Abstract
Ossification of the spinal ligaments progresses slowly in the initial stages, and most patients are unaware of the disease until obvious myelopathy symptoms appear. Consequently, treatment and clinical outcomes are not satisfactory. This study aimed to develop an automated system for detecting thoracic ossification of the posterior longitudinal ligament (OPLL) using deep learning and plain radiography. We retrospectively reviewed the data of 146 patients with thoracic OPLL and 150 control cases without thoracic OPLL. Plain lateral thoracic radiographs were used for object detection, training, and validation. Thereafter, an object detection system was developed, and its accuracy was calculated. The performance of the proposed system was compared with that of two spine surgeons. The accuracy of the proposed object detection model based on plain lateral thoracic radiographs was 83.4%, whereas the accuracies of spine surgeons 1 and 2 were 80.4% and 77.4%, respectively. Our findings indicate that the proposed automated system, which applies deep learning to plain radiographs, can accurately detect thoracic OPLL. This system has the potential to improve the diagnostic accuracy of thoracic OPLL.
Affiliation(s)
- Sadayuki Ito
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Hiroaki Nakashima
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Naoki Segi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Jun Ouchida
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Masahiro Oda
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya, Japan
- Ippei Yamauchi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Ryotaro Oishi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Yuichi Miyairi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Kensaku Mori
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya, Japan
- Department of Intelligent Systems, Nagoya University Graduate School of Informatics, Nagoya, Japan
- Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan
- Shiro Imagama
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya, Japan
6
Kita K, Fujimori T, Suzuki Y, Kanie Y, Takenaka S, Kaito T, Taki T, Ukon Y, Furuya M, Saiwai H, Nakajima N, Sugiura T, Ishiguro H, Kamatani T, Tsukazaki H, Sakai Y, Takami H, Tateiwa D, Hashimoto K, Wataya T, Nishigaki D, Sato J, Hoshiyama M, Tomiyama N, Okada S, Kido S. Bimodal artificial intelligence using TabNet for differentiating spinal cord tumors-Integration of patient background information and images. iScience 2023; 26:107900. [PMID: 37766987 PMCID: PMC10520519 DOI: 10.1016/j.isci.2023.107900] [Received: 12/20/2022] [Revised: 02/18/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023]
Abstract
We proposed a bimodal artificial intelligence model that integrates patient information with images to diagnose spinal cord tumors. Our model combines TabNet, a state-of-the-art deep learning model for tabular data, applied to patient information, and a convolutional neural network applied to images. As training data, we collected 259 spinal tumor patients (158 with schwannoma and 101 with meningioma). We compared the performance of the image-only unimodal model, the table-only unimodal model, a bimodal model using a gradient-boosting decision tree, and a bimodal model using TabNet. Our proposed bimodal model using TabNet performed best (area under the receiver-operating characteristic curve [AUROC]: 0.91) in the training data and significantly outperformed the physicians' performance. In external validation using 62 cases from two other facilities, our bimodal model showed an AUROC of 0.92, demonstrating the robustness of the model. Bimodal analysis using TabNet was effective for differentiating spinal tumors.
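One common way to realize the bimodal idea above is late fusion: each modality produces its own probability and the two are combined. The sketch below is a generic stand-in under stated assumptions (the feature names, coefficients, and fusion weight are invented, and simple logistic heads replace the study's TabNet and CNN):

```python
# Sketch: late fusion of an image-based and a tabular model.
# Both heads and all coefficients are illustrative placeholders.
import math

def image_model(image_features):
    # placeholder CNN head: logistic over a single pooled image feature
    return 1 / (1 + math.exp(-image_features["pooled_score"]))

def tabular_model(patient):
    # placeholder tabular head: age and sex as illustrative inputs
    z = 0.03 * (patient["age"] - 50) + (0.4 if patient["sex"] == "F" else -0.4)
    return 1 / (1 + math.exp(-z))

def fused_probability(image_features, patient, w_img=0.6):
    """Late fusion: convex combination of the two unimodal probabilities."""
    return w_img * image_model(image_features) + (1 - w_img) * tabular_model(patient)

p = fused_probability({"pooled_score": 2.0}, {"age": 62, "sex": "F"})
```

The appeal of fusing modalities is visible even in this toy: a borderline image score can be pushed toward the correct class by patient background features, which is the effect the study reports when adding TabNet to the CNN.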
Affiliation(s)
- Kosuke Kita
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Takahito Fujimori
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Yuki Suzuki
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Yuya Kanie
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Shota Takenaka
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Takashi Kaito
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Takuyu Taki
- Department of Neurosurgery, Iseikai Hospital, Osaka, Osaka, Japan
- Yuichiro Ukon
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Hirokazu Saiwai
- Department of Orthopedic Surgery, Graduate School of Medical Sciences, Kyushu University, Higashi, Fukuoka, Japan
- Nozomu Nakajima
- Japanese Red Cross Society Himeji Hospital, Himeji, Hyogo, Japan
- Tsuyoshi Sugiura
- General Incorporated Foundation Sumitomo Hospital, Osaka, Osaka, Japan
- Hiroyuki Ishiguro
- National Hospital Organization Osaka National Hospital, Osaka, Osaka, Japan
- Haruna Takami
- Osaka International Cancer Institute, Osaka, Osaka, Japan
- Tomohiro Wataya
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Daiki Nishigaki
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Junya Sato
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Noriyuki Tomiyama
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Seiji Okada
- Department of Orthopaedic Surgery, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Shoji Kido
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
7
Ito S, Nakashima H, Segi N, Ouchida J, Oda M, Yamauchi I, Oishi R, Miyairi Y, Mori K, Imagama S. Automated Detection and Diagnosis of Spinal Schwannomas and Meningiomas Using Deep Learning and Magnetic Resonance Imaging. J Clin Med 2023; 12:5075. [PMID: 37568477 PMCID: PMC10419638 DOI: 10.3390/jcm12155075] [Received: 06/24/2023] [Revised: 07/24/2023] [Accepted: 07/31/2023] [Indexed: 08/13/2023]
Abstract
Spinal cord tumors are infrequently identified spinal diseases that are often difficult to diagnose even with magnetic resonance imaging (MRI) findings. To minimize the probability of overlooking these tumors and improve diagnostic accuracy, an automatic diagnostic system is needed. We aimed to develop an automated system for detecting and diagnosing spinal schwannomas and meningiomas based on deep learning, using You Only Look Once (YOLO) version 4 and MRI. In this retrospective diagnostic accuracy study, the data of 50 patients with spinal schwannomas, 45 patients with meningiomas, and 100 control cases were reviewed. Sagittal T1-weighted (T1W) and T2-weighted (T2W) images were used for object detection, classification, training, and validation. The object detection and diagnosis system was developed using YOLO version 4. The accuracies of the proposed object detection based on T1W, T2W, and T1W + T2W images were 84.8%, 90.3%, and 93.8%, respectively. The accuracies of object detection for two spine surgeons were 88.9% and 90.1%, respectively. The accuracies of the proposed diagnoses based on T1W, T2W, and T1W + T2W images were 76.4%, 83.3%, and 84.1%, respectively. The accuracies of diagnosis for two spine surgeons were 77.4% and 76.1%, respectively. We demonstrated accurate, automated detection and diagnosis of spinal schwannomas and meningiomas using the developed deep learning-based method with MRI. This system could be valuable in supporting the radiological diagnosis of spinal schwannomas and meningiomas, with the potential to reduce radiologists' overall workload.
Affiliation(s)
- Sadayuki Ito
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Hiroaki Nakashima
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Naoki Segi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Jun Ouchida
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Masahiro Oda
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya 464-8601, Japan
- Ippei Yamauchi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Ryotaro Oishi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Yuichi Miyairi
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
- Kensaku Mori
- Information Strategy Office, Information and Communications, Nagoya University, Nagoya 464-8601, Japan
- Department of Intelligent Systems, Nagoya University Graduate School of Informatics, Nagoya 464-8601, Japan
- Research Center for Medical Bigdata, National Institute of Informatics, Tokyo 101-8430, Japan
- Shiro Imagama
- Department of Orthopedic Surgery, Nagoya University Graduate School of Medicine, Nagoya 466-8560, Japan
8
Koechli C, Zwahlen DR, Schucht P, Windisch P. Radiomics and machine learning for predicting the consistency of benign tumors of the central nervous system: A systematic review. Eur J Radiol 2023; 164:110866. [PMID: 37207398 DOI: 10.1016/j.ejrad.2023.110866] [Received: 03/14/2023] [Revised: 04/28/2023] [Accepted: 05/03/2023] [Indexed: 05/21/2023]
Abstract
PURPOSE Predicting the consistency of benign central nervous system (CNS) tumors prior to surgery helps to improve surgical outcomes. This review summarizes and analyzes the literature on using radiomics and/or machine learning (ML) for consistency prediction. METHOD The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was screened for studies published in English from January 1, 2000. Data were extracted according to the PRISMA guidelines, and the quality of the studies was assessed in compliance with the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. RESULTS Eight publications were included, focusing on pituitary macroadenomas (n = 5), pituitary adenomas (n = 1), and meningiomas (n = 2), using retrospective (n = 6), prospective (n = 1), and unknown (n = 1) study designs with a total of 763 patients for consistency prediction. The studies reported an area under the curve (AUC) of 0.71-0.99 for their respective best performing models. Of all studies, four articles validated their models internally, whereas none validated their models externally. Two articles stated that data were available on request; the remaining publications lacked information regarding data availability. CONCLUSIONS Research on consistency prediction of CNS tumors using radiomics and different ML techniques is still at an early stage. Best-practice procedures for radiomics and ML need to be followed more rigorously to facilitate comparison between publications and, accordingly, possible implementation into clinical practice in the future.
Affiliation(s)
- Carole Koechli
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland; Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Daniel R Zwahlen
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
- Philippe Schucht
- Universitätsklinik für Neurochirurgie, Bern University Hospital, 3010 Bern, Switzerland
- Paul Windisch
- Department of Radiation Oncology, Kantonsspital Winterthur, 8401 Winterthur, Switzerland
9
Wang W, Wang Y. Deep Learning-Based Modified YOLACT Algorithm on Magnetic Resonance Imaging Images for Screening Common and Difficult Samples of Breast Cancer. Diagnostics (Basel) 2023; 13:1582. [PMID: 37174975 PMCID: PMC10177566 DOI: 10.3390/diagnostics13091582] [Received: 02/21/2023] [Revised: 03/27/2023] [Accepted: 04/09/2023] [Indexed: 05/15/2023]
Abstract
Computer-aided methods have been extensively applied to diagnosing breast lesions with magnetic resonance imaging (MRI), but fully automatic diagnosis using deep learning is rarely documented. In this work, deep-learning-based artificial intelligence (AI) was used to classify and diagnose breast cancer from MRI images. Breast cancer MRI images from the RIDER Breast MRI public dataset were converted into processable joint photographic experts group (JPG) format images, and the location and shape of each lesion area were labeled using the Labelme software. A difficult-sample mining mechanism was introduced into the YOLACT algorithm to produce a modified YOLACT model, whose diagnostic efficacy was compared with the Mask R-CNN model. The deep learning framework was based on PyTorch version 1.0. A total of 4400 labeled images with clear lesions were designated normal samples, and 1600 images with blurred lesion areas were designated difficult samples. The modified YOLACT model achieved higher accuracy and better classification performance than the original YOLACT model: the difficult-sample mining mechanism improved detection accuracy by nearly 3% on both common and difficult sample images. Compared with Mask R-CNN, the modified model was still faster, with no obvious difference in recognition accuracy. The modified YOLACT algorithm achieved a classification accuracy of 98.5% on the common sample test set and 93.6% on difficult samples. In summary, we constructed a modified YOLACT model that is superior to the original YOLACT model in diagnostic and classification accuracy.
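A difficult-sample mining mechanism of the general kind described above can be sketched as ranking samples by training loss and oversampling the hardest ones in the next epoch. The loss values, sampling fraction, and repeat count below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: hard-example mining by per-sample training loss.
# Loss values and hyperparameters are illustrative stand-ins.

def mine_hard_examples(losses, fraction=0.25):
    """Return indices of the hardest `fraction` of samples, highest loss first."""
    k = max(1, int(len(losses) * fraction))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return ranked[:k]

def build_next_epoch(sample_ids, losses, repeats=2):
    """Duplicate hard examples so the sampler sees each `repeats` times."""
    hard = set(mine_hard_examples(losses))
    batch = list(sample_ids)
    for i in hard:
        batch.extend([sample_ids[i]] * (repeats - 1))
    return batch

ids = ["img_a", "img_b", "img_c", "img_d"]
losses = [0.1, 2.3, 0.4, 0.2]          # img_b is the "difficult" sample
epoch = build_next_epoch(ids, losses)  # img_b appears twice next epoch
```

The intuition matches the study's design: blurred-lesion images that the model handles poorly get proportionally more gradient updates, which is where the reported ~3% accuracy gain plausibly comes from.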
Affiliation(s)
- Wei Wang
- College of Computer Science and Technology, Guizhou University, Guiyang 550001, China
- Institute for Artificial Intelligence, Guizhou University, Guiyang 550001, China
- Guizhou Provincial People's Hospital, Guiyang 550001, China
- Yisong Wang
- College of Computer Science and Technology, Guizhou University, Guiyang 550001, China
- Institute for Artificial Intelligence, Guizhou University, Guiyang 550001, China
10
Petragallo R, Bertram P, Halvorsen P, Iftimia I, Low DA, Morin O, Narayanasamy G, Saenz DL, Sukumar KN, Valdes G, Weinstein L, Wells MC, Ziemer BP, Lamb JM. Development and multi-institutional validation of a convolutional neural network to detect vertebral body mis-alignments in 2D x-ray setup images. Med Phys 2023; 50:2662-2671. [PMID: 36908243 DOI: 10.1002/mp.16359] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 02/11/2023] [Accepted: 02/16/2023] [Indexed: 03/14/2023] Open
Abstract
BACKGROUND Misalignment to the incorrect vertebral body remains a rare but serious patient safety risk in image-guided radiotherapy (IGRT). PURPOSE Our group has proposed that an automated image-review algorithm be inserted into the IGRT process as an interlock to detect off-by-one vertebral body errors. This study presents the development and multi-institutional validation of a convolutional neural network (CNN)-based approach for such an algorithm using patient image data from a planar stereoscopic x-ray IGRT system. METHODS X-rays and digitally reconstructed radiographs (DRRs) were collected from 429 spine radiotherapy patients (1592 treatment fractions) treated at six institutions using a stereoscopic x-ray image guidance system. Clinically applied, physician-approved alignments were used for true-negative, "no-error" cases. "Off-by-one vertebral body" errors were simulated by translating DRRs along the spinal column using a semi-automated method. A leave-one-institution-out approach was used to estimate model accuracy on data from unseen institutions as follows: all of the images from five of the institutions were used to train a CNN model from scratch using a fixed network architecture and hyperparameters. The size of this training set ranged from 5700 to 9372 images, depending on exactly which five institutions were contributing data. The training set was randomized and split 75/25 into the final training and validation sets. X-ray/DRR image pairs and the associated binary labels of "no-error" or "shift" were used as the model input. Model accuracy was evaluated using images from the sixth institution, which were left out of the training phase entirely. This test set ranged from 180 to 3852 images, again depending on which institution had been left out of the training phase.
The trained model was used to classify the images from the test set as either "no-error" or "shifted", and the model predictions were compared to the ground truth labels to assess the model accuracy. This process was repeated until each institution's images had been used as the testing dataset. RESULTS When the six models were used to classify unseen image pairs from the institution left out during training, the resulting receiver operating characteristic area under the curve values ranged from 0.976 to 0.998. With the specificity fixed at 99%, the corresponding sensitivities ranged from 61.9% to 99.2% (mean: 77.6%). With the specificity fixed at 95%, sensitivities ranged from 85.5% to 99.8% (mean: 92.9%). CONCLUSION This study demonstrated the CNN-based vertebral body misalignment model is robust when applied to previously unseen test data from an outside institution, indicating that this proposed additional safeguard against misalignment is feasible.
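The operating points quoted above (sensitivity at a fixed 99% or 95% specificity) can be read directly off per-image classifier scores: pick the threshold that keeps the target fraction of "no-error" cases below it, then measure the fraction of "shift" cases above it. A minimal sketch, assuming hypothetical score lists for the two classes rather than the study's actual data:

```python
def sensitivity_at_specificity(neg_scores, pos_scores, target_specificity):
    """Sensitivity at a fixed specificity, from per-image classifier scores.

    The threshold is chosen so that at least target_specificity of the
    negative ("no-error") cases score at or below it; sensitivity is the
    fraction of positive ("shift") cases scoring above it.
    """
    neg = sorted(neg_scores)
    cut = neg[min(len(neg) - 1, int(target_specificity * len(neg)))]
    return sum(1 for s in pos_scores if s > cut) / len(pos_scores)

# Hypothetical scores: 10 negatives spread over [0, 0.9], 4 positives.
neg_scores = [i / 10 for i in range(10)]
pos_scores = [0.5, 0.95, 0.99, 0.8]
sens = sensitivity_at_specificity(neg_scores, pos_scores, 0.9)  # → 0.5
```

Sweeping the threshold over all scores in this way traces out the full ROC curve whose areas under the curve (0.976 to 0.998) are reported in the results.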
Affiliation(s)
- Rachel Petragallo
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
- Per Halvorsen
- Department of Radiation Oncology, Beth Israel - Lahey Health, Burlington, Massachusetts, USA
- Ileana Iftimia
- Department of Radiation Oncology, Beth Israel - Lahey Health, Burlington, Massachusetts, USA
- Daniel A Low
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
- Olivier Morin
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Ganesh Narayanasamy
- Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Daniel L Saenz
- Department of Radiation Oncology, University of Texas HSC SA, San Antonio, Texas, USA
- Kevinraj N Sukumar
- Department of Radiation Oncology, Piedmont Healthcare, Atlanta, Georgia, USA
- Gilmer Valdes
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Lauren Weinstein
- Department of Radiation Oncology, Kaiser Permanente, South San Francisco, California, USA
- Michelle C Wells
- Department of Radiation Oncology, Piedmont Healthcare, Atlanta, Georgia, USA
- Benjamin P Ziemer
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- James M Lamb
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
11
Katsos K, Johnson SE, Ibrahim S, Bydon M. Current Applications of Machine Learning for Spinal Cord Tumors. Life (Basel) 2023; 13:life13020520. [PMID: 36836877 PMCID: PMC9962966 DOI: 10.3390/life13020520] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 02/01/2023] [Accepted: 02/06/2023] [Indexed: 02/16/2023] Open
Abstract
Spinal cord tumors constitute a diverse group of rare neoplasms associated with significant mortality and morbidity that pose unique clinical and surgical challenges. Diagnostic accuracy and outcome prediction are critical for informed decision making and can promote personalized medicine and facilitate optimal patient management. Machine learning has the ability to analyze and combine vast amounts of data, allowing the identification of patterns and the establishment of clinical associations, which can ultimately enhance patient care. Although artificial intelligence techniques have been explored in other areas of spine surgery, such as spinal deformity surgery, precise machine learning models for spinal tumors are lagging behind. Current applications of machine learning in spinal cord tumors include algorithms that improve diagnostic precision by predicting genetic, molecular, and histopathological profiles. Furthermore, artificial intelligence-based systems can assist surgeons with preoperative planning and surgical resection, potentially reducing the risk of recurrence and consequently improving clinical outcomes. Machine learning algorithms promote personalized medicine by enabling prognostication and risk stratification based on accurate predictions of treatment response, survival, and postoperative complications. Despite their promising potential, machine learning models require extensive validation processes and quality assessments to ensure safe and effective translation to clinical practice.
Affiliation(s)
- Konstantinos Katsos
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Sarah E. Johnson
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Sufyan Ibrahim
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Mohamad Bydon
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, MN 55902, USA
- Correspondence:
12
Tian G, Xu D, He Y, Chai W, Deng Z, Cheng C, Jin X, Wei G, Zhao Q, Jiang T. Deep learning for real-time auxiliary diagnosis of pancreatic cancer in endoscopic ultrasonography. Front Oncol 2022; 12:973652. [PMID: 36276094 PMCID: PMC9586286 DOI: 10.3389/fonc.2022.973652] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 09/21/2022] [Indexed: 11/13/2022] Open
Abstract
In recent years, deep learning has played an important role in the detection of cancers. This study aimed to differentiate pancreatic cancer (PC) lesions from non-pancreatic-cancer (NPC) lesions in real time on endoscopic ultrasonography (EUS) images. A total of 1213 EUS images from 157 patients (99 male, 58 female) with pancreatic disease were used for the training, validation, and test groups. Before model training, regions of interest (ROIs) were manually drawn to mark the PC and NPC lesions using the Labelimage software. YOLOv5m was used as the algorithm model to automatically detect the presence of pancreatic lesions. After training the model on EUS images using YOLOv5, the parameters converged within 300 rounds (GIoU loss: 0.01532, objectness loss: 0.01247, precision: 0.713, recall: 0.825). For the validation group, mAP@0.5 was 0.831 and mAP@.5:.95 was 0.512. In addition, receiver operating characteristic (ROC) curve analysis showed a trend toward a higher area under the curve (AUC) for the model, 0.85 (0.665 to 0.956), than the AUC of 0.838 (0.65 to 0.949) achieved by physicians using EUS detection without puncture, although pairwise comparison of the ROC curves showed that the difference between the two groups was not significant (z = 0.15, p = 0.8804). This study suggests that YOLOv5m can generate attractive results and allow real-time decision support for distinguishing PC from NPC lesions.
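The mAP@0.5 and mAP@.5:.95 metrics above both rest on intersection-over-union (IoU) between a predicted bounding box and the ground-truth box: at mAP@0.5, a detection counts as correct when IoU reaches at least 0.5. A minimal sketch of the IoU computation (illustrative, not taken from the study's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is a true positive for mAP@0.5 when IoU >= 0.5 with ground truth.
same = iou((0, 0, 2, 2), (0, 0, 2, 2))     # identical boxes → 1.0
overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # partial overlap → 1/7
```

mAP@.5:.95 simply averages the precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two numbers.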
Affiliation(s)
- Guo Tian
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Pulsed Power Translational Medicine of Zhejiang Province, Hangzhou, China
- State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Danxia Xu
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Pulsed Power Translational Medicine of Zhejiang Province, Hangzhou, China
- Yinghua He
- Department of Clinical Pharmacy, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhejiang Provincial Key Laboratory for Drug Evaluation and Clinical Research, Hangzhou, China
- Weilu Chai
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Pulsed Power Translational Medicine of Zhejiang Province, Hangzhou, China
- Zhuang Deng
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Chao Cheng
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xinyan Jin
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Guyue Wei
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Qiyu Zhao
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Pulsed Power Translational Medicine of Zhejiang Province, Hangzhou, China
- Tianan Jiang
- Department of Ultrasound Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Key Laboratory of Pulsed Power Translational Medicine of Zhejiang Province, Hangzhou, China
- Zhejiang University Cancer Center, Hangzhou, China
- *Correspondence: Tianan Jiang,
13
Qu B, Cao J, Qian C, Wu J, Lin J, Wang L, Ou-Yang L, Chen Y, Yan L, Hong Q, Zheng G, Qu X. Current development and prospects of deep learning in spine image analysis: a literature review. Quant Imaging Med Surg 2022; 12:3454-3479. [PMID: 35655825 PMCID: PMC9131328 DOI: 10.21037/qims-21-939] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 03/04/2022] [Indexed: 10/07/2023]
Abstract
BACKGROUND AND OBJECTIVE As the spine is pivotal in the support and protection of human bodies, much attention is given to the understanding of spinal diseases. Quick, accurate, and automatic analysis of a spine image greatly enhances the efficiency with which spine conditions can be diagnosed. Deep learning (DL) is a representative artificial intelligence technology that has made encouraging progress in the last 6 years. However, it is still difficult for clinicians and technicians to fully understand this rapidly evolving field due to the diversity of applications, network structures, and evaluation criteria. This study aimed to provide clinicians and technicians with a comprehensive understanding of the development and prospects of DL spine image analysis by reviewing published literature. METHODS A systematic literature search was conducted in the PubMed and Web of Science databases using the keywords "deep learning" and "spine". The search covered the period from 1 January 2015 to 20 March 2021. A total of 79 English articles were reviewed. KEY CONTENT AND FINDINGS The DL technology has been applied extensively to the segmentation, detection, diagnosis, and quantitative evaluation of spine images. It uses static or dynamic image information, as well as local or non-local information. The high accuracy of analysis is comparable to that achieved manually by doctors. However, further exploration is needed in terms of data sharing, functional information, and network interpretability. CONCLUSIONS The DL technique is a powerful method for spine image analysis. We believe that, with the joint efforts of researchers and clinicians, intelligent, interpretable, and reliable DL spine analysis methods will be widely applied in clinical practice in the future.
Affiliation(s)
- Biao Qu
- Department of Instrumental and Electrical Engineering, Xiamen University, Xiamen, China
- Jianpeng Cao
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Chen Qian
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Jinyu Wu
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Jianzhong Lin
- Department of Radiology, Zhongshan Hospital of Xiamen University, Xiamen, China
- Liansheng Wang
- Department of Computer Science, School of Informatics, Xiamen University, Xiamen, China
- Lin Ou-Yang
- Department of Medical Imaging of Southeast Hospital, Medical College of Xiamen University, Zhangzhou, China
- Yongfa Chen
- Department of Pediatric Orthopedic Surgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Liyue Yan
- Department of Information & Computational Mathematics, Xiamen University, Xiamen, China
- Qing Hong
- Biomedical Intelligent Cloud R&D Center, China Mobile Group, Xiamen, China
- Gaofeng Zheng
- Department of Instrumental and Electrical Engineering, Xiamen University, Xiamen, China
- Xiaobo Qu
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
14
Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:cancers14112676. [PMID: 35681655 PMCID: PMC9179850 DOI: 10.3390/cancers14112676] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 05/18/2022] [Accepted: 05/26/2022] [Indexed: 11/20/2022] Open
Abstract
Simple Summary: Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases.
Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
15
Primary Benign Tumors of the Spinal Canal. World Neurosurg 2022; 164:178-198. [PMID: 35552036 DOI: 10.1016/j.wneu.2022.04.135] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 04/29/2022] [Accepted: 04/30/2022] [Indexed: 11/23/2022]
Abstract
Benign tumors that grow in the spinal canal are heterogeneous neoplasms with low incidence; of these, meningiomas and nerve sheath tumors (neurofibromas and schwannomas) account for 60%-70% of all primary spinal tumors. Benign spinal canal tumors provoke nonspecific clinical manifestations, mostly related to the affected level of the spinal cord. These tumors present a challenge for patients and healthcare professionals because they are often difficult to diagnose and carry a high frequency of posttreatment complications. In this review, we describe the epidemiology, risk factors, clinical features, diagnosis, histopathology, molecular biology, and treatment of extramedullary benign meningiomas, osteoid osteomas, osteoblastomas, aneurysmal bone cysts, osteochondromas, neurofibromas, giant cell tumors of the bone, eosinophilic granulomas, hemangiomas, lipomas, and schwannomas located in the spine, as well as possible future targets that could lead to an improvement in their management.
16
Ouyang H, Meng F, Liu J, Song X, Li Y, Yuan Y, Wang C, Lang N, Tian S, Yao M, Liu X, Yuan H, Jiang S, Jiang L. Evaluation of Deep Learning-Based Automated Detection of Primary Spine Tumors on MRI Using the Turing Test. Front Oncol 2022; 12:814667. [PMID: 35359400 PMCID: PMC8962659 DOI: 10.3389/fonc.2022.814667] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/16/2022] [Indexed: 01/04/2023] Open
Abstract
Background: Recently, the Turing test has been used to investigate whether machines have intelligence similar to humans. Our study aimed to assess the ability of an artificial intelligence (AI) system for spine tumor detection using the Turing test.
Methods: Our retrospective study data included 12179 images from 321 patients for developing the AI detection system and 6635 images from 187 patients for the Turing test. We utilized a deep learning-based tumor detection system with Faster R-CNN architecture, which generates region proposals with a Region Proposal Network in the first stage and corrects the position and size of the bounding box of the lesion area in the second stage. Each choice question featured four bounding boxes enclosing an identical tumor: three were detected by the proposed deep learning model, whereas the fourth was annotated by a doctor. The questions were shown to six doctors as respondents. If a respondent did not correctly identify the image annotated by a human, the answer was considered a misclassification. If all misclassification rates were >30%, the respondents were considered unable to distinguish the AI-detected tumor from the human-annotated one, indicating that the AI system passed the Turing test.
Results: The average misclassification rates in the Turing test were 51.2% (95% CI: 45.7%-57.5%) in the axial view (maximum 62%, minimum 44%) and 44.5% (95% CI: 38.2%-51.8%) in the sagittal view (maximum 59%, minimum 36%). The misclassification rates of all six respondents were >30%; therefore, our AI system passed the Turing test.
Conclusion: Our proposed intelligent spine tumor detection system has a detection ability similar to that of annotating doctors and may be an efficient tool to assist radiologists or orthopedists in primary spine tumor detection.
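The pass criterion described above (every respondent's misclassification rate above 30%) is simple to express directly. A minimal sketch; the per-respondent rates below are hypothetical values chosen to be consistent with the reported axial-view range (minimum 44%, maximum 62%), not the study's actual data:

```python
def passes_turing_test(misclassification_rates, threshold=0.30):
    """The AI 'passes' when every respondent misidentifies the human
    annotation more than `threshold` of the time, i.e. no respondent can
    reliably tell the AI-detected boxes from the human-annotated one."""
    return all(rate > threshold for rate in misclassification_rates)

# Hypothetical axial-view rates for six respondents: all exceed 30%.
axial_rates = [0.62, 0.51, 0.50, 0.48, 0.45, 0.44]
result = passes_turing_test(axial_rates)  # → True
```

Note the criterion is a conjunction over respondents: a single respondent who reliably spots the human annotation (rate at or below 30%) is enough to fail the system.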
Affiliation(s)
- Hanqiang Ouyang
- Department of Orthopaedics, Peking University Third Hospital, Beijing, China
- Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
- Beijing Key Laboratory of Spinal Disease Research, Beijing, China
- Fanyu Meng
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Jianfang Liu
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Xinhang Song
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Yuan Li
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Yuan Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Chunjie Wang
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Ning Lang
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Shuai Tian
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Meiyi Yao
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiaoguang Liu
- Department of Orthopaedics, Peking University Third Hospital, Beijing, China
- Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
- Beijing Key Laboratory of Spinal Disease Research, Beijing, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
- *Correspondence: Huishu Yuan, ; Shuqiang Jiang, ; Liang Jiang,
- Shuqiang Jiang
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- *Correspondence: Huishu Yuan, ; Shuqiang Jiang, ; Liang Jiang,
- Liang Jiang
- Department of Orthopaedics, Peking University Third Hospital, Beijing, China
- Engineering Research Center of Bone and Joint Precision Medicine, Beijing, China
- Beijing Key Laboratory of Spinal Disease Research, Beijing, China
- *Correspondence: Huishu Yuan, ; Shuqiang Jiang, ; Liang Jiang,