1
Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024; 16:e51963. [PMID: 38333513] [PMCID: PMC10851045] [DOI: 10.7759/cureus.51963] [Received: 08/27/2023] [Accepted: 01/08/2024]
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah
- School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
2
Wang C, Ni M, Tian S, Ouyang H, Liu X, Fan L, Dong P, Jiang L, Lang N, Yuan H. Deep learning model for measuring the sagittal Cobb angle on cervical spine computed tomography. BMC Med Imaging 2023; 23:196. [PMID: 38017414] [PMCID: PMC10685593] [DOI: 10.1186/s12880-023-01156-6] [Received: 09/05/2023] [Accepted: 11/15/2023]
Abstract
PURPOSE To develop a deep learning (DL) model to measure the sagittal Cobb angle of the cervical spine on computed tomography (CT).
MATERIALS AND METHODS Two VB-Net-based DL models for cervical vertebra segmentation and key-point detection were developed. Four-points and line-fitting methods were used to calculate the sagittal Cobb angle automatically. The average of the sagittal Cobb angles manually measured by two doctors served as the reference standard. The percentage of correct key points (PCK), matched-samples t test, intraclass correlation coefficient (ICC), Pearson correlation coefficient (r), mean absolute error (MAE), and Bland-Altman plots were used to evaluate the performance of the DL model and its robustness and generalization on the external test set.
RESULTS A total of 991 patients were included in the internal data set and 112 patients in the external data set. The PCK of the DL model ranged from 78% to 100% in the test set. The sagittal Cobb angles measured by the four-points method, the line-fitting method, and the reference standard were -1.10 ± 18.29°, 0.30 ± 13.36°, and 0.50 ± 12.83° in the internal test set and 4.55 ± 20.01°, 3.66 ± 18.55°, and 1.83 ± 12.02° in the external test set, respectively. The sagittal Cobb angles calculated by the four-points and line-fitting methods showed high consistency with the reference standard (internal test set: ICC = 0.75 and 0.97, r = 0.64 and 0.94, MAE = 5.42° and 3.23°, respectively; external test set: ICC = 0.74 and 0.80, r = 0.66 and 0.974, MAE = 5.25° and 4.68°, respectively).
CONCLUSIONS The DL model can accurately measure the sagittal Cobb angle of the cervical spine on CT. The line-fitting method shows higher consistency with the doctors' measurements and a smaller mean absolute error.
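Once key points are available, the two angle-calculation strategies named in the abstract reduce to plane geometry: the four-points method takes the angle between the line through two endplate landmarks of the upper vertebra and the line through two landmarks of the lower vertebra, while the line-fitting method first fits a least-squares line through several landmarks per endplate. The sketch below is illustrative only; it does not reproduce the paper's VB-Net pipeline, and all function names and coordinates are assumptions, not from the paper.

```python
import math

def angle_between(m1: float, m2: float) -> float:
    """Acute angle in degrees between two lines given by slopes m1 and m2."""
    return math.degrees(math.atan(abs((m1 - m2) / (1.0 + m1 * m2))))

def cobb_four_points(p1, p2, p3, p4) -> float:
    """Four-points method: angle between line p1-p2 (upper endplate)
    and line p3-p4 (lower endplate); points are (x, y) tuples."""
    m1 = (p2[1] - p1[1]) / (p2[0] - p1[0])
    m2 = (p4[1] - p3[1]) / (p4[0] - p3[0])
    return angle_between(m1, m2)

def fit_slope(points) -> float:
    """Least-squares slope of a line through several (x, y) key points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points)
    var = sum((x - mx) ** 2 for x, _ in points)
    return cov / var

def cobb_line_fitting(upper_points, lower_points) -> float:
    """Line-fitting method: fit a line to each endplate's key points,
    then take the angle between the two fitted lines."""
    return angle_between(fit_slope(upper_points), fit_slope(lower_points))
```

This slope-based formulation assumes neither endplate line is vertical in image coordinates; a production version would work with direction vectors and atan2 instead.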
Affiliation(s)
- Chunjie Wang
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, China
- Ming Ni
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, China
- Shuai Tian
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, China
- Hanqiang Ouyang
- Department of Orthopedics, Peking University Third Hospital, Beijing, 100191, China
- Engineering Research Center of Bone and Joint Precision Medicine, Beijing, 100191, China
- Beijing Key Laboratory of Spinal Disease Research, Beijing, 100191, China
- Xiaoming Liu
- Beijing United Imaging Research Institute of Intelligent Imaging, Beijing, 100089, China
- Lianxi Fan
- United Imaging Intelligence (Beijing) Co., Ltd., Beijing, 100089, China
- Pei Dong
- United Imaging Intelligence (Beijing) Co., Ltd., Beijing, 100089, China
- Liang Jiang
- Department of Orthopedics, Peking University Third Hospital, Beijing, 100191, China
- Engineering Research Center of Bone and Joint Precision Medicine, Beijing, 100191, China
- Beijing Key Laboratory of Spinal Disease Research, Beijing, 100191, China
- Ning Lang
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, China
- Huishu Yuan
- Department of Radiology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, China