1
Dai M, Tiu BC, Schlossman J, Ayobi A, Castineira C, Kiewsky J, Avare C, Chaibi Y, Chang P, Chow D, Soun JE. Validation of a Deep Learning Tool for Detection of Incidental Vertebral Compression Fractures. J Comput Assist Tomogr 2025:00004728-990000000-00417. PMID: 39876529. DOI: 10.1097/rct.0000000000001726.
Abstract
OBJECTIVE This study evaluated the performance of a deep learning-based vertebral compression fracture (VCF) detection tool in patients with incidental VCF. The purpose of this study was to validate this tool across multiple sites and multiple vendors. METHODS This was a retrospective, multicenter, multinational blinded study using anonymized chest and abdominal CT scans performed for indications other than VCF in patients ≥50 years old. Images were obtained from 2 teleradiology companies in France and the United States and were processed by CINA-VCF v1.0, a deep learning algorithm designed for VCF detection. Ground truth was established by majority consensus across 3 board-certified radiologists. Overall performance of CINA-VCF was evaluated, and subset analyses were performed based on imaging acquisition parameters, baseline patient characteristics, and VCF severity. A subgroup was also analyzed and compared with available clinical radiology reports. RESULTS Four hundred seventy-four CT scans were included in this study, comprising 166 (35.0%) positive and 308 (65.0%) negative VCF cases. CINA-VCF demonstrated an area under the curve (AUC) of 0.97 (95% CI: 0.96-0.99), accuracy of 93.7% (95% CI: 91.1%-95.7%), sensitivity of 95.2% (95% CI: 90.7%-97.9%), and specificity of 92.9% (95% CI: 89.4%-96.5%). Subset analysis based on VCF severity resulted in a specificity of 94.2% (95% CI: 90.9%-96.6%) for grade 0 negative cases and a specificity of 64.3% (95% CI: 35.1%-87.2%) for grade 1 negative cases. For grades 2 and 3 positive cases, sensitivity was 89.7% (95% CI: 79.9%-95.8%) and 99.0% (95% CI: 94.4%-100.0%), respectively. CONCLUSIONS CINA-VCF successfully detected incidental VCF and even outperformed clinical reports. Performance was consistent across all subgroups analyzed. Limitations of the tool included confounding pathologies such as Schmorl's nodes and borderline cases.
Despite these limitations, this study validates the applicability and generalizability of the tool in the clinical setting.
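The sensitivity and specificity figures above carry 95% confidence intervals; a minimal sketch of how such binomial interval estimates are computed (Wilson score method). The confusion-matrix counts below are illustrative reconstructions consistent with the reported 166 positive and 308 negative cases, not counts taken from the paper:

```python
import math

def wilson_ci(successes, total, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - half, centre + half)

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity, each with a Wilson 95% CI."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Hypothetical counts chosen to match 166 positives / 308 negatives and to
# roughly reproduce the reported 95.2% sensitivity and 92.9% specificity.
(sens, sens_ci), (spec, spec_ci) = sens_spec(tp=158, fn=8, tn=286, fp=22)
```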
Affiliation(s)
- Michelle Dai
- Irvine School of Medicine, University of California, Irvine, CA
- Touro University Nevada, College of Osteopathic Medicine, Henderson, NV
- Peter Chang
- Department of Radiological Sciences
- Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, CA
- Daniel Chow
- Department of Radiological Sciences
- Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, CA
2
Lee J, Kim M, Park H, Yang Z, Woo OH, Kang WY, Kim JH. Enhanced Detection Performance of Acute Vertebral Compression Fractures Using a Hybrid Deep Learning and Traditional Quantitative Measurement Approach: Beyond the Limitations of Genant Classification. Bioengineering (Basel) 2025; 12:64. PMID: 39851338. PMCID: PMC11761558. DOI: 10.3390/bioengineering12010064.
Abstract
OBJECTIVE This study evaluated the applicability of the classical height loss ratio (HLR) method for identifying major acute compression fractures in clinical practice and compared its performance with deep learning (DL)-based vertebral compression fracture (VCF) detection methods. Additionally, it examined whether combining the HLR with DL approaches could enhance performance, exploring the potential integration of classical and DL methodologies. METHODS Three DL-based detection approaches were evaluated: End-to-End VCF Detection (EEVD), Two-Stage VCF Detection with Segmentation and Detection (TSVD_SD), and Two-Stage VCF Detection with Detection and Classification (TSVD_DC). The models were evaluated on a dataset of 589 patients, focusing on sensitivity, specificity, accuracy, and precision. RESULTS TSVD_SD outperformed all other methods, achieving the highest sensitivity (84.46%) and accuracy (95.05%), making it particularly effective for identifying true positives. The complementary use of DL methods with HLR further improved detection performance. For instance, combining HLR-negative cases with TSVD_SD increased sensitivity to 87.84%, reducing missed fractures, while combining HLR-positive cases with EEVD achieved the highest specificity (99.77%), minimizing false positives. CONCLUSION These findings demonstrate that DL-based approaches, particularly TSVD_SD, provide robust alternatives or complements to traditional methods, significantly enhancing diagnostic accuracy for acute VCFs in clinical practice.
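For context, the classical measurement this study builds on can be sketched as follows. This is one simple HLR variant (most collapsed column relative to the tallest column of the same vertebra) mapped onto Genant-style 20/25/40% thresholds; the paper's exact definition and reference heights may differ:

```python
def height_loss_ratio(anterior, middle, posterior):
    """Fractional height loss of the most collapsed column relative to the
    tallest column of the same vertebra (one simple HLR variant)."""
    heights = [anterior, middle, posterior]
    return 1.0 - min(heights) / max(heights)

def genant_grade(hlr):
    """Map a height loss ratio to a Genant-style semiquantitative grade."""
    if hlr < 0.20:
        return 0   # grade 0: normal
    if hlr < 0.25:
        return 1   # grade 1: mild deformity
    if hlr < 0.40:
        return 2   # grade 2: moderate deformity
    return 3       # grade 3: severe deformity

# Example vertebra with a collapsed anterior column (heights in mm).
grade = genant_grade(height_loss_ratio(anterior=14.0, middle=18.0, posterior=20.0))
```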
Affiliation(s)
- Jemyoung Lee
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea
- ClariPi Research, ClariPi Inc., Seoul 03088, Republic of Korea
- Minbeom Kim
- ClariPi Research, ClariPi Inc., Seoul 03088, Republic of Korea
- Heejun Park
- Department of Radiology, Korea University Guro Hospital, Seoul 08308, Republic of Korea
- Zepa Yang
- Department of Radiology, Korea University Guro Hospital, Seoul 08308, Republic of Korea
- Ok Hee Woo
- Department of Radiology, Korea University Guro Hospital, Seoul 08308, Republic of Korea
- Woo Young Kang
- Department of Radiology, Korea University Guro Hospital, Seoul 08308, Republic of Korea
- Jong Hyo Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea
- ClariPi Research, ClariPi Inc., Seoul 03088, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03080, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon 16229, Republic of Korea
3
Yang Y, Wang Y, Liu T, Wang M, Sun M, Song S, Fan W, Huang G. Anatomical prior-based vertebral landmark detection for spinal disorder diagnosis. Artif Intell Med 2025; 159:103011. PMID: 39612522. DOI: 10.1016/j.artmed.2024.103011.
Abstract
As one of the fundamental ways to interpret spine images, detection of vertebral landmarks is an informative prerequisite for further diagnosis and management of spine disorders such as scoliosis and fractures. Most existing machine learning-based methods for automatic vertebral landmark detection produce overlapping landmarks or abnormally long distances between nearby landmarks that violate anatomical priors, and thus lack sufficient reliability and interpretability. To tackle this problem, this paper systematically utilizes anatomical prior knowledge in vertebral landmark detection. We explicitly formulate anatomical priors of the spine, related to distances among vertebrae and spatial order within the spine, and integrate these geometrical constraints into the training loss, inference procedure, and evaluation metrics. First, we introduce an anatomy-constraint loss that explicitly regularizes the training process with the aforementioned contextual priors. Second, we propose a simple-yet-effective anatomy-aided inference procedure that employs sequential rather than parallel prediction. Third, we provide novel anatomy-related metrics to quantitatively evaluate to what extent landmark predictions follow the anatomical priors, which is not reflected in the widely used landmark localization error metric. We employ the localization framework on 1410 anterior-posterior radiographic images. Compared with competitive baseline models, we achieve superior landmark localization accuracy and comparable Cobb angle estimation for scoliosis assessment. Ablation studies demonstrate the effectiveness of the designed components in decreasing localization error and improving anatomical plausibility. Additionally, we demonstrate effective generalization by transferring our detection method to sagittal 2D slices of CT scans, boosting the performance of downstream vertebra-level compression fracture classification.
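The order and spacing priors described above can be expressed as a differentiable penalty on predicted landmark coordinates. The sketch below is a simplified stand-in under assumed conventions (image y-coordinate increasing top to bottom, penalty against the median inter-vertebral gap), not the paper's actual loss:

```python
import numpy as np

def anatomy_penalty(y, order_weight=1.0, spacing_weight=1.0):
    """Penalty on predicted vertebral-centre y-coordinates (top to bottom):
    the order term fires when a landmark sits above its predecessor, and the
    spacing term penalises gaps far from the median inter-vertebral gap."""
    y = np.asarray(y, dtype=float)
    gaps = np.diff(y)
    # Violations of top-to-bottom ordering (negative gaps), squared.
    order_term = np.square(np.clip(-gaps, 0.0, None)).sum()
    # Deviation of each gap from a robust reference spacing.
    ref = np.median(np.abs(gaps))
    spacing_term = np.square(np.abs(gaps) - ref).mean() if ref > 0 else 0.0
    return order_weight * order_term + spacing_weight * spacing_term

ordered = anatomy_penalty([10, 22, 34, 46, 58])   # anatomically plausible
swapped = anatomy_penalty([10, 34, 22, 46, 58])   # two landmarks swapped
```

A term like this can be added to the usual localization loss so that anatomically implausible configurations are penalised during training.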
Affiliation(s)
- Yukang Yang
- Department of Automation, BNRist, Tsinghua University, Beijing, 100084, China
- Yu Wang
- Department of Orthopaedics, Peking University First Hospital, Beijing, 100034, China
- Tianyu Liu
- Department of Automation, BNRist, Tsinghua University, Beijing, 100084, China
- Miao Wang
- Department of Orthopaedics, Aarhus University Hospital, Aarhus, 8200, Denmark
- Ming Sun
- Department of Orthopaedics, Aarhus University Hospital, Aarhus, 8200, Denmark
- Shiji Song
- Department of Automation, BNRist, Tsinghua University, Beijing, 100084, China
- Wenhui Fan
- Department of Automation, BNRist, Tsinghua University, Beijing, 100084, China
- Gao Huang
- Department of Automation, BNRist, Tsinghua University, Beijing, 100084, China; Beijing Academy of Artificial Intelligence, Beijing, 100084, China
4
Kim S, Kim I, Yuh WT, Han S, Kim C, Ko YS, Cho W, Park SB. Augmented prediction of vertebral collapse after osteoporotic vertebral compression fractures through parameter-efficient fine-tuning of biomedical foundation models. Sci Rep 2024; 14:31820. PMID: 39738257. DOI: 10.1038/s41598-024-82902-w.
Abstract
Vertebral collapse (VC) following osteoporotic vertebral compression fracture (OVCF) often requires aggressive treatment, necessitating an accurate prediction for early intervention. This study aimed to develop a predictive model leveraging deep neural networks to predict VC progression after OVCF using magnetic resonance imaging (MRI) and clinical data. Among 245 enrolled patients with acute OVCF, data from 200 patients were used for the development dataset, and data from 45 patients were used for the test dataset. To construct an accurate prediction model, we explored two backbone architectures: convolutional neural networks and vision transformers (ViTs), along with various pre-trained weights and fine-tuning methods. Through extensive experiments, we built our model by performing parameter-efficient fine-tuning of a ViT model pre-trained on a large-scale biomedical dataset. Attention rollouts indicated that the contours and internal features of the compressed vertebral body were critical in predicting VC with this model. To further improve the prediction performance of our model, we applied the augmented prediction strategy, which uses multiple MRI frames and achieves a significantly higher area under the curve (AUC). Our findings suggest that employing a biomedical foundation model fine-tuned using a parameter-efficient method, along with augmented prediction, can significantly enhance medical decisions.
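The abstract does not name the specific parameter-efficient fine-tuning method used. As one representative technique, a LoRA-style low-rank update freezes the pretrained weight and trains only two small factor matrices; the dimensions and initialization below are illustrative, not taken from the paper:

```python
import numpy as np

def lora_update(W, A, B, alpha=16):
    """Effective weight of a LoRA-adapted linear layer: the frozen base W
    plus a scaled rank-r update B @ A (only A and B are trained)."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

d_out, d_in, r = 768, 768, 8          # ViT-Base-like layer, rank-8 adapter
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in))        # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-initialized
W_eff = lora_update(W, A, B)          # equals W at initialization

full_params = d_out * d_in            # parameters if fully fine-tuned
lora_params = r * d_in + d_out * r    # trainable parameters with LoRA (~2%)
```

With B zero-initialized, the adapted layer starts exactly at the pretrained weights, and only about 2% of the layer's parameters receive gradient updates.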
Affiliation(s)
- Sibeen Kim
- School of Biomedical Engineering, Korea University, Seoul, Republic of Korea
- Inkyeong Kim
- Department of Neurosurgery, Kangwon National University Hospital, Chuncheon-si, Gangwon-do, Republic of Korea
- Department of Neurosurgery, Kangwon National University College of Medicine, Chuncheon-si, Gangwon-do, Republic of Korea
- Woon Tak Yuh
- Department of Neurosurgery, Hallym University College of Medicine, Chuncheon-si, Gangwon-do, Republic of Korea
- Department of Neurosurgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong-si, Gyeonggi-do, Republic of Korea
- Sangmin Han
- Department of Intelligence Convergence, Yonsei University, Seoul, Republic of Korea
- Choonghyo Kim
- Department of Neurosurgery, Kangwon National University Hospital, Chuncheon-si, Gangwon-do, Republic of Korea
- Department of Neurosurgery, Kangwon National University College of Medicine, Chuncheon-si, Gangwon-do, Republic of Korea
- Young San Ko
- Department of Neurosurgery, Kyungpook National University Hospital, 130 Dongdeok-ro, Daegu, 41944, Republic of Korea
- Department of Neurosurgery, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Wonwoo Cho
- Kim Jaechul Graduate School of Artificial Intelligence, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 27, Teheran-ro 2-gil, Gangnam-gu, Seoul, Republic of Korea
- Sung Bae Park
- Department of Medical Device Development, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Neurosurgery, Seoul National Boramae Medical Center, Seoul, Republic of Korea
5
Namireddy SR, Gill SS, Peerbhai A, Kamath AG, Ramsay DSC, Ponniah HS, Salih A, Jankovic D, Kalasauskas D, Neuhoff J, Kramer A, Russo S, Thavarajasingam SG. Artificial intelligence in risk prediction and diagnosis of vertebral fractures. Sci Rep 2024; 14:30560. PMID: 39702597. DOI: 10.1038/s41598-024-75628-2.
Abstract
With the increasing prevalence of vertebral fractures, accurate diagnosis and prognostication are essential. This study assesses the effectiveness of AI in diagnosing and predicting vertebral fractures through a systematic review and meta-analysis. A comprehensive search across major databases selected studies utilizing AI for vertebral fracture diagnosis or prognosis. Out of 14,161 studies initially identified, 79 were included, with 40 undergoing meta-analysis. Diagnostic models were stratified by pathology: non-pathological vertebral fractures, osteoporotic vertebral fractures, and vertebral compression fractures. The primary outcome measure was AUROC. AI showed high accuracy in diagnosing and predicting vertebral fractures: predictive AUROC = 0.82, osteoporotic vertebral fracture diagnosis AUROC = 0.92, non-pathological vertebral fracture diagnosis AUROC = 0.85, and vertebral compression fracture diagnosis AUROC = 0.87, all significant (p < 0.001). Traditional models had the highest median AUROC (0.90) for fracture prediction, while deep learning models excelled in diagnosing all fracture types. High heterogeneity (I² > 99%, p < 0.001) indicated significant variation in model design and performance. AI technologies show considerable promise in improving the diagnosis and prognostication of vertebral fractures, with high accuracy. However, observed heterogeneity and study biases necessitate further research. Future efforts should focus on standardizing AI models and validating them across diverse datasets to ensure clinical utility.
Affiliation(s)
- Srikar R Namireddy
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Saran S Gill
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Amaan Peerbhai
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Abith G Kamath
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Daniele S C Ramsay
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Hariharan Subbiah Ponniah
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Ahmed Salih
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Faculty of Medicine, Imperial College London, London, UK
- Dragan Jankovic
- Department of Neurosurgery, University Medical Center Mainz, Langenbeckstraße 1, Mainz, Germany
- Darius Kalasauskas
- Department of Neurosurgery, University Medical Center Mainz, Langenbeckstraße 1, Mainz, Germany
- Jonathan Neuhoff
- Center for Spinal Surgery and Neurotraumatology, Berufsgenossenschaftliche Unfallklinik Frankfurt am Main, Frankfurt, Germany
- Andreas Kramer
- Department of Neurosurgery, University Medical Center Mainz, Langenbeckstraße 1, Mainz, Germany
- Salvatore Russo
- Department of Neurosurgery, Imperial College Healthcare NHS Trust, London, UK
- Santhosh G Thavarajasingam
- Imperial Brain & Spine Initiative, Imperial College London, London, UK
- Department of Neurosurgery, University Medical Center Mainz, Langenbeckstraße 1, Mainz, Germany
6
Syed S, Ahmed R, Iqbal A, Ahmad N, Alshara MA. MediScan: A Framework of U-Health and Prognostic AI Assessment on Medical Imaging. J Imaging 2024; 10:322. PMID: 39728219. DOI: 10.3390/jimaging10120322.
Abstract
With technological advancements, remarkable progress has been made in the convergence of health sciences and Artificial Intelligence (AI). Modern health systems have been proposed to ease patient diagnostics; the challenge, however, is to provide AI-based precautions to patients and doctors for more accurate risk assessment. The proposed healthcare system aims to integrate patients, doctors, laboratories, pharmacies, and administrative personnel use cases and their primary functions onto a single platform. The proposed framework can also process microscopic images, CT scans, X-rays, and MRI to classify malignancy and give doctors a set of AI precautions for patient risk assessment. The framework incorporates various deep convolutional neural network (DCNN) models for identifying different forms of tumors and fractures in the human body (brain, bones, lungs, kidneys, and skin) and generates precautions with the help of a fine-tuned large language model (LLM), Generative Pre-trained Transformer 4 (GPT-4). With enough training data, DCNNs can learn highly representative, data-driven, hierarchical image features. The GPT-4 model was selected for generating precautions because of its explanation, reasoning, memory, and accuracy on prior medical assessments and research studies. The classification models are evaluated by classification report (recall, precision, F1 score, support, and accuracy with macro and weighted averages) and confusion matrix, and show robust performance compared with conventional schemes.
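All of the classification-report metrics listed above derive from the confusion matrix; a minimal sketch of that derivation (the 2-class counts below are hypothetical):

```python
def classification_report(confusion):
    """Per-class precision, recall, F1, and support from a square confusion
    matrix (rows = true class, columns = predicted class)."""
    n = len(confusion)
    report = {}
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp   # predicted c, wrong
        fn = sum(confusion[c]) - tp                        # true c, missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        report[c] = {"precision": precision, "recall": recall,
                     "f1": f1, "support": sum(confusion[c])}
    return report

# Hypothetical binary task (e.g. tumor vs. no tumor), 100 cases per class.
rep = classification_report([[90, 10], [5, 95]])
```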
Affiliation(s)
- Sibtain Syed
- School of Computing Sciences, Pak-Austria Fachhochschule Institute of Applied Sciences and Technology (PAF-IAST), Mang, Haripur 22621, Khyber Pakhtunkhwa, Pakistan
- Rehan Ahmed
- School of Computing Sciences, Pak-Austria Fachhochschule Institute of Applied Sciences and Technology (PAF-IAST), Mang, Haripur 22621, Khyber Pakhtunkhwa, Pakistan
- Arshad Iqbal
- Sino-Pak Center for Artificial Intelligence (SPCAI), Pak-Austria Fachhochschule Institute of Applied Sciences and Technology, Mang, Haripur 22621, Khyber Pakhtunkhwa, Pakistan
- Naveed Ahmad
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Mohammed Ali Alshara
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
7
Li Y, Liang Z, Li Y, Cao Y, Zhang H, Dong B. Machine learning value in the diagnosis of vertebral fractures: A systematic review and meta-analysis. Eur J Radiol 2024; 181:111714. PMID: 39241305. DOI: 10.1016/j.ejrad.2024.111714.
Abstract
PURPOSE To evaluate the diagnostic accuracy of machine learning (ML) in detecting vertebral fractures, considering varying fracture classifications, patient populations, and imaging approaches. METHOD A systematic review and meta-analysis were conducted by searching PubMed, Embase, Cochrane Library, and Web of Science up to December 31, 2023, for studies using ML for vertebral fracture diagnosis. Risk of bias was assessed using QUADAS-2. A bivariate mixed-effects model was used for the meta-analysis. Meta-analyses were performed for five task types (vertebral fractures, osteoporotic vertebral fractures, differentiation of benign and malignant vertebral fractures, differentiation of acute and chronic vertebral fractures, and prediction of vertebral fractures). Subgroup analyses were conducted by model type (conventional ML and deep learning, DL) and by modeling input (CT, X-ray, MRI, and clinical features). RESULTS Eighty-one studies were included. ML demonstrated a diagnostic sensitivity of 0.91 and specificity of 0.95 for vertebral fractures. Subgroup analysis showed that DL (SROC 0.98) and CT (SROC 0.98) performed best overall. For osteoporotic fractures, ML showed a sensitivity of 0.93 and specificity of 0.96, with DL (SROC 0.99) and X-ray (SROC 0.99) performing better. For differentiating benign from malignant fractures, ML achieved a sensitivity of 0.92 and specificity of 0.93, with DL (SROC 0.96) and MRI (SROC 0.97) performing best. For differentiating acute from chronic vertebral fractures, ML showed a sensitivity of 0.92 and specificity of 0.93, with conventional ML (SROC 0.96) and CT (SROC 0.97) performing best. For predicting vertebral fractures, ML had a sensitivity of 0.76 and specificity of 0.87, with conventional ML (SROC 0.80) and clinical features (SROC 0.86) performing better. CONCLUSIONS ML, especially DL models applied to CT, MRI, and X-ray, shows high diagnostic accuracy for vertebral fractures.
ML also effectively predicts osteoporotic vertebral fractures, aiding in tailored prevention strategies. Further research and validation are required to confirm ML's clinical efficacy.
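As a reminder of what the pooled discrimination figures above summarize, a single AUROC equals the probability that a randomly chosen positive case scores above a randomly chosen negative one; a minimal rank-based (Mann-Whitney) computation on toy scores:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive outranks a random negative
    (ties counted as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted scores and ground-truth labels (1 = fracture).
auc = auroc([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```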
Affiliation(s)
- Yue Li
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Zhuang Liang
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Yingchun Li
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Yang Cao
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Hui Zhang
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Bo Dong
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
8
Zhang JY, Yang JM, Wang XM, Wang HL, Zhou H, Yan ZN, Xie Y, Liu PR, Hao ZW, Ye ZW. Application and Prospects of Deep Learning Technology in Fracture Diagnosis. Curr Med Sci 2024; 44:1132-1140. PMID: 39551854. DOI: 10.1007/s11596-024-2928-5.
Abstract
Artificial intelligence (AI) is an interdisciplinary field that combines computer technology, mathematics, and several other fields. Recently, with the rapid development of machine learning (ML) and deep learning (DL), significant progress has been made in the field of AI. As one of the fastest-growing branches, DL can effectively extract features from big data and optimize the performance of various tasks. Moreover, with advancements in digital imaging technology, DL has become a key tool for processing high-dimensional medical image data and conducting medical image analysis in clinical applications. With the development of this technology, the diagnosis of orthopedic diseases has undergone significant changes. In this review, we describe recent research progress on DL in fracture diagnosis and discuss the value of DL in this field, providing a reference for better integration and development of DL technology in orthopedics.
Affiliation(s)
- Jia-Yao Zhang
- Department of Orthopedics, Fuzhou University Affiliated Provincial Hospital, Fuzhou, 350013, China
- Department of Orthopedics, Fujian Provincial Hospital, Fuzhou, 350013, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Jia-Ming Yang
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Xin-Meng Wang
- Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences, Dali University, Dali, 671000, China
- Hong-Lin Wang
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Hong Zhou
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Zi-Neng Yan
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Yi Xie
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Peng-Ran Liu
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Zhi-Wei Hao
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China
- Zhe-Wei Ye
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
9
Tian J, Wang K, Wu P, Li J, Zhang X, Wang X. Development of a deep learning model for detecting lumbar vertebral fractures on CT images: An external validation. Eur J Radiol 2024; 180:111685. PMID: 39197270. DOI: 10.1016/j.ejrad.2024.111685.
Abstract
OBJECTIVE To develop and externally validate a binary classification model for lumbar vertebral body fractures based on CT images using deep learning methods. METHODS This study involved data collection from two hospitals for AI model training and external validation. In Cohort A from Hospital 1, CT images from 248 patients, comprising 1508 vertebrae, revealed that 20.9% had fractures (315 vertebrae) and 79.1% were non-fractured (1193 vertebrae). In Cohort B from Hospital 2, CT images from 148 patients, comprising 887 vertebrae, indicated that 14.8% had fractures (131 vertebrae) and 85.2% were non-fractured (756 vertebrae). The AI model for lumbar spine fractures comprised two stages: vertebral body segmentation and fracture classification. The first stage utilized a 3D V-Net convolutional deep neural network, which produced a 3D segmentation map. From this map, the region of each vertebral body was extracted and then input into the second stage of the algorithm. The second stage employed a 3D ResNet convolutional deep neural network to classify each proposed region as positive (fractured) or negative (not fractured). RESULTS The AI model's accuracy for detecting vertebral fractures in Cohort A's training set (n = 1199), validation set (n = 157), and test set (n = 152) was 100.0%, 96.2%, and 97.4%, respectively. For Cohort B (n = 148), the accuracy was 96.3%. The area under the receiver operating characteristic curve (AUC-ROC) values for the training, validation, and test sets of Cohort A, as well as Cohort B, and their 95% confidence intervals (CIs) were 1.000 (1.000, 1.000), 0.978 (0.944, 1.000), 0.986 (0.969, 1.000), and 0.981 (0.970, 0.992), respectively. The corresponding area under the precision-recall curve (AUC-PR) values were 1.000 (0.996, 1.000), 0.964 (0.927, 0.985), 0.907 (0.924, 0.984), and 0.890 (0.846, 0.971).
According to the DeLong test, there was no significant difference in the AUC-ROC values between the test set of Cohort A and Cohort B, both for the overall data and for each specific vertebral location (all P>0.05). CONCLUSION The developed model demonstrates promising diagnostic accuracy and applicability for detecting lumbar vertebral fractures.
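The two-stage flow described above (segmentation map, then per-vertebra region classification) can be sketched with a toy stand-in. Both `vertebra_regions` and the size-based `classify` heuristic below are illustrative placeholders for the study's V-Net and 3D ResNet, which operate on CT intensity volumes rather than label maps:

```python
import numpy as np

def vertebra_regions(seg):
    """Stage 1 output to stage 2 input: from a labelled segmentation map
    (0 = background, k = k-th vertebra), crop one bounding box per vertebra."""
    regions = {}
    for label in np.unique(seg):
        if label == 0:
            continue
        zs, ys, xs = np.nonzero(seg == label)
        regions[int(label)] = seg[zs.min():zs.max() + 1,
                                  ys.min():ys.max() + 1,
                                  xs.min():xs.max() + 1]
    return regions

def classify(region):
    """Stand-in for the 3D ResNet classifier: flags a vertebra whose cropped
    volume is unusually small (toy heuristic, not the real model)."""
    return region.size < 18

# Toy 4x4x4 volume containing two "vertebrae".
seg = np.zeros((4, 4, 4), dtype=int)
seg[0:2, 0:3, 0:3] = 1            # larger vertebra
seg[2:3, 0:2, 0:2] = 2            # smaller, "collapsed" vertebra
flags = {k: classify(r) for k, r in vertebra_regions(seg).items()}
```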
Affiliation(s)
- Jingyi Tian
- Department of Radiology, Peking University First Hospital, Beijing, China; Department of Radiology, Beijing Water Conservancy Hospital, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Pengsheng Wu
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jialun Li
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
10
Hu Z, Patel M, Ball RL, Lin HM, Prevedello LM, Naseri M, Mathur S, Moreland R, Wilson J, Witiw C, Yeom KW, Ha Q, Hanley D, Seferbekov S, Chen H, Singer P, Henkel C, Pfeiffer P, Pan I, Sheoran H, Li W, Flanders AE, Kitamura FC, Richards T, Talbott J, Sejdić E, Colak E. Assessing the Performance of Models from the 2022 RSNA Cervical Spine Fracture Detection Competition at a Level I Trauma Center. Radiol Artif Intell 2024; 6:e230550. PMID: 39298563. PMCID: PMC11605142. DOI: 10.1148/ryai.230550.
Abstract
Purpose To evaluate the performance of the top models from the RSNA 2022 Cervical Spine Fracture Detection challenge on a clinical test dataset of both noncontrast and contrast-enhanced CT scans acquired at a level I trauma center. Materials and Methods Seven top-performing models in the RSNA 2022 Cervical Spine Fracture Detection challenge were retrospectively evaluated on a clinical test set of 1828 CT scans (from 1829 series: 130 positive for fracture, 1699 negative for fracture; 1308 noncontrast, 521 contrast enhanced) from 1779 patients (mean age, 55.8 years ± 22.1 [SD]; 1154 [64.9%] male patients). Scans were acquired without exclusion criteria over 1 year (January-December 2022) from the emergency department of a neurosurgical and level I trauma center. Model performance was assessed using area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. False-positive and false-negative cases were further analyzed by a neuroradiologist. Results Although all seven models showed decreased performance on the clinical test set compared with the challenge dataset, the models maintained high performances. On noncontrast CT scans, the models achieved a mean AUC of 0.89 (range: 0.79-0.92), sensitivity of 67.0% (range: 30.9%-80.0%), and specificity of 92.9% (range: 82.1%-99.0%). On contrast-enhanced CT scans, the models had a mean AUC of 0.88 (range: 0.76-0.94), sensitivity of 81.9% (range: 42.7%-100.0%), and specificity of 72.1% (range: 16.4%-92.8%). The models identified 10 fractures missed by radiologists. False-positive cases were more common in contrast-enhanced scans and observed in patients with degenerative changes on noncontrast scans, while false-negative cases were often associated with degenerative changes and osteopenia. 
Conclusion The winning models from the 2022 RSNA AI Challenge demonstrated a high performance for cervical spine fracture detection on a clinical test dataset, warranting further evaluation for their use as clinical support tools. Keywords: Feature Detection, Supervised Learning, Convolutional Neural Network (CNN), Genetic Algorithms, CT, Spine, Technology Assessment, Head/Neck Supplemental material is available for this article. © RSNA, 2024 See also commentary by Levi and Politi in this issue.
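Sensitivity and specificity, the operating-point metrics reported above, are read off the confusion matrix once a decision threshold is fixed. A minimal sketch with invented labels and predictions (not the study's data):

```python
# Confusion-matrix metrics at a fixed decision threshold. Toy data only.

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # 1 = fracture per ground truth
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]  # model calls at one threshold
sens, spec = sensitivity_specificity(y_true, y_pred)  # 0.75 and 5/6
```

Because both metrics depend on the chosen threshold, the wide per-model ranges quoted above (e.g., sensitivity 30.9%-100.0%) can reflect differences in operating point as much as differences in discrimination.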
Affiliation(s)
- Robyn L. Ball
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Stanford School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Hui Ming Lin
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Luciano M. Prevedello
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Mitra Naseri
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Shobhit Mathur
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Robert Moreland
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Jefferson Wilson
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Christopher Witiw
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Kristen W. Yeom
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Qishen Ha
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Darragh Hanley
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Selim Seferbekov
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Hao Chen
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Philipp Singer
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Christof Henkel
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Pascal Pfeiffer
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Ian Pan
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Harshit Sheoran
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
| | - Wuqi Li
- From the Edward S. Rogers Department of Electrical and Computer Engineering (Z.H., W.L., E.S.), Department of Medical Imaging, Faculty of Medicine (M.P., S.M., R.M., E.C.), Faculty of Medicine (M.N., J.W., C.W.), and Division of Neurosurgery, Department of Surgery (J.W., C.W.), University of Toronto, 40 St George St, Toronto, ON, Canada M5S 3G4; Department of Medical Imaging (H.M.L., M.N., S.M., R.M., E.C.) and Li Ka Shing Knowledge Institute (S.M., J.W., C.W., E.C.), St Michael’s Hospital, Unity Health Toronto, Toronto, Canada; The Jackson Laboratory, Bar Harbor, Maine (R.L.B.); Standard School of Medicine, Stanford University, Stanford, Calif (K.W.Y.); H2O.ai, Mountain View, Calif (Q.H., P.S., P.P.); School of Computer Science, University of Birmingham, Birmingham, UK (H.C.); DoubleYard, Edulab Group, Boston, Ireland (D.H.); Mapbox, London, UK (S.S.); NVIDIA, Santa Clara, Calif (C.H.); Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Mass (I.P.); University of London, Goldsmiths, London, UK (H.S.); Department of Radiology, The Ohio State University, Columbus, Ohio (L.M.P.); Department of Radiology, Division of Neuroradiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Universidade Federal de São Paulo (Unifesp), São Paulo, Brazil (F.C.K.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.); Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah (T.R.); and North York General Hospital, Toronto, Canada (E.S.)
- Adam E. Flanders
- Felipe C. Kitamura
- Tyler Richards
- Jason Talbott
11
Yıldız Potter İ, Rodriguez EK, Wu J, Nazarian A, Vaziri A. An Automated Vertebrae Localization, Segmentation, and Osteoporotic Compression Fracture Detection Pipeline for Computed Tomographic Imaging. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:2428-2443. [PMID: 38717516 PMCID: PMC11522205 DOI: 10.1007/s10278-024-01135-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Revised: 04/30/2024] [Accepted: 05/01/2024] [Indexed: 06/29/2024]
Abstract
Osteoporosis is the most common chronic metabolic bone disease worldwide. Vertebral compression fracture (VCF) is the most common type of osteoporotic fracture. Approximately 700,000 osteoporotic VCFs are diagnosed annually in the USA alone, resulting in an annual economic burden of ~$13.8B. With an aging population, the rate of osteoporotic VCFs and their associated burdens are expected to rise. Those burdens include pain, functional impairment, and increased medical expenditure. It is therefore important to develop analytical tools that aid in the identification of VCFs. Computed Tomography (CT) imaging is commonly used to detect occult injuries. Unlike existing CT-based VCF detection approaches, the standard clinical criteria for determining VCF rely on the shape of the vertebrae, such as loss of vertebral body height. To bridge this gap, we developed a novel automated vertebrae localization, segmentation, and osteoporotic VCF detection pipeline for CT scans using state-of-the-art deep learning models. To do so, we employed a publicly available dataset of spine CT scans with 325 scans annotated for segmentation, 126 of which were also graded for VCF (81 with VCFs and 45 without VCFs). Our approach attained 96% sensitivity and 81% specificity in detecting VCF at the vertebral level, and 100% accuracy at the subject level, outperforming deep learning counterparts tested for VCF detection without segmentation. Crucially, we showed that adding predicted vertebrae segments as inputs significantly improved VCF detection at both the vertebral and subject levels, by up to 14% in sensitivity and 20% in specificity (p-value = 0.028).
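For reference, vertebral-level figures like those reported above (96% sensitivity, 81% specificity) follow the standard confusion-matrix definitions. A minimal sketch, with illustrative counts rather than the study's data:

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # fraction of true VCFs detected
    specificity = tn / (tn + fp)                # fraction of intact vertebrae cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall agreement with ground truth
    return sensitivity, specificity, accuracy

# Illustrative counts only: 48/50 fractured and 81/100 intact vertebrae called correctly.
sens, spec, acc = screening_metrics(tp=48, fp=19, tn=81, fn=2)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.96 0.81 0.86
```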
Affiliation(s)
- Edward K Rodriguez
- Carl J. Shapiro Department of Orthopedic Surgery, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, RN123, Boston, MA, 02215, USA
- Jim Wu
- Department of Radiology, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Shapiro 4, Boston, MA, 02215, USA
- Ara Nazarian
- Carl J. Shapiro Department of Orthopedic Surgery, Beth Israel Deaconess Medical Center (BIDMC), Harvard Medical School, 330 Brookline Avenue, Stoneman 10, Boston, MA, 02215, USA
- Musculoskeletal Translational Innovation Initiative, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, RN123, Boston, MA, 02215, USA
- Department of Orthopaedic Surgery, Yerevan State University, Yerevan, Armenia
- Ashkan Vaziri
- BioSensics, LLC, 57 Chapel Street, Newton, MA, 02458, USA
12
Xie H, Gu C, Zhang W, Zhu J, He J, Huang Z, Zhu J, Xu Z. A few-shot learning framework for the diagnosis of osteopenia and osteoporosis using knee X-ray images. J Int Med Res 2024; 52:3000605241274576. [PMID: 39225007 PMCID: PMC11375658 DOI: 10.1177/03000605241274576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/04/2024] Open
Abstract
OBJECTIVE We developed a few-shot learning (FSL) framework for the diagnosis of osteopenia and osteoporosis in knee X-ray images. METHODS Computer vision models containing deep convolutional neural networks were fine-tuned to enable generalization from natural images (ImageNet) to chest X-ray images (normal vs. pneumonia, base images). Then, a series of automated machine learning classifiers based on the Euclidean distances of base images were developed to make predictions for novel images (normal vs. osteopenia vs. osteoporosis). The performance of the FSL framework was compared with that of junior and senior radiologists. In addition, the gradient-weighted class activation mapping algorithm was used for visual interpretation. RESULTS In Cohort #1, the mean accuracy (0.728) and sensitivity (0.774) of the FSL models were higher than those of the radiologists (0.512 and 0.448). A diagnostic pipeline in which the FSL model read first and radiologists read second achieved better performance (0.653 accuracy, 0.582 sensitivity, and 0.816 specificity) than radiologists alone. In Cohort #2, the diagnostic pipeline also showed improved performance. CONCLUSIONS The FSL framework yielded practical performance for the diagnosis of osteopenia and osteoporosis in comparison with radiologists. This retrospective study supports the use of promising FSL methods in computer-aided diagnosis tasks involving limited samples.
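The Euclidean-distance classification step described above can be illustrated with a nearest-centroid rule over feature embeddings. This is a generic sketch of that idea only: the embeddings and class names below are hypothetical, and the study's fine-tuned CNN feature extractor and AutoML classifiers are not reproduced:

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """Few-shot classification: assign each query embedding to the class
    whose support-set centroid is closest in Euclidean distance."""
    classes = sorted(set(support_y))
    centroids = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Pairwise distances: queries (n, 1, d) vs. centroids (1, k, d)
    dists = np.linalg.norm(query_x[:, None, :] - centroids[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

# Hypothetical 2-D embeddings for two classes
support_x = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
support_y = np.array(["normal", "osteoporosis", ])  # placeholder, replaced below
support_y = np.array(["normal", "normal", "osteoporosis", "osteoporosis"])
query_x = np.array([[1.0, 1.0], [9.0, 9.0]])
print(nearest_centroid_predict(support_x, support_y, query_x))  # ['normal', 'osteoporosis']
```

In a few-shot setting the centroids play the role of class prototypes, so no classifier weights need to be trained on the novel classes.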
Affiliation(s)
- Hua Xie
- Department of Orthopedics, Jintan Hospital Affiliated to Jiangsu University, Changzhou, China
- Chenqi Gu
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Wenchao Zhang
- Department of Orthopedics, Jintan Hospital Affiliated to Jiangsu University, Changzhou, China
- Jiacheng Zhu
- Department of Orthopedics, Jintan Hospital Affiliated to Jiangsu University, Changzhou, China
- Jin He
- Department of Orthopedics, Jintan Hospital Affiliated to Jiangsu University, Changzhou, China
- Zhou Huang
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jinzhou Zhu
- Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhonghua Xu
- Department of Orthopedics, Jintan Hospital Affiliated to Jiangsu University, Changzhou, China
13
Liawrungrueang W, Cho ST, Kotheeranurak V, Jitpakdee K, Kim P, Sarasombath P. Osteoporotic vertebral compression fracture (OVCF) detection using artificial neural networks model based on the AO spine-DGOU osteoporotic fracture classification system. NORTH AMERICAN SPINE SOCIETY JOURNAL 2024; 19:100515. [PMID: 39188670 PMCID: PMC11345903 DOI: 10.1016/j.xnsj.2024.100515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 06/26/2024] [Accepted: 06/27/2024] [Indexed: 08/28/2024]
Abstract
Background Osteoporotic vertebral compression fracture (OVCF) substantially reduces a person's health-related quality of life. Computed Tomography (CT) is currently the standard for diagnosis of OVCF. The aim of this paper was to evaluate the OVCF detection potential of artificial neural networks (ANN). Methods Artificial intelligence models based on deep learning hold promise for quickly and automatically identifying and visualizing OVCF. This study investigated the detection, classification, and grading of OVCF using deep artificial neural networks (ANN). Techniques: Annotation techniques were used to split the sagittal images of 1,050 OVCF CT scans from patients with symptomatic low back pain into 934 CT images for a training dataset (89%) and 116 CT images for a test dataset (11%). A radiologist tagged, cleaned, and annotated the training dataset. All lumbar vertebrae were assessed using the AO Spine-DGOU Osteoporotic Fracture Classification System. The deep learning ANN model was trained to detect and grade OVCF, and the training outcomes were confirmed by testing the automated model on the graded dataset. Results The sagittal lumbar CT training dataset included 5,010 OVCF of grade OF1, 1,942 of OF2, 522 of OF3, 336 of OF4, and none of OF5. With an overall accuracy of 96.04%, the deep ANN model was able to identify and categorize lumbar OVCF. Conclusions The ANN model offers a rapid and effective way to classify lumbar OVCF by automatically and consistently evaluating routine CT scans using the AO Spine-DGOU osteoporotic fracture classification system.
Affiliation(s)
- Sung Tan Cho
- Department of Orthopaedic Surgery, Seoul Seonam Hospital, South Korea
- Vit Kotheeranurak
- Department of Orthopaedics, Faculty of Medicine, Chulalongkorn University, and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Center of Excellence in Biomechanics and Innovative Spine Surgery, Chulalongkorn University, Bangkok, Thailand
- Khanathip Jitpakdee
- Department of Orthopedics, Queen Savang Vadhana Memorial Hospital, Sriracha, Chonburi, Thailand
- Pyeoungkee Kim
- Department of Computer Engineering, Silla University, Busan, South Korea
- Peem Sarasombath
- Department of Orthopaedics, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
14
Kutbi M. Artificial Intelligence-Based Applications for Bone Fracture Detection Using Medical Images: A Systematic Review. Diagnostics (Basel) 2024; 14:1879. [PMID: 39272664 PMCID: PMC11394268 DOI: 10.3390/diagnostics14171879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 08/19/2024] [Accepted: 08/26/2024] [Indexed: 09/15/2024] Open
Abstract
Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.
Affiliation(s)
- Mohammed Kutbi
- College of Computing and Informatics, Saudi Electronic University, Riyadh 13316, Saudi Arabia
15
Zhu X, Liu D, Liu L, Guo J, Li Z, Zhao Y, Wu T, Liu K, Liu X, Pan X, Qi L, Zhang Y, Cheng L, Chen B. Fully Automatic Deep Learning Model for Spine Refracture in Patients with OVCF: A Multi-Center Study. Orthop Surg 2024; 16:2052-2065. [PMID: 38952050 PMCID: PMC11293932 DOI: 10.1111/os.14155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2024] [Revised: 06/06/2024] [Accepted: 06/09/2024] [Indexed: 07/03/2024] Open
Abstract
BACKGROUND Research on artificial intelligence (AI) models for predicting spinal refracture has been limited to bone mineral density, X-ray, and some conventional laboratory indicators, which has its own limitations. Moreover, it lacks specific indicators related to osteoporosis and imaging factors that can better reflect bone quality, such as computed tomography (CT). OBJECTIVE To construct a novel prediction model based on bone turnover markers and CT to identify patients more inclined to suffer spine refracture. METHODS CT images and clinical information of 383 patients (training set = 240 cases of osteoporotic vertebral compression fractures (OVCF), validation set = 63, test set = 80) were retrospectively collected from January 2015 to October 2022 at three medical centers. The U-net model was adopted to automatically segment the ROI. Three-dimensional (3D) cropping of all spine regions was used to obtain the final ROI regions, including 3D_Full and 3D_RoiOnly. We used the Densenet 121-3D model to model the cropped region and simultaneously built a T-NIPT prediction model. Diagnostic performance of the deep learning models was assessed by constructing ROC curves. We generated calibration curves to assess calibration performance. Additionally, decision curve analysis (DCA) was used to assess the clinical utility of the predictive models. RESULTS The performance of the test model is comparable to its performance on the training set (dice coefficients of 0.798, an mIOU of 0.755, an SA of 0.767, and an OS of 0.017). Univariable and multivariable analyses indicate that T_P1NT was an independent risk factor for refracture. The performance of predicting refractures in different ROI regions showed that the 3D_Full model exhibits the highest calibration performance, with a Hosmer-Lemeshow goodness-of-fit (HL) test statistic exceeding 0.05.
The analysis of the training and test sets showed that the 3D_Full model, which integrates clinical and deep learning results, demonstrated superior performance, with significant improvement (p-value < 0.05) compared with using clinical features independently or using only 3D_RoiOnly. CONCLUSION T_P1NT was an independent risk factor for refracture. Our 3D_Full model performed better than other models and junior doctors in identifying patients at high risk of spine refracture. This model is applicable to real-world translation owing to its automatic segmentation and detection.
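The Dice coefficient and mIOU quoted above are standard overlap metrics for segmentation masks. A minimal sketch on toy binary masks (not the study's data):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Overlap metrics between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # 2|A∩B| / (|A| + |B|)
    iou = inter / union                               # |A∩B| / |A∪B|
    return float(dice), float(iou)

# Toy 1-D masks: one overlapping voxel, three voxels in the union
print(dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0]))  # Dice 0.5, IoU 1/3
```

Dice weighs the overlap against the average mask size, IoU against the union, which is why Dice is always at least as large as IoU for the same pair of masks.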
Affiliation(s)
- Xuetao Zhu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Dejian Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lian Liu
- Department of Emergency Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Jingxuan Guo
- Department of Anesthesiology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Zedi Li
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Yixiang Zhao
- Department of Orthopaedic Surgery, Yantaishan Hospital, Yantai, China
- Tianhao Wu
- Department of Hepatopancreatobiliary Surgery, Graduate School of Dalian Medical University, Dalian, China
- Kaiwen Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Xinyu Liu
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Xin Pan
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lei Qi
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Yuanqiang Zhang
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Lei Cheng
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
- Bin Chen
- Department of Orthopaedic Surgery, Qilu Hospital of Shandong University, Cheeloo College of Medicine of Shandong University, Jinan, P. R. China
16
Choi E, Park D, Son G, Bak S, Eo T, Youn D, Hwang D. Weakly supervised deep learning for diagnosis of multiple vertebral compression fractures in CT. Eur Radiol 2024; 34:3750-3760. [PMID: 37973631 DOI: 10.1007/s00330-023-10394-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 08/08/2023] [Accepted: 09/11/2023] [Indexed: 11/19/2023]
Abstract
OBJECTIVE This study aims to develop a weakly supervised deep learning (DL) model for vertebral-level vertebral compression fracture (VCF) classification using image-level labelled data. METHODS The training set included 815 patients with normal findings (n = 507, 62%) or VCFs (n = 308, 38%). Our proposed model was trained on image-level labelled data for vertebral-level classification. Another supervised DL model was trained with vertebral-level labelled data to compare the performance of the proposed model. RESULTS The test set included 227 patients with normal findings (n = 117, 52%) or VCFs (n = 110, 48%). For a fair comparison of the two models, we compared sensitivities at the same specificities for the proposed model and the vertebral-level supervised model. The specificity for overall L1-L5 performance was 0.981. The proposed model may outperform the vertebral-level supervised model, with sensitivities of 0.770 vs 0.705 (p = 0.080). For vertebral-level analysis, the specificities for each of L1-L5 were 0.974, 0.973, 0.970, 0.991, and 0.995, respectively. The proposed model yielded the same or better sensitivity than the vertebral-level supervised model in L1 (0.750 vs 0.694, p = 0.480), L3 (0.793 vs 0.586, p < 0.05), L4 (0.833 vs 0.667, p = 0.480), and L5 (0.600 vs 0.600, p = 1.000). The proposed model showed lower sensitivity than the vertebral-level supervised model for L2, but the difference was not significant (0.775 vs 0.825, p = 0.617). CONCLUSIONS The proposed model may have comparable or better performance than the supervised model in vertebral-level VCF classification. CLINICAL RELEVANCE STATEMENT Vertebral-level vertebral compression fracture classification aids in devising patient-specific treatment plans by identifying the precise vertebrae affected by compression fractures.
KEY POINTS • Our proposed weakly supervised method may have comparable or better performance than the supervised method for vertebral-level vertebral compression fracture classification. • The weakly supervised model could have classified cases with multiple vertebral compression fractures at the vertebral-level, even if the model was trained with image-level labels. • Our proposed method could help reduce radiologists' labour because it enables vertebral-level classification from image-level labels.
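The matched-specificity comparison used above (comparing sensitivities at the same specificity) can be sketched generically: choose the score threshold that achieves the target specificity on negative cases, then read off sensitivity on positives. The scores below are hypothetical, not the study's:

```python
import numpy as np

def sensitivity_at_specificity(pos_scores, neg_scores, target_specificity):
    """Choose the threshold reaching `target_specificity` on negatives,
    then report (sensitivity, achieved specificity, threshold)."""
    neg = np.asarray(neg_scores, dtype=float)
    pos = np.asarray(pos_scores, dtype=float)
    thr = np.quantile(neg, target_specificity)  # spec = P(neg score <= thr)
    sensitivity = float(np.mean(pos > thr))
    specificity = float(np.mean(neg <= thr))
    return sensitivity, specificity, thr

# Hypothetical model scores: negatives spread over [0, 0.99], a few positives
sens, spec, thr = sensitivity_at_specificity(
    pos_scores=[0.95, 0.97, 0.50],
    neg_scores=np.linspace(0.0, 0.99, 100),
    target_specificity=0.90,
)
```

Two models' sensitivities become directly comparable once each is evaluated at its own threshold chosen this way, which is the fairness argument the abstract relies on.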
Affiliation(s)
- Euijoon Choi
- Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Doohyun Park
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Geonhui Son
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Taejoon Eo
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Daemyung Youn
- School of Management of Technology, Yonsei University, Seoul, Republic of Korea
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-Ro 14-Gil, Seongbuk-Gu, Seoul, 02792, Republic of Korea.
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea.
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Republic of Korea.
17
Nadeem SA, Comellas AP, Regan EA, Hoffman EA, Saha PK. Chest CT-based automated vertebral fracture assessment using artificial intelligence and morphologic features. Med Phys 2024; 51:4201-4218. [PMID: 38721977 PMCID: PMC11661457 DOI: 10.1002/mp.17072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2023] [Revised: 04/02/2024] [Accepted: 04/02/2024] [Indexed: 06/05/2024] Open
Abstract
BACKGROUND Spinal degeneration and vertebral compression fractures are common among the elderly and adversely affect mobility, quality of life, lung function, and mortality. Assessment of vertebral fractures in chronic obstructive pulmonary disease (COPD) is important due to the high prevalence of osteoporosis and associated vertebral fractures in COPD. PURPOSE We present new automated methods for (1) segmentation and labelling of individual vertebrae in chest computed tomography (CT) images using deep learning (DL), a multi-parametric freeze-and-grow (FG) algorithm, and separation of apparently fused vertebrae using intensity autocorrelation and (2) vertebral deformity fracture detection using computed vertebral height features and parametric computational modelling of an established protocol outlined for trained human experts. METHODS A chest CT-based automated method was developed for quantitative deformity fracture assessment following the protocol by Genant et al. The computational method was accomplished in the following steps: (1) computation of a voxel-level vertebral body likelihood map from chest CT using a trained DL network; (2) delineation and labelling of individual vertebrae on the likelihood map using an iterative multi-parametric FG algorithm; (3) separation of apparently fused vertebrae in CT using intensity autocorrelation; (4) computation of vertebral heights using contour analysis on the central anterior-posterior (AP) plane of a vertebral body; (5) assessment of vertebral fracture status using ratio functions of vertebral heights and optimized thresholds. The method was applied to inspiratory or total lung capacity (TLC) chest scans from the multi-site Genetic Epidemiology of COPD (COPDGene) (ClinicalTrials.gov: NCT00608764) study, and the performance was examined (n = 3231).
One hundred and twenty scans randomly selected from this dataset were partitioned into training (n = 80) and validation (n = 40) datasets for the DL-based vertebral body classifier. Also, generalizability of the method to low dose CT imaging (n = 236) was evaluated. RESULTS The vertebral segmentation module achieved a Dice score of .984 as compared to manual outlining results as reference (n = 100); the segmentation performance was consistent across images with the minimum and maximum of Dice scores among images being .980 and .989, respectively. The vertebral labelling module achieved 100% accuracy (n = 100). For low dose CT, the segmentation module produced image-level minimum and maximum Dice scores of .995 and .999, respectively, as compared to standard dose CT as the reference; vertebral labelling at low dose CT was fully consistent with standard dose CT (n = 236). The fracture assessment method achieved overall accuracy, sensitivity, and specificity of 98.3%, 94.8%, and 98.5%, respectively, for 40,050 vertebrae from 3231 COPDGene participants. For generalizability experiments, fracture assessment from low dose CT was consistent with the reference standard dose CT results across all participants. CONCLUSIONS Our CT-based automated method for vertebral fracture assessment is accurate, and it offers a feasible alternative to manual expert reading, especially for large population-based studies, where automation is important for high efficiency. Generalizability of the method to low dose CT imaging further extends the scope of application of the method, particularly since the usage of low dose CT imaging in large population-based studies has increased to reduce cumulative radiation exposure.
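The height-ratio step of the Genant protocol referenced above reduces, per vertebra, to grading fractional height loss against fixed thresholds (mild 20-25%, moderate 25-40%, severe >40%). A sketch with hypothetical heights; the contour analysis that actually measures them is not shown:

```python
def genant_grade(measured_height, expected_height):
    """Semiquantitative vertebral deformity grade from fractional height loss
    (thresholds per Genant et al.'s visual grading scheme)."""
    loss = 1.0 - measured_height / expected_height
    if loss < 0.20:
        return 0  # normal / no deformity fracture
    if loss < 0.25:
        return 1  # mild
    if loss < 0.40:
        return 2  # moderate
    return 3      # severe

# Hypothetical anterior heights (mm) against an expected height of 25 mm
print([genant_grade(h, 25.0) for h in (24.0, 19.5, 17.0, 12.0)])  # [0, 1, 2, 3]
```

In an automated pipeline like the one above, the expected height would itself be estimated (e.g., from adjacent vertebrae or posterior height), and the thresholds optimized rather than fixed.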
Affiliation(s)
- Syed Ahmed Nadeem
- Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Alejandro P Comellas
- Department of Internal Medicine, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Elizabeth A Regan
- Department of Epidemiology, Colorado School of Public Health, University of Colorado, Aurora, Colorado, USA
- Division of Rheumatology, National Jewish Health, Denver, Colorado, USA
- Eric A Hoffman
- Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Department of Internal Medicine, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Department of Biomedical Engineering, College of Engineering, The University of Iowa, Iowa City, Iowa, USA
- Punam K Saha
- Department of Radiology, Carver College of Medicine, The University of Iowa, Iowa City, Iowa, USA
- Department of Electrical and Computer Engineering, College of Engineering, The University of Iowa, Iowa City, Iowa, USA
18
Kim YR, Yoon YS, Cha JG. Opportunistic Screening for Acute Vertebral Fractures on a Routine Abdominal or Chest Computed Tomography Scans Using an Automated Deep Learning Model. Diagnostics (Basel) 2024; 14:781. [PMID: 38611694 PMCID: PMC11011775 DOI: 10.3390/diagnostics14070781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2024] [Revised: 03/31/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024] Open
Abstract
OBJECTIVES To develop an opportunistic screening model based on a deep learning algorithm to detect recent vertebral fractures on abdominal or chest CTs. MATERIALS AND METHODS A total of 1309 coronal reformatted images from torso CTs (504 with a recent fracture, from 119 patients; 805 without fracture, from 115 patients), performed from September 2018 to April 2022 on patients who also had a spine MRI within two months, were included. Two readers participated in image selection and manually labeled the fractured segment on each selected image with Neuro-T software (version 2.3.3; Neurocle Inc.). We split the images randomly into the training and internal test set (labeled:unlabeled = 480:700) and the secondary internal validation set (24:105). For the observer study, three radiologists reviewed the CT images in the external test set with and without deep learning assistance and independently scored the likelihood of an acute fracture in each image. RESULTS For the training and internal test sets, the AI achieved a 99.86% test accuracy, 91.22% precision, and 89.18% F1 score for detection of recent fracture. In the secondary internal validation set, it achieved 99.90%, 74.93%, and 78.30%, respectively. In the observer study, deep learning assistance significantly improved the radiology resident's accuracy, from 92.79% to 98.2% (p = 0.04). CONCLUSION The model showed a high level of accuracy in both the test set and the internal validation set. If applied opportunistically to daily torso CT evaluation, this algorithm could aid the early detection of fractures that require treatment.
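The accuracy, precision, and F1 figures above all derive from the same confusion-matrix counts. A minimal sketch with invented counts (not the study's data):

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard detection metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts only -- not taken from the study
m = classification_metrics(tp=90, fp=10, tn=880, fn=20)
print(m)
```

Note how a heavily imbalanced set (many true negatives) can yield near-perfect accuracy while precision and F1 stay much lower, which is consistent with the pattern of numbers reported in the validation set.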
Affiliation(s)
- Ye Rin Kim: Department of Radiology, College of Medicine, Soonchunhyang University Bucheon Hospital, Soonchunhyang University, Bucheon 14584, Republic of Korea
- Yu Sung Yoon: Department of Radiology, School of Medicine, Kyungpook National University Hospital, Kyungpook National University, Daegu 41944, Republic of Korea
- Jang Gyu Cha: Department of Radiology, College of Medicine, Soonchunhyang University Bucheon Hospital, Soonchunhyang University, Bucheon 14584, Republic of Korea

19
He Y, Lin J, Zhu S, Zhu J, Xu Z. Deep learning in the radiologic diagnosis of osteoporosis: a literature review. J Int Med Res 2024; 52:3000605241244754. [PMID: 38656208 PMCID: PMC11044779 DOI: 10.1177/03000605241244754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Accepted: 02/26/2024] [Indexed: 04/26/2024] Open
Abstract
OBJECTIVE Osteoporosis is a systemic bone disease characterized by low bone mass, damaged bone microstructure, increased bone fragility, and susceptibility to fractures. With the rapid development of artificial intelligence, a series of studies have reported deep learning applications in the screening and diagnosis of osteoporosis. The aim of this review was to summarize the applications of deep learning methods in the radiologic diagnosis of osteoporosis. METHODS We conducted a two-step literature search using the PubMed and Web of Science databases. In this review, we focused on routine radiologic methods, such as X-ray, computed tomography, and magnetic resonance imaging, used to opportunistically screen for osteoporosis. RESULTS A total of 40 studies were included in this review. These studies were divided into three categories: osteoporosis screening (n = 20), bone mineral density prediction (n = 13), and osteoporotic fracture risk prediction and detection (n = 7). CONCLUSIONS Deep learning has demonstrated a remarkable capacity for osteoporosis screening. However, clinical commercialization of a diagnostic model for osteoporosis remains a challenge.
Affiliation(s)
- Yu He: Suzhou Medical College, Soochow University, Suzhou, Jiangsu, China
- Jiaxi Lin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Shiqi Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Zhonghua Xu: Department of Orthopedics, Jintan Affiliated Hospital to Jiangsu University, Changzhou, China

20
Bhatnagar A, Kekatpure AL, Velagala VR, Kekatpure A. A Review on the Use of Artificial Intelligence in Fracture Detection. Cureus 2024; 16:e58364. [PMID: 38756254 PMCID: PMC11097122 DOI: 10.7759/cureus.58364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 04/16/2024] [Indexed: 05/18/2024] Open
Abstract
Artificial intelligence (AI) simulates intelligent behavior using computers with minimum human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual operations, enabling computers to convey and comprehend complicated input more accurately. Worldwide, fractures affect people of all ages and in all regions of the planet. One of the most prevalent causes of inaccurate diagnosis and medical lawsuits is overlooked fractures on radiographs taken in the emergency room, with reported miss rates ranging from 2% to 9%. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection across multiple imaging modalities. A shortage of radiologists, driven by delays in hiring and a significant percentage of radiologists nearing retirement, worsens this rise in demand. Additionally, the process of interpreting diagnostic images can sometimes be challenging and tedious. Integrating orthopedic radio-diagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In the field of orthopedic trauma, CNNs have been documented to operate at the proficiency of expert orthopedic surgeons and radiologists in the identification and categorization of fractures. CNNs can analyze vast amounts of data at a rate that surpasses that of human observation. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radio-diagnostics.
Affiliation(s)
- Aayushi Bhatnagar: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aditya L Kekatpure: Orthopedic Surgery, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala: Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aashay Kekatpure: Orthopedic Surgery, Narendra Kumar Prasadrao Salve Institute of Medical Sciences and Research, Nagpur, IND

21
Tieu A, Kroen E, Kadish Y, Liu Z, Patel N, Zhou A, Yilmaz A, Lee S, Deyer T. The Role of Artificial Intelligence in the Identification and Evaluation of Bone Fractures. Bioengineering (Basel) 2024; 11:338. [PMID: 38671760 PMCID: PMC11047896 DOI: 10.3390/bioengineering11040338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Revised: 03/23/2024] [Accepted: 03/26/2024] [Indexed: 04/28/2024] Open
Abstract
Artificial intelligence (AI), particularly deep learning, has made enormous strides in medical imaging analysis. In the field of musculoskeletal radiology, deep-learning models are actively being developed for the identification and evaluation of bone fractures. These methods provide numerous benefits to radiologists such as increased diagnostic accuracy and efficiency while also achieving standalone performances comparable or superior to clinician readers. Various algorithms are already commercially available for integration into clinical workflows, with the potential to improve healthcare delivery and shape the future practice of radiology. In this systematic review, we explore the performance of current AI methods in the identification and evaluation of fractures, particularly those in the ankle, wrist, hip, and ribs. We also discuss current commercially available products for fracture detection and provide an overview of the current limitations of this technology and future directions of the field.
Affiliation(s)
- Andrew Tieu: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Ezriel Kroen: New York Medical College, Valhalla, NY 10595, USA
- Zelong Liu: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Nikhil Patel: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Alexander Zhou: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Timothy Deyer: East River Medical Imaging, New York, NY 10021, USA; Department of Radiology, Cornell Medicine, New York, NY 10021, USA

22
Foreman SC, Schinz D, El Husseini M, Goller SS, Weißinger J, Dietrich AS, Renz M, Metz MC, Feuerriegel GC, Wiestler B, Stahl R, Schwaiger BJ, Makowski MR, Kirschke JS, Gersing AS. Deep Learning to Differentiate Benign and Malignant Vertebral Fractures at Multidetector CT. Radiology 2024; 310:e231429. [PMID: 38530172 DOI: 10.1148/radiol.231429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/27/2024]
Abstract
Background Differentiating between benign and malignant vertebral fractures poses diagnostic challenges. Purpose To investigate the reliability of CT-based deep learning models to differentiate between benign and malignant vertebral fractures. Materials and Methods CT scans acquired in patients with benign or malignant vertebral fractures from June 2005 to December 2022 at two university hospitals were retrospectively identified based on a composite reference standard that included histopathologic and radiologic information. An internal test set was randomly selected, and an external test set was obtained from an additional hospital. Models used a three-dimensional U-Net encoder-classifier architecture and applied data augmentation during training. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) and compared with that of two residents and one fellowship-trained radiologist using the DeLong test. Results The training set included 381 patients (mean age, 69.9 years ± 11.4 [SD]; 193 male) with 1307 vertebrae (378 benign fractures, 447 malignant fractures, 482 malignant lesions). Internal and external test sets included 86 (mean age, 66.9 years ± 12; 45 male) and 65 (mean age, 68.8 years ± 12.5; 39 female) patients, respectively. The better-performing model of two training approaches achieved AUCs of 0.85 (95% CI: 0.77, 0.92) in the internal and 0.75 (95% CI: 0.64, 0.85) in the external test sets. Including an uncertainty category further improved performance to AUCs of 0.91 (95% CI: 0.83, 0.97) in the internal test set and 0.76 (95% CI: 0.64, 0.88) in the external test set. The AUC values of residents were lower than that of the best-performing model in the internal test set (AUC, 0.69 [95% CI: 0.59, 0.78] and 0.71 [95% CI: 0.61, 0.80]) and external test set (AUC, 0.70 [95% CI: 0.58, 0.80] and 0.71 [95% CI: 0.60, 0.82]), with significant differences only for the internal test set (P < .001). 
The AUCs of the fellowship-trained radiologist were similar to those of the best-performing model (internal test set, 0.86 [95% CI: 0.78, 0.93; P = .39]; external test set, 0.71 [95% CI: 0.60, 0.82; P = .46]). Conclusion Developed models showed a high discriminatory power to differentiate between benign and malignant vertebral fractures, surpassing or matching the performance of radiology residents and matching that of a fellowship-trained radiologist. © RSNA, 2024 See also the editorial by Booz and D'Angelo in this issue.
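The AUC values compared throughout this abstract have a useful probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (ties counted as half). A minimal sketch of that computation, with invented scores rather than the study's data:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic:
    P(score_pos > score_neg), counting ties as 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for 3 positive and 3 negative cases
pos = [0.9, 0.8, 0.6]
neg = [0.7, 0.4, 0.3]
print(roc_auc(pos, neg))  # 8/9 ≈ 0.889
```

The DeLong test used in the study compares two such AUCs on paired data; its variance estimate is more involved and is omitted here.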
Affiliation(s)
- Sarah C Foreman, David Schinz, Malek El Husseini, Sophia S Goller, Jürgen Weißinger, Anna-Sophia Dietrich, Martin Renz, Marie-Christin Metz, Georg C Feuerriegel, Benedikt Wiestler, Robert Stahl, Benedikt J Schwaiger, Marcus R Makowski, Jan S Kirschke, Alexandra S Gersing
- From the Departments of Radiology (S.C.F., A.S.D., G.C.F., M.R.M.) and Neuroradiology (D.S., M.E.H., M.R., M.C.M., B.W., B.J.S., J.S.K.), Klinikum Rechts der Isar, Technische Universität München, Ismaninger Strasse 22, 81675 Munich, Germany; Departments of Radiology (S.S.G., J.W.) and Neuroradiology (R.S., A.S.G.), University Hospital Munich (LMU), Munich, Germany; and German Cancer Consortium (DKTK), Partner Site Munich, and German Cancer Research Center (DKFZ), Heidelberg, Germany (B.W.)
23
|
Wang YN, Liu G, Wang L, Chen C, Wang Z, Zhu S, Wan WT, Weng YZ, Lu WW, Li ZY, Wang Z, Ma XL, Yang Q. A Deep-Learning Model for Diagnosing Fresh Vertebral Fractures on Magnetic Resonance Images. World Neurosurg 2024; 183:e818-e824. [PMID: 38218442 DOI: 10.1016/j.wneu.2024.01.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2023] [Accepted: 01/07/2024] [Indexed: 01/15/2024]
Abstract
BACKGROUND The accurate diagnosis of fresh vertebral fractures (VFs) is critical to optimizing treatment outcomes. Existing studies, however, demonstrated insufficient accuracy, sensitivity, and specificity in detecting fresh fractures using magnetic resonance imaging (MRI), and fell short in localizing the fracture sites. METHODS This prospective study comprised 716 patients with fresh VFs. We obtained 849 short TI inversion recovery (STIR) image slices for training and validation of the AI model. The AI models employed to detect fresh VFs were YOLOv7 and ResNet-50. RESULTS The AI model demonstrated a diagnostic accuracy of 97.6% for fresh VFs, with a sensitivity of 98% and a specificity of 97%. The performance of the model displayed a high degree of consistency when compared to the evaluations by spine surgeons. In the external testing dataset, the model exhibited a classification accuracy of 92.4%, a sensitivity of 93%, and a specificity of 92%. CONCLUSIONS Our findings highlighted the potential of AI in diagnosing fresh VFs, offering an accurate and efficient way to aid physicians with diagnosis and treatment decisions.
Affiliation(s)
- Yan-Ni Wang: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Gang Liu: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Lei Wang: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences & Biomedical Engineering, Hebei University of Technology, Tianjin, China
- Chao Chen: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Zhi Wang: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Shan Zhu: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Wen-Tao Wan: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Yuan-Zhi Weng: Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China; Department of Orthopaedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China; Research Center for Human Tissue and Organs Degeneration, Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Weijia William Lu: Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China; Department of Orthopaedics and Traumatology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China; Research Center for Human Tissue and Organs Degeneration, Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Zhao-Yang Li: Tianjin Key Laboratory of Composite and Functional Materials, School of Materials Science and Engineering, Tianjin University, Tianjin, China
- Zheng Wang: Department of Orthopaedics, Chinese People's Liberation Army General Hospital, Beijing, China
- Xin-Long Ma: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China
- Qiang Yang: Department of Spine Surgery, Tianjin Hospital, Tianjin University, Tianjin, China

24
Bharadwaj UU, Chin CT, Majumdar S. Practical Applications of Artificial Intelligence in Spine Imaging: A Review. Radiol Clin North Am 2024; 62:355-370. [PMID: 38272627 DOI: 10.1016/j.rcl.2023.10.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2024]
Abstract
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, which is essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting diagnostic impact and challenges with future directions for integrating AI into spine imaging workflow.
Affiliation(s)
- Upasana Upadhyay Bharadwaj: Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
- Cynthia T Chin: Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Avenue, Box 0628, San Francisco, CA 94143, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA

25
Nguyen HG, Nguyen HT, Nguyen LT, Tran TS, Ho-Pham LT, Ling SH, Nguyen TV. Development of a shape-based algorithm for identification of asymptomatic vertebral compression fractures: A proof-of-principle study. Osteoporos Sarcopenia 2024; 10:22-27. [PMID: 38690543 PMCID: PMC11056464 DOI: 10.1016/j.afos.2024.01.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/26/2023] [Revised: 11/25/2023] [Accepted: 01/14/2024] [Indexed: 05/02/2024] Open
Abstract
Objectives Vertebral fracture is both common and serious among adults, yet it often goes undiagnosed. This study aimed to develop a shape-based algorithm (SBA) for the automatic identification of vertebral fractures. Methods The study included 144 participants (50 individuals with a fracture and 94 without a fracture) whose plain thoracolumbar spine X-rays were taken. Clinical diagnosis of vertebral fracture (grade 0 to 3) was made by rheumatologists using Genant's semiquantitative method. The SBA algorithm was developed to determine the ratio of vertebral body height loss. Based on the ratio, SBA classifies a vertebra into 4 classes (0 = normal, 1 = mild fracture, 2 = moderate fracture, 3 = severe fracture). The concordance between clinical diagnosis and SBA-based classification was assessed at both person and vertebra levels. Results At the person level, the SBA achieved a sensitivity of 100% and specificity of 62% (95% CI, 51%-72%). At the vertebra level, the SBA achieved a sensitivity of 84% (95% CI, 72%-93%), and a specificity of 88% (95% CI, 85%-90%). On average, the SBA took 0.3 s to assess each X-ray. Conclusions The SBA developed here is a fast and efficient tool that can be used to systematically screen for asymptomatic vertebral fractures and reduce the workload of healthcare professionals.
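Genant's semiquantitative method, which provided the clinical ground truth here, grades a vertebra by its degree of height loss. The sketch below maps a height-loss ratio to a grade using Genant's published cut-offs (mild ≈ 20-25%, moderate ≈ 25-40%, severe > 40%); the SBA's exact ratio definition is not given in the abstract, so this function is illustrative only:

```python
def genant_grade(height_loss_ratio: float) -> int:
    """Map a vertebral body height-loss ratio to a Genant grade:
    0 = normal, 1 = mild, 2 = moderate, 3 = severe.
    Thresholds follow Genant's semiquantitative cut-offs."""
    if height_loss_ratio < 0.20:
        return 0
    if height_loss_ratio < 0.25:
        return 1
    if height_loss_ratio < 0.40:
        return 2
    return 3

print([genant_grade(r) for r in (0.10, 0.22, 0.30, 0.50)])  # [0, 1, 2, 3]
```

In practice the ratio itself would be derived from anterior, middle, and posterior vertebral body heights measured on the radiograph.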
Affiliation(s)
- Huy G. Nguyen: School of Biomedical Engineering, University of Technology Sydney, Australia; Bone and Muscle Research Group, Ton Duc Thang University, Ho Chi Minh City, Viet Nam; Saigon Precision Medicine Research Center, Ho Chi Minh City, Viet Nam
- Hoa T. Nguyen: Can Tho University of Medicine and Pharmacy, Can Tho City, Viet Nam
- Thach S. Tran: School of Biomedical Engineering, University of Technology Sydney, Australia
- Lan T. Ho-Pham: Bone and Muscle Research Group, Ton Duc Thang University, Ho Chi Minh City, Viet Nam; Saigon Precision Medicine Research Center, Ho Chi Minh City, Viet Nam; BioMedical Research Center, Pham Ngoc Thach University of Medicine, Ho Chi Minh City, Viet Nam
- Sai H. Ling: School of Biomedical Engineering, University of Technology Sydney, Australia
- Tuan V. Nguyen: School of Biomedical Engineering, University of Technology Sydney, Australia; Tam Anh Research Institute, Tam Anh Hospital at Ho Chi Minh City, Ho Chi Minh City, Viet Nam

26
Gitto S, Serpi F, Albano D, Risoleo G, Fusco S, Messina C, Sconfienza LM. AI applications in musculoskeletal imaging: a narrative review. Eur Radiol Exp 2024; 8:22. [PMID: 38355767 PMCID: PMC10866817 DOI: 10.1186/s41747-024-00422-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Accepted: 12/29/2023] [Indexed: 02/16/2024] Open
Abstract
This narrative review focuses on clinical applications of artificial intelligence (AI) in musculoskeletal imaging. A range of musculoskeletal disorders are discussed using a clinical-based approach, including trauma, bone age estimation, osteoarthritis, bone and soft-tissue tumors, and orthopedic implant-related pathology. Several AI algorithms have been applied to fracture detection and classification, which are potentially helpful tools for radiologists and clinicians. In bone age assessment, AI methods have been applied to assist radiologists by automating workflow, thus reducing workload and inter-observer variability. AI may potentially aid radiologists in identifying and grading abnormal findings of osteoarthritis as well as predicting the onset or progression of this disease. Either alone or combined with radiomics, AI algorithms may potentially improve diagnosis and outcome prediction of bone and soft-tissue tumors. Finally, information regarding appropriate positioning of orthopedic implants and related complications may be obtained using AI algorithms. In conclusion, rather than replacing radiologists, the use of AI should instead help them to optimize workflow, augment diagnostic performance, and keep up with ever-increasing workload. Relevance statement: This narrative review provides an overview of AI applications in musculoskeletal imaging. As the number of AI technologies continues to increase, it will be crucial for radiologists to play a role in their selection and application as well as to fully understand their potential value in clinical practice. Key points: • AI may potentially assist musculoskeletal radiologists in several interpretative tasks. • AI applications to trauma, age estimation, osteoarthritis, tumors, and orthopedic implants are discussed. • AI should help radiologists to optimize workflow and augment diagnostic performance.
Affiliation(s)
- Salvatore Gitto: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Francesca Serpi: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- Domenico Albano: IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Milan, Italy
- Giovanni Risoleo: Scuola di Specializzazione in Radiodiagnostica, Università degli Studi di Milano, Milan, Italy
- Stefano Fusco: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy
- Carmelo Messina: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Luca Maria Sconfienza: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, Milan, 20157, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy

27
Hoy MK, Desai V, Mutasa S, Hoy RC, Gorniak R, Belair JA. Deep Learning-Assisted Identification of Femoroacetabular Impingement (FAI) on Routine Pelvic Radiographs. J Imaging Inform Med 2024; 37:339-346. [PMID: 38343231 DOI: 10.1007/s10278-023-00920-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Revised: 08/08/2023] [Accepted: 08/22/2023] [Indexed: 03/02/2024]
Abstract
This study used a novel deep learning system to localize the hip joints and detect findings of cam-type femoroacetabular impingement (FAI). A retrospective search of hip/pelvis radiographs obtained to evaluate patients for FAI yielded 3050 total studies. Each hip was classified separately by the original interpreting radiologist: 724 hips had severe cam-type FAI morphology, 962 moderate, 846 mild, and 518 hips were normal. The anteroposterior (AP) view from each study was anonymized and extracted. After localization of the hip joints by a novel convolutional neural network (CNN) based on the focal loss principle, a second CNN classified each hip as cam-positive or FAI-negative. Accuracy was 74% for diagnosing normal vs. abnormal cam-type FAI morphology, with aggregate sensitivity and specificity of 0.821 and 0.669, respectively, at the chosen operating point. The aggregate AUC was 0.736. A deep learning system can be applied to detect FAI-related changes on single-view pelvic radiographs. Deep learning is useful for quickly identifying and categorizing pathology on imaging, which may aid the interpreting radiologist.
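For context, operating-point metrics like the sensitivity and specificity quoted here are obtained by thresholding a classifier's predicted probabilities and counting the confusion-matrix cells. A minimal sketch on made-up data follows; the Wilson score interval is one common choice for the confidence bounds around such proportions, not necessarily the method this paper used.

```python
import numpy as np

def confusion_counts(y_true, y_prob, threshold):
    """Binarize predicted probabilities at `threshold`, then count the
    confusion-matrix cells."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return tp, fp, tn, fn

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval, one common way to attach a 95% CI to a
    sensitivity or specificity estimate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# toy data: 8 hips, ground truth and model probabilities
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_prob = [0.9, 0.7, 0.4, 0.2, 0.6, 0.1, 0.8, 0.3]
tp, fp, tn, fn = confusion_counts(y_true, y_prob, threshold=0.5)
sens, spec = tp / (tp + fn), tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity={sens:.3f} (95% CI {lo:.3f}-{hi:.3f}), specificity={spec:.3f}")
```

Sweeping the threshold and repeating the count is also how the ROC curve behind the reported AUC is traced.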
Affiliation(s)
- Vishal Desai
  - Thomas Jefferson University, Philadelphia, PA, USA
- Robert C Hoy
  - Temple University Hospital, Philadelphia, PA, USA

28
Maki S, Furuya T, Inoue M, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Machine Learning and Deep Learning in Spinal Injury: A Narrative Review of Algorithms in Diagnosis and Prognosis. J Clin Med 2024; 13:705. [PMID: 38337399 PMCID: PMC10856760 DOI: 10.3390/jcm13030705] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Revised: 01/14/2024] [Accepted: 01/18/2024] [Indexed: 02/12/2024] Open
Abstract
Spinal injuries, including cervical and thoracolumbar fractures, continue to be a major public health concern. Recent advancements in machine learning and deep learning technologies offer exciting prospects for improving both diagnostic and prognostic approaches in spinal injury care. This narrative review systematically explores the practical utility of these computational methods, with a focus on their application in imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI), as well as in structured clinical data. Of the 39 studies included, 34 were focused on diagnostic applications, chiefly using deep learning to carry out tasks like vertebral fracture identification, differentiation between benign and malignant fractures, and AO fracture classification. The remaining five were prognostic, using machine learning to analyze parameters for predicting outcomes such as vertebral collapse and future fracture risk. This review highlights the potential benefit of machine learning and deep learning in spinal injury care, especially their roles in enhancing diagnostic capabilities, detailed fracture characterization, risk assessments, and individualized treatment planning.
Affiliation(s)
- Satoshi Maki
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
  - Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
- Takeo Furuya
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Masahiro Inoue
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Yasuhiro Shiga
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Kazuhide Inage
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Yawara Eguchi
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Sumihisa Orita
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
  - Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
- Seiji Ohtori
  - Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan

29
Jung J, Dai J, Liu B, Wu Q. Artificial intelligence in fracture detection with different image modalities and data types: A systematic review and meta-analysis. PLOS DIGITAL HEALTH 2024; 3:e0000438. [PMID: 38289965 PMCID: PMC10826962 DOI: 10.1371/journal.pdig.0000438] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 12/25/2023] [Indexed: 02/01/2024]
Abstract
Artificial Intelligence (AI), encompassing Machine Learning and Deep Learning, has increasingly been applied to fracture detection using diverse imaging modalities and data types. This systematic review and meta-analysis aimed to assess the efficacy of AI in detecting fractures through various imaging modalities and data types (image, tabular, or both) and to synthesize the existing evidence related to AI-based fracture detection. Peer-reviewed studies developing and validating AI for fracture detection were identified through searches in multiple electronic databases without time limitations. A hierarchical meta-analysis model was used to calculate pooled sensitivity and specificity. A diagnostic accuracy quality assessment was performed to evaluate bias and applicability. Of the 66 eligible studies, 54 identified fractures using imaging-related data, nine using tabular data, and three using both. Vertebral fractures were the most common outcome (n = 20), followed by hip fractures (n = 18). Hip fractures exhibited the highest pooled sensitivity (92%; 95% CI: 87-96, p < 0.01) and specificity (90%; 95% CI: 85-93, p < 0.01). Pooled sensitivity and specificity using image data (92%; 95% CI: 90-94, p < 0.01; and 91%; 95% CI: 88-93, p < 0.01) were higher than those using tabular data (81%; 95% CI: 77-85, p < 0.01; and 83%; 95% CI: 76-88, p < 0.01), respectively. Radiographs demonstrated the highest pooled sensitivity (94%; 95% CI: 90-96, p < 0.01) and specificity (92%; 95% CI: 89-94, p < 0.01). Patient selection and reference standards were major concerns in assessing diagnostic accuracy for bias and applicability. AI displays high diagnostic accuracy for various fracture outcomes, indicating potential utility in healthcare systems for fracture diagnosis. However, enhanced transparency in reporting and adherence to standardized guidelines are necessary to improve the clinical applicability of AI. Review registration: PROSPERO (CRD42021240359).
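The pooled estimates above come from a hierarchical (bivariate) meta-analysis model. As a simplified illustration only, a univariate DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale looks like this; the counts are invented and the review's actual model is more elaborate.

```python
import numpy as np

def pool_logit_dl(tp, n):
    """Simplified DerSimonian-Laird random-effects pooling of per-study
    sensitivities on the logit scale (0.5 continuity correction).
    This univariate sketch only illustrates the idea behind pooling."""
    tp = np.asarray(tp, dtype=float) + 0.5   # continuity correction
    n = np.asarray(n, dtype=float) + 1.0
    p = tp / n
    y = np.log(p / (1 - p))                  # logit sensitivity per study
    v = 1 / tp + 1 / (n - tp)                # approx within-study variance
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)      # fixed-effect estimate
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, float((q - (len(y) - 1)) / c))  # between-study variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    return float(1 / (1 + np.exp(-y_re)))    # back-transform to a proportion

# toy per-study (true positives, number of diseased subjects)
pooled = pool_logit_dl(tp=[90, 45, 110], n=[100, 50, 120])
print(f"pooled sensitivity ~ {pooled:.3f}")
```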
Affiliation(s)
- Jongyun Jung
  - Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, Ohio, United States of America
- Jingyuan Dai
  - Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, Ohio, United States of America
- Bowen Liu
  - Department of Mathematics and Statistics, Division of Computing, Analytics, and Mathematics, School of Science and Engineering, University of Missouri-Kansas City, Kansas City, Missouri, United States of America
- Qing Wu
  - Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, Ohio, United States of America

30
Zhong S, Yin X, Li X, Feng C, Gao Z, Liao X, Yang S, He S. Artificial intelligence applications in bone fractures: A bibliometric and science mapping analysis. Digit Health 2024; 10:20552076241279238. [PMID: 39257873 PMCID: PMC11384526 DOI: 10.1177/20552076241279238] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Accepted: 08/13/2024] [Indexed: 09/12/2024] Open
Abstract
BACKGROUND Bone fractures are a common medical issue worldwide, causing a serious economic burden on society. In recent years, the application of artificial intelligence (AI) in the field of fracture has developed rapidly, especially in fracture diagnosis, where AI has shown capabilities comparable to those of professional orthopedic surgeons. This study aimed to review the development and applications of AI in the field of fracture using bibliometric analysis, while analyzing the research hotspots and future trends in the field. MATERIALS AND METHODS Studies on AI and fracture published since 1990 were retrieved from the Web of Science Core Collection; a retrospective bibliometric and visualized analysis of the filtered data was conducted with CiteSpace and the Bibliometrix R package. RESULTS A total of 1063 publications were included in the analysis, with the annual publication count growing rapidly since 2017. China had the most publications, and the United States had the most citations. The Technical University of Munich, Germany, had the most publications. Doornberg JN was the most productive author. Most research in this field was published in Scientific Reports. Doi K's 2007 review in Computerized Medical Imaging and Graphics was the most influential paper. CONCLUSION AI application in fracture has achieved outstanding results and will continue to progress. In this study, we used a bibliometric analysis to help researchers understand the basic knowledge structure, research hotspots, and future trends in this field, to further promote the development of AI applications in fracture.
Affiliation(s)
- Sen Zhong
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Xiaobing Yin
  - Nursing Department, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Xiaolan Li
  - Fuzhou Medical College of Nanchang University, School of Stomatology, Fuzhou, China
- Chaobo Feng
  - National Key Clinical Pain Medicine of China, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, China
- Zhiqiang Gao
  - Department of Joint Surgery, Shanghai East Hospital, Tongji University School of Medicine, Shanghai, China
- Xiang Liao
  - National Key Clinical Pain Medicine of China, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, China
- Sheng Yang
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China
- Shisheng He
  - Department of Orthopedic, Spinal Pain Research Institute, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, China

31
Nicolaes J, Liu Y, Zhao Y, Huang P, Wang L, Yu A, Dunkel J, Libanati C, Cheng X. External validation of a convolutional neural network algorithm for opportunistically detecting vertebral fractures in routine CT scans. Osteoporos Int 2024; 35:143-152. [PMID: 37674097 PMCID: PMC10786735 DOI: 10.1007/s00198-023-06903-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Accepted: 08/29/2023] [Indexed: 09/08/2023]
Abstract
The Convolutional Neural Network algorithm achieved a sensitivity of 94% and specificity of 93% in identifying scans with vertebral fractures (VFs). The external validation results suggest that the algorithm provides an opportunity to aid radiologists with the early identification of VFs in routine abdominal and chest CT scans. PURPOSE To evaluate the performance of a previously trained Convolutional Neural Network (CNN) model to automatically detect vertebral fractures (VFs) in CT scans in an external validation cohort. METHODS Two Chinese studies and clinical data were used to retrospectively select CT scans of the chest, abdomen, and thoracolumbar spine in men and women aged ≥50 years. The CT scans were assessed using the semiquantitative (SQ) Genant classification for prevalent VFs in a process blinded to clinical information. The performance of the CNN model was evaluated against reference standard readings by the area under the receiver operating characteristic curve (AUROC), accuracy, Cohen's kappa, sensitivity, and specificity. RESULTS A total of 4,810 subjects were included, with a median age of 62 years (IQR 56-67), of whom 2,654 (55.2%) were female. The scans were acquired between January 2013 and January 2019 on 16 different CT scanners from three manufacturers; 2,773 (57.7%) were abdominal CTs. A total of 628 scans (13.1%) had ≥1 VF (grade 2-3), representing 899 fractured vertebrae out of a total of 48,584 (1.9%) visualized vertebral bodies. The CNN's performance in identifying scans with ≥1 moderate or severe fracture achieved an AUROC of 0.94 (95% CI: 0.93-0.95), accuracy of 93% (95% CI: 93%-94%), kappa of 0.75 (95% CI: 0.72-0.77), sensitivity of 94% (95% CI: 92-96%), and specificity of 93% (95% CI: 93-94%). CONCLUSION The algorithm demonstrated excellent performance in the identification of vertebral fractures in a cohort of chest and abdominal CT scans of Chinese patients aged ≥50 years.
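The Genant semiquantitative (SQ) scale used as the reference standard here maps percentage vertebral height loss to grades 0-3, and the study counts a scan as positive when any vertebra reaches grade 2-3. A minimal sketch of that mapping follows; the grade thresholds are the standard Genant cut-offs, while how height loss itself is measured (anterior, middle, or posterior height versus expected height) varies by reader and is not modeled here.

```python
def genant_grade(height_loss_pct):
    """Map percentage vertebral height loss to the Genant semiquantitative
    (SQ) grade: <20% normal, 20-25% mild, 25-40% moderate, >40% severe."""
    if height_loss_pct < 20:
        return 0  # grade 0: normal
    if height_loss_pct < 25:
        return 1  # grade 1: mild fracture
    if height_loss_pct < 40:
        return 2  # grade 2: moderate fracture
    return 3      # grade 3: severe fracture

def scan_is_positive(grades, min_grade=2):
    """A scan counts as VF-positive if any vertebra reaches `min_grade`;
    the study above treated grade 2-3 as positive."""
    return any(g >= min_grade for g in grades)

# toy per-vertebra height losses (%) for one scan
heights = [5, 12, 22, 30, 45]
grades = [genant_grade(h) for h in heights]
print(grades)                   # -> [0, 0, 1, 2, 3]
print(scan_is_positive(grades)) # -> True
```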
Affiliation(s)
- Joeri Nicolaes
  - Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
  - UCB Pharma, Brussels, Belgium
- Yandong Liu
  - Department of Radiology, Beijing Jishuitan Hospital, Beijing, 100035, China
- Yue Zhao
  - Department of Radiology, Qingdao Fuwaicardiovascular Hospital, Qingdao, 26600, China
- Pengju Huang
  - Department of Radiology, Beijing Anding Hospital, Beijing, 100120, China
- Ling Wang
  - Department of Radiology, Beijing Jishuitan Hospital, Beijing, 100035, China
- Aihong Yu
  - Department of Radiology, Beijing Anding Hospital, Beijing, 100120, China
- Xiaoguang Cheng
  - Department of Radiology, Beijing Jishuitan Hospital, Beijing, 100035, China

32
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. [PMID: 37955147 DOI: 10.1080/14737167.2023.2279107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 10/31/2023] [Indexed: 11/14/2023]
Abstract
INTRODUCTION The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings published between 1 January 2010 and 27 March 2023 was conducted. RESULTS In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main application of AI in medicine related to clinical prediction and disease prognosis in oncology and neurology with the use of imaging data. Accuracy, specificity, and sensitivity were provided in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation was reported in 53% and less than 1% of the cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION The review indicated considerable reporting gaps in ML performance, for both internal and external validation. Greater accessibility to healthcare data for developers can ensure faster adoption of ML algorithms into clinical practice.
Affiliation(s)
- Katarzyna Kolasa
  - Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
  - Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland

33
Nicolaes J, Skjødt MK, Raeymaeckers S, Smith CD, Abrahamsen B, Fuerst T, Debois M, Vandermeulen D, Libanati C. Towards Improved Identification of Vertebral Fractures in Routine Computed Tomography (CT) Scans: Development and External Validation of a Machine Learning Algorithm. J Bone Miner Res 2023; 38:1856-1866. [PMID: 37747147 DOI: 10.1002/jbmr.4916] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 09/06/2023] [Accepted: 09/17/2023] [Indexed: 09/26/2023]
Abstract
Vertebral fractures (VFs) are the hallmark of osteoporosis, being one of the most frequent types of fragility fracture and an early sign of the disease. They are associated with significant morbidity and mortality. VFs are incidentally found in one out of five imaging studies; however, more than half of VFs are neither identified nor reported in patient computed tomography (CT) scans. Our study aimed to develop a machine learning algorithm to identify VFs in abdominal/chest CT scans and evaluate its performance. We acquired two independent data sets of routine abdominal/chest CT scans of patients aged 50 years or older: a training set of 1011 scans from a non-interventional, prospective proof-of-concept study at the Universitair Ziekenhuis (UZ) Brussel and a validation set of 2000 subjects from an observational cohort study at the Hospital of Holbaek. Both data sets were externally reevaluated to identify reference standard VF readings using the Genant semiquantitative (SQ) grading. Four independent models were trained in a cross-validation experiment using the training set, and an ensemble of the four models was applied to the external validation set. One or more VFs (SQ2-3) were present in 15.3% of the validation scans, and 663 of 24,930 evaluable vertebrae (2.7%) were fractured (SQ2-3) per the reference standard readings. Comparison of the ensemble model with the reference standard readings in identifying subjects with one or more moderate or severe VFs resulted in an area under the receiver operating characteristic curve (AUROC) of 0.88 (95% confidence interval [CI], 0.85-0.90), accuracy of 0.92 (95% CI, 0.91-0.93), kappa of 0.72 (95% CI, 0.67-0.76), sensitivity of 0.81 (95% CI, 0.76-0.85), and specificity of 0.95 (95% CI, 0.93-0.96). We demonstrated that a machine learning algorithm trained for VF detection achieved strong performance on an external validation set. It has the potential to support healthcare professionals with the early identification of VFs and prevention of future fragility fractures.
Affiliation(s)
- Joeri Nicolaes
  - Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium
  - UCB Pharma, Brussels, Belgium
- Michael Kriegbaum Skjødt
  - Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
  - OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Christopher Dyer Smith
  - OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
- Bo Abrahamsen
  - Department of Medicine, Hospital of Holbaek, Holbaek, Denmark
  - OPEN-Open Patient Data Explorative Network, Department of Clinical Research, University of Southern Denmark and Odense University Hospital, Odense, Denmark
  - NDORMS, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Oxford University Hospitals, Oxford, UK
- Dirk Vandermeulen
  - Department of Electrical Engineering (ESAT), Center for Processing Speech and Images, KU Leuven, Leuven, Belgium

34
Dong Q, Luo G, Lane NE, Lui LY, Marshall LM, Johnston SK, Dabbous H, O'Reilly M, Linnau KF, Perry J, Chang BC, Renslo J, Haynor D, Jarvik JG, Cross NM. Generalizability of Deep Learning Classification of Spinal Osteoporotic Compression Fractures on Radiographs Using an Adaptation of the Modified-2 Algorithm-Based Qualitative Criteria. Acad Radiol 2023; 30:2973-2987. [PMID: 37438161 PMCID: PMC10776803 DOI: 10.1016/j.acra.2023.04.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 04/13/2023] [Accepted: 04/20/2023] [Indexed: 07/14/2023]
Abstract
RATIONALE AND OBJECTIVES Spinal osteoporotic compression fractures (OCFs) can be an early biomarker for osteoporosis but are often subtle, incidental, and underreported. To ensure early diagnosis and treatment of osteoporosis, we aimed to build a deep learning vertebral body classifier for OCFs as a critical component of our future automated opportunistic screening tool. MATERIALS AND METHODS We retrospectively assembled a local dataset, including 1790 subjects and 15,050 vertebral bodies (thoracic and lumbar). Each vertebral body was annotated using an adaptation of the modified-2 algorithm-based qualitative criteria. The Osteoporotic Fractures in Men (MrOS) Study dataset provided thoracic and lumbar spine radiographs of 5994 men from six clinical centers. Using both datasets, five deep learning algorithms were trained to classify each individual vertebral body of the spine radiographs. Classification performance was compared for these models using multiple metrics, including the area under the receiver operating characteristic curve (AUC-ROC), sensitivity, specificity, and positive predictive value (PPV). RESULTS Our best model, built with ensemble averaging, achieved an AUC-ROC of 0.948 and 0.936 on the local dataset's test set and the MrOS dataset's test set, respectively. After setting the cutoff threshold to prioritize PPV, this model achieved a sensitivity of 54.5% and 47.8%, a specificity of 99.7% and 99.6%, and a PPV of 89.8% and 94.8%. CONCLUSION Our model achieved an AUC-ROC > 0.90 on both datasets. This testing shows some generalizability to real-world clinical datasets and suitable performance for a future opportunistic osteoporosis screening tool.
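Setting a cutoff threshold to prioritize PPV, as this abstract describes, can be sketched as a sweep over candidate thresholds that keeps the most sensitive operating point still meeting a PPV target. The exact procedure the authors used is not specified in the abstract, so the function name and target value below are illustrative assumptions.

```python
import numpy as np

def threshold_for_target_ppv(y_true, y_prob, target_ppv=0.9):
    """Return the lowest probability cutoff whose positive predictive value
    (PPV) reaches `target_ppv`, i.e. the most sensitive operating point
    that still satisfies the PPV constraint. Returns None if none does."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob, dtype=float)
    for t in sorted(set(y_prob)):        # scan candidate cutoffs low -> high
        pred = y_prob >= t
        tp = int(np.sum(pred & (y_true == 1)))
        fp = int(np.sum(pred & (y_true == 0)))
        if tp + fp and tp / (tp + fp) >= target_ppv:
            return float(t)              # a lower cutoff means higher sensitivity
    return None

# toy vertebra-level labels and model probabilities
t = threshold_for_target_ppv([0, 0, 1, 0, 1, 1, 1],
                             [0.1, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9],
                             target_ppv=0.75)
print(t)  # -> 0.45
```

Trading sensitivity for PPV in this way matches the reported pattern of modest sensitivity with very high specificity and PPV.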
Affiliation(s)
- Qifei Dong
  - Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington
- Gang Luo
  - Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington
- Nancy E Lane
  - Department of Medicine, University of California - Davis, Sacramento, California
- Li-Yung Lui
  - Research Institute, California Pacific Medical Center, San Francisco, California
- Lynn M Marshall
  - Epidemiology Programs, Oregon Health and Science University-Portland State University School of Public Health, Portland, Oregon
- Sandra K Johnston
  - Department of Radiology, University of Washington, Seattle, Washington
- Howard Dabbous
  - Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia
- Michael O'Reilly
  - Department of Radiology, University of Limerick Hospital Group, Limerick, Ireland
- Ken F Linnau
  - Department of Radiology, University of Washington, Seattle, Washington
- Jessica Perry
  - Department of Biostatistics, University of Washington, Seattle, Washington
- Brian C Chang
  - Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, Washington
- Jonathan Renslo
  - Keck School of Medicine, University of Southern California, Los Angeles, California
- David Haynor
  - Department of Radiology, University of Washington, Seattle, Washington
- Jeffrey G Jarvik
  - Departments of Radiology and Neurological Surgery, University of Washington, Seattle, Washington
- Nathan M Cross
  - Department of Radiology, University of Washington, Seattle, Washington

35
Jo SW, Khil EK, Lee KY, Choi I, Yoon YS, Cha JG, Lee JH, Kim H, Lee SY. Deep learning system for automated detection of posterior ligamentous complex injury in patients with thoracolumbar fracture on MRI. Sci Rep 2023; 13:19017. [PMID: 37923853 PMCID: PMC10624679 DOI: 10.1038/s41598-023-46208-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 10/29/2023] [Indexed: 11/06/2023] Open
Abstract
This study aimed to develop a deep learning (DL) algorithm for automated detection and localization of posterior ligamentous complex (PLC) injury in patients with acute thoracolumbar (TL) fracture on magnetic resonance imaging (MRI) and to evaluate its diagnostic performance. In this retrospective multicenter study, using the midline sagittal T2-weighted image of each fracture (with or without PLC injury), a training dataset and internal and external validation sets of 300, 100, and 100 patients were constructed with equal numbers of injured and normal PLCs. The DL algorithm was developed in two steps (Attention U-net and Inception-ResNet-V2). We evaluated the diagnostic performance for PLC injury of the DL algorithm and of radiologists with different levels of experience. The areas under the curve (AUCs) generated by the DL algorithm were 0.928 and 0.916 for internal and external validation, and those of the two radiologists in the observer performance test were 0.930 and 0.830, respectively. Although no significant difference was found in diagnosing PLC injury between the DL algorithm and the radiologists, the DL algorithm exhibited a trend toward a higher AUC than the radiology trainee. Notably, the radiology trainee's diagnostic performance improved significantly with DL algorithm assistance. Therefore, the DL algorithm exhibited high diagnostic performance in detecting PLC injuries in acute TL fractures.
Affiliation(s)
- Sang Won Jo
  - Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Eun Kyung Khil
  - Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
  - Department of Radiology, Fastbone Orthopedic Hospital, Hwaseong-si, Republic of Korea
- Kyoung Yeon Lee
  - Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, 7, Keunjaebong-gil, Hwaseong-si, Republic of Korea
- Il Choi
  - Department of Neurologic Surgery, Hallym University Dongtan Sacred Heart Hospital, Hwaseong-si, Republic of Korea
- Yu Sung Yoon
  - Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea
  - Department of Radiology, Kyungpook National University Hospital, Daegu, Republic of Korea
- Jang Gyu Cha
  - Department of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon, Republic of Korea

36
Niemeyer F, Galbusera F, Tao Y, Phillips FM, An HS, Louie PK, Samartzis D, Wilke HJ. Deep phenotyping the cervical spine: automatic characterization of cervical degenerative phenotypes based on T2-weighted MRI. Eur Spine J 2023; 32:3846-3856. [PMID: 37644278 DOI: 10.1007/s00586-023-07909-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 04/17/2023] [Accepted: 08/17/2023] [Indexed: 08/31/2023]
Abstract
PURPOSE Radiological degenerative phenotypes provide insight into a patient's overall extent of disease and can be predictive of future pathological developments as well as surgical outcomes and complications. The objective of this study was to develop a reliable method for automatically classifying sagittal MRI image stacks of cervical spinal segments with respect to these degenerative phenotypes. METHODS We manually evaluated sagittal image data of the cervical spine of 873 patients (5182 motion segments) with respect to 5 radiological phenotypes. We then used this data set as ground truth for training a range of multi-class, multi-label deep learning models to classify each motion segment automatically, followed by hyper-parameter optimization. RESULTS The ground truth evaluations were relatively balanced for the labels disc displacement posterior, osteophyte anterior superior, osteophyte posterior superior, and osteophyte posterior inferior. Although we could not identify a single model that worked equally well across all the labels, the 3D-convolutional approach was preferable for classifying all labels. CONCLUSIONS Class imbalance in the training data and label noise made it difficult to achieve high predictive power for underrepresented classes. This shortcoming will be mitigated in future versions by extending the training data set accordingly. Nevertheless, the classification performance rivals, and in some cases surpasses, that of human raters, while speeding up the evaluation process to only a few seconds.
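A multi-class multi-label setup like the one described here typically gives each phenotype its own independent sigmoid output and trains with binary cross-entropy averaged over labels, since one motion segment can carry several findings at once. The sketch below illustrates that objective on toy numbers; it is an assumption about the training loss, as the abstract does not state which objective the authors used.

```python
import numpy as np

def multilabel_bce(y_true, y_prob, eps=1e-7):
    """Mean binary cross-entropy over independent labels -- the usual
    objective when a segment can carry several phenotypes at once
    (e.g. disc displacement posterior plus one or more osteophyte labels),
    as opposed to picking a single mutually exclusive class."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_prob)
                           + (1 - y_true) * np.log(1 - y_prob))))

# one motion segment, five phenotype labels (toy probabilities)
loss = multilabel_bce([1, 0, 1, 0, 0], [0.8, 0.1, 0.7, 0.2, 0.1])
print(f"per-segment loss = {loss:.3f}")
```

Class imbalance of the kind the authors report is often countered by re-weighting the per-label terms of exactly this loss.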
Affiliation(s)
- Frank Niemeyer
  - Institute for Orthopaedic Research and Biomechanics, Trauma Research Center Ulm, University Hospital Ulm, Ulm, Germany
- Fabio Galbusera
  - Department of Teaching, Research and Development, Schulthess Clinic, Spine Center, Lengghalde 2, 8008, Zurich, Switzerland
- Youping Tao
  - Institute for Orthopaedic Research and Biomechanics, Trauma Research Center Ulm, University Hospital Ulm, Ulm, Germany
- Frank M Phillips
  - Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Howard S An
  - Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Philip K Louie
  - Spine Clinic, Virginia Mason Medical Center, Seattle, WA, USA
- Dino Samartzis
  - International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, IL, USA
- Hans-Joachim Wilke
  - Institute for Orthopaedic Research and Biomechanics, Trauma Research Center Ulm, University Hospital Ulm, Ulm, Germany

37
Woodman RJ, Mangoni AA. A comprehensive review of machine learning algorithms and their application in geriatric medicine: present and future. Aging Clin Exp Res 2023; 35:2363-2397. [PMID: 37682491 PMCID: PMC10627901 DOI: 10.1007/s40520-023-02552-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Accepted: 08/24/2023] [Indexed: 09/09/2023]
Abstract
The increasing access to health data worldwide is driving a resurgence in machine learning research, including data-hungry deep learning algorithms. More computationally efficient algorithms now offer unique opportunities to enhance diagnosis, risk stratification, and individualised approaches to patient management. Such opportunities are particularly relevant for the management of older patients, a group that is characterised by complex multimorbidity patterns and significant interindividual variability in homeostatic capacity, organ function, and response to treatment. Clinical tools that utilise machine learning algorithms to determine the optimal choice of treatment are slowly gaining the necessary approval from governing bodies and being implemented into healthcare, with significant implications for virtually all medical disciplines during the next phase of digital medicine. Beyond obtaining regulatory approval, a crucial element in implementing these tools is the trust and support of the people who use them. In this context, an increased understanding by clinicians of artificial intelligence and machine learning algorithms provides an appreciation of the possible benefits, risks, and uncertainties, and improves the chances of successful adoption. This review provides a broad taxonomy of machine learning algorithms, followed by a more detailed description of each algorithm class, their purpose and capabilities, and examples of their applications, particularly in geriatric medicine. Additional focus is given to the clinical implications and challenges involved in relying on devices with reduced interpretability and the progress made in counteracting the latter via the development of explainable machine learning.
Affiliation(s)
- Richard J Woodman
- Centre of Epidemiology and Biostatistics, College of Medicine and Public Health, Flinders University, GPO Box 2100, Adelaide, SA, 5001, Australia
- Arduino A Mangoni
- Discipline of Clinical Pharmacology, College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
- Department of Clinical Pharmacology, Flinders Medical Centre, Southern Adelaide Local Health Network, Adelaide, SA, Australia
38
Song Q, diFlorio-Alexander RM, Sieberg RT, Dwan D, Boyce W, Stumetz K, Patel SD, Karagas MR, MacKenzie TA, Hassanpour S. Automated classification of fat-infiltrated axillary lymph nodes on screening mammograms. Br J Radiol 2023; 96:20220835. [PMID: 37751215 PMCID: PMC10607412 DOI: 10.1259/bjr.20220835] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 06/06/2023] [Accepted: 07/16/2023] [Indexed: 09/27/2023] Open
Abstract
OBJECTIVE Fat-infiltrated axillary lymph nodes (LNs) are unique sites for ectopic fat deposition. Early studies showed a strong correlation between fatty LNs and obesity-related diseases. Confirming this correlation requires large-scale studies, hindered by scarce labeled data. With the long-term goal of developing a rapid and generalizable tool to aid data labeling, we developed an automated deep learning (DL)-based pipeline to classify the status of fatty LNs on screening mammograms. METHODS Our internal data set included 886 mammograms from a tertiary academic medical institution, with a binary status of the fat-infiltrated LNs based on the size and morphology of the largest visible axillary LN. A two-stage DL model training and fine-tuning pipeline was developed to classify the fat-infiltrated LN status using the internal training and development data set. The model was evaluated on a held-out internal test set and a subset of the Digital Database for Screening Mammography. RESULTS Our model achieved 0.97 (95% CI: 0.94-0.99) accuracy and 1.00 (95% CI: 1.00-1.00) area under the receiver operating characteristic curve on 264 internal testing mammograms, and 0.82 (95% CI: 0.77-0.86) accuracy and 0.87 (95% CI: 0.82-0.91) area under the receiver operating characteristic curve on 70 external testing mammograms. CONCLUSION This study confirmed the feasibility of using a DL model for fat-infiltrated LN classification. The model provides a practical tool to identify fatty LNs on mammograms and to allow for future large-scale studies to evaluate the role of fatty LNs as an imaging biomarker of obesity-associated pathologies. ADVANCES IN KNOWLEDGE Our study is the first to classify fatty LNs using an automated DL approach.
Affiliation(s)
- Qingyuan Song
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Lebanon, New Hampshire, United States
- Ryan T. Sieberg
- Department of Radiology, School of Medicine, University of California, San Francisco, California, United States
- Dennis Dwan
- Department of Internal Medicine, Carney Hospital, Dorchester, Massachusetts, United States
- William Boyce
- Geisel School of Medicine, Dartmouth College, Lebanon, New Hampshire, United States
- Kyle Stumetz
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, United States
- Sohum D. Patel
- Department of Radiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire, United States
- Margaret R. Karagas
- Department of Epidemiology, Geisel School of Medicine, Dartmouth College, Lebanon, New Hampshire, United States
- Todd A. MacKenzie
- Department of Biomedical Data Science, Geisel School of Medicine, Dartmouth College, Lebanon, New Hampshire, United States
39
Wu W, Liu X, Hamilton RB, Suriawinata AA, Hassanpour S. Graph Convolutional Neural Networks for Histologic Classification of Pancreatic Cancer. Arch Pathol Lab Med 2023; 147:1251-1260. [PMID: 36669509 PMCID: PMC10356903 DOI: 10.5858/arpa.2022-0035-oa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/29/2022] [Indexed: 01/22/2023]
Abstract
CONTEXT.— Pancreatic ductal adenocarcinoma has some of the worst prognostic outcomes among various cancer types. Detection of histologic patterns of pancreatic tumors is essential to predict prognosis and guide treatment for patients. This histologic classification can have a large degree of variability even among expert pathologists. OBJECTIVE.— To detect aggressive adenocarcinoma and less aggressive pancreatic tumors from nonneoplasm cases using a graph convolutional network-based deep learning model. DESIGN.— Our model uses a convolutional neural network to extract detailed information from every small region in a whole slide image. Then, we use a graph architecture to aggregate the extracted features from these regions and their positional information to capture the whole slide-level structure and make the final prediction. RESULTS.— We evaluated our model on an independent test set and achieved an F1 score of 0.85 for detecting neoplastic cells and ductal adenocarcinoma, significantly outperforming other baseline methods. CONCLUSIONS.— If validated in prospective studies, this approach has great potential to assist pathologists in identifying adenocarcinoma and other types of pancreatic tumors in clinical settings.
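The exact architecture is not given in the abstract; the core graph-convolution idea it describes, averaging each patch's CNN feature vector with those of its spatial neighbours before a learned projection, can be sketched in NumPy (the adjacency, features, and weights below are illustrative placeholders, not the paper's values):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step over whole-slide patches:
    add self-loops, row-normalize the adjacency, average each patch's
    features with its neighbours', then project and apply ReLU."""
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    return np.maximum((a @ feats) @ weight, 0.0)

# Three patches in a row (patch 1 adjacent to 0 and 2), 2-d features
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
feats = np.array([[1., 0.], [0., 1.], [1., 1.]])
out = gcn_layer(adj, feats, np.eye(2))
```

Stacking such layers lets patch-level features propagate across the slide, which is how positional context reaches the final slide-level prediction.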
Affiliation(s)
- Weiyi Wu
- Department of Biomedical Data Science, Geisel School of Medicine, Hanover, New Hampshire
- Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Robert B Hamilton
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Arief A Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine, Hanover, New Hampshire
- Department of Epidemiology, Geisel School of Medicine, Hanover, New Hampshire
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire
40
Shen L, Gao C, Hu S, Kang D, Zhang Z, Xia D, Xu Y, Xiang S, Zhu Q, Xu G, Tang F, Yue H, Yu W, Zhang Z. Using Artificial Intelligence to Diagnose Osteoporotic Vertebral Fractures on Plain Radiographs. J Bone Miner Res 2023; 38:1278-1287. [PMID: 37449775 DOI: 10.1002/jbmr.4879] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 06/18/2023] [Accepted: 07/06/2023] [Indexed: 07/18/2023]
Abstract
Osteoporotic vertebral fracture (OVF) is a risk factor for morbidity and mortality in the elderly population, and accurate diagnosis is important for improving treatment outcomes. OVF diagnosis suffers from high misdiagnosis and underdiagnosis rates, as well as a high workload. Deep learning methods applied to plain radiographs, a simple, fast, and inexpensive examination, might solve this problem. We developed and validated a deep-learning-based vertebral fracture diagnostic system using an area loss ratio, in which a multitasking network performs skeletal position detection and segmentation and then identifies and grades vertebral fractures. As the training set and internal validation set, we used 11,397 plain radiographs from six community centers in Shanghai. For the external validation set, 1276 participants were recruited from the outpatient clinic of the Shanghai Sixth People's Hospital (1276 plain radiographs). Radiologists reviewed all X-ray images and used the Genant semiquantitative tool for fracture diagnosis and grading as the ground truth data. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were used to evaluate diagnostic performance. The AI_OVF_SH system demonstrated high accuracy and computational speed in skeletal position detection and segmentation. In the internal validation set, the accuracy, sensitivity, and specificity with the AI_OVF_SH model were 97.41%, 84.08%, and 97.25%, respectively, for all fractures. The sensitivity and specificity for moderate fractures were 88.55% and 99.74%, respectively, and for severe fractures, they were 92.30% and 99.92%. In the external validation set, the accuracy, sensitivity, and specificity for all fractures were 96.85%, 83.35%, and 94.70%, respectively. For moderate fractures, the sensitivity and specificity were 85.61% and 99.85%, respectively, and 93.46% and 99.92% for severe fractures. Therefore, the AI_OVF_SH system is an efficient tool to assist radiologists and clinicians in improving the diagnosis of vertebral fractures. © 2023 The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research (ASBMR).
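For reference, the metrics quoted in these validation studies are plain confusion-matrix summaries; a small helper makes the definitions concrete (the counts in the example are made up for illustration, not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from the
    four cells of a binary confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # recall on fractured cases
        "specificity": tn / (tn + fp),   # recall on intact cases
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only
m = diagnostic_metrics(tp=84, fp=3, tn=97, fn=16)
```

With these hypothetical counts the helper returns an accuracy of 0.905, a sensitivity of 0.84, and a specificity of 0.97, mirroring the sensitivity/specificity trade-off visible in the figures above.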
Affiliation(s)
- Li Shen
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Chao Gao
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shundong Hu
- Department of Radiology, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dan Kang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Zhaogang Zhang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
- Dongdong Xia
- Department of Orthopaedics, Ning Bo First Hospital, Zhejiang, China
- Yiren Xu
- Department of Radiology, Ning Bo First Hospital, Zhejiang, China
- Shoukui Xiang
- Department of Endocrinology and Metabolism, The First People's Hospital of Changzhou, Changzhou, China
- Qiong Zhu
- Kangjian Community Health Service Center, Shanghai, China
- GeWen Xu
- Kangjian Community Health Service Center, Shanghai, China
- Feng Tang
- Jinhui Community Health Service Center, Shanghai, China
- Hua Yue
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Yu
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
- Zhenlin Zhang
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
41
Zeng B, Wang H, Xu J, Tu P, Joskowicz L, Chen X. Two-Stage Structure-Focused Contrastive Learning for Automatic Identification and Localization of Complex Pelvic Fractures. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2751-2762. [PMID: 37030821 DOI: 10.1109/tmi.2023.3264298] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Pelvic fracture is a severe trauma with a high rate of morbidity and mortality. Accurate and automatic diagnosis and surgical planning of pelvic fracture require effective identification and localization of the fracture zones. This is a challenging task due to the complexity of pelvic fractures, which often exhibit multiple fragments and sites, large fragment size differences, and irregular morphology. We have developed a novel two-stage method for the automatic identification and localization of complex pelvic fractures. Our method is unique in that it combines the symmetry properties of the pelvic anatomy and captures the symmetric feature differences caused by the fracture on both the left and right sides, thereby overcoming the limitations of existing methods which consider only image or geometric features. It implements supervised contrastive learning with a novel Siamese deep neural network, which consists of two weight-shared branches with a structural attention mechanism, to minimize the confusion of local complex structures of the pelvic bones with the fracture zones. A structure-focused attention (SFA) module captures spatial structural features and enhances the recognition of fracture zones. Comprehensive experiments on 103 clinical CT scans from the publicly available dataset CTPelvic1K show that our method achieves a mean accuracy and sensitivity of 0.92 and 0.93, superior to those reported with three SOTA contrastive learning methods and five advanced classification networks, demonstrating its effectiveness in identifying and localizing various types of complex pelvic fractures from clinical CT images.
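The network weights are not public, but the underlying idea, weight-shared branches embed the left side and the mirrored right side, and a fracture shows up as a loss of symmetry between the two embeddings, can be sketched with a simple cosine-distance comparison (the feature vectors below stand in for the branch outputs):

```python
import numpy as np

def symmetry_score(feat_left, feat_right_mirrored):
    """Cosine distance between embeddings of the left hemipelvis and
    the mirrored right hemipelvis; intact symmetric anatomy gives a
    score near 0, while a unilateral fracture pushes it toward 1."""
    a = np.asarray(feat_left, dtype=float)
    b = np.asarray(feat_right_mirrored, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

A contrastive loss then trains the shared branches so that symmetric (healthy) pairs score low and fractured pairs score high, which is the comparison the paper's Siamese design exploits.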
42
Ackermann J, Hoch A, Snedeker JG, Zingg PO, Esfandiari H, Fürnstahl P. Automatic 3D Postoperative Evaluation of Complex Orthopaedic Interventions. J Imaging 2023; 9:180. [PMID: 37754944 PMCID: PMC10532700 DOI: 10.3390/jimaging9090180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 08/21/2023] [Accepted: 08/27/2023] [Indexed: 09/28/2023] Open
Abstract
In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study, we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions, using Periacetabular Osteotomy of Ganz (PAO) as an example. A typical orthopaedic intervention involves cutting bone, anatomy manipulation and repositioning, and implant placement. Our method includes a segmentation-based deep learning approach for detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error for the coarse and fine alignment of 2.1 mm. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for screw head location and an angular deviation of 1.1° for screw axis. As a next step, we will explore generalisation capabilities by applying the method to different interventions.
Affiliation(s)
- Joëlle Ackermann
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Armando Hoch
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Jess Gerrit Snedeker
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Patrick Oliver Zingg
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
43
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. FRONTIERS IN RADIOLOGY 2023; 3:1242902. [PMID: 37609456 PMCID: PMC10440743 DOI: 10.3389/fradi.2023.1242902] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 07/26/2023] [Indexed: 08/24/2023]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
44
Guenoun D, Champsaur P. Opportunistic Computed Tomography Screening for Osteoporosis and Fracture. Semin Musculoskelet Radiol 2023; 27:451-456. [PMID: 37748468 DOI: 10.1055/s-0043-1771037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/27/2023]
Abstract
Osteoporosis is underdiagnosed and undertreated, leading to loss of treatment for the patient and high costs for the health care system. Routine thoracic and/or abdominal computed tomography (CT) performed for other indications can screen opportunistically for osteoporosis with no extra cost, time, or irradiation. Various methods can quantify fracture risk on opportunistic clinical CT: vertebral Hounsfield unit bone mineral density (BMD), usually of L1; BMD measurement with asynchronous or internal calibration; quantitative CT; bone texture assessment; and finite element analysis. Screening for osteoporosis and vertebral fractures on opportunistic CT is a promising approach, providing automated fracture risk scores by means of artificial intelligence, thus enabling earlier management.
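As a concrete example of the first method listed, vertebral Hounsfield unit BMD at L1, a minimal sketch might average the attenuation inside a trabecular region of interest and compare it against a published cutoff (the 110 HU threshold below is one value proposed in the literature for flagging possible osteoporosis, used here purely for illustration):

```python
import numpy as np

def l1_hu_screen(roi_hu, cutoff=110.0):
    """Mean attenuation (in HU) over a trabecular ROI of the L1
    vertebral body; values below the cutoff flag the scan for a
    dedicated osteoporosis work-up. Cutoff is illustrative."""
    mean_hu = float(np.mean(roi_hu))
    return mean_hu, mean_hu < cutoff

# Hypothetical HU samples from an oval ROI in L1
mean_hu, flagged = l1_hu_screen([92.0, 88.0, 101.0, 95.0])
```

This is the "no extra cost, time, or irradiation" appeal of opportunistic screening: the ROI statistics come from a CT that was acquired for another indication anyway.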
Affiliation(s)
- Daphne Guenoun
- APHM, Sainte-Marguerite Hospital, Institute for Locomotion, Department of Radiology, Marseille, France
- Aix-Marseille University, CNRS, Institut des Sciences du Mouvement, Marseille, France
- Pierre Champsaur
- APHM, Sainte-Marguerite Hospital, Institute for Locomotion, Department of Radiology, Marseille, France
- Aix-Marseille University, CNRS, Institut des Sciences du Mouvement, Marseille, France
45
Page JH, Moser FG, Maya MM, Prasad R, Pressman BD. Opportunistic CT Screening-Machine Learning Algorithm Identifies Majority of Vertebral Compression Fractures: A Cohort Study. JBMR Plus 2023; 7:e10778. [PMID: 37614306 PMCID: PMC10443072 DOI: 10.1002/jbm4.10778] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 05/17/2023] [Indexed: 08/25/2023] Open
Abstract
Vertebral compression fractures (VCF) are common in patients older than 50 years but are often undiagnosed. Zebra Medical Imaging developed a machine learning algorithm to detect VCFs from CT images of the chest and/or abdomen/pelvis. In this study, we evaluated the diagnostic performance of the algorithm in identifying VCF. We conducted a blinded validation study to estimate the operating characteristics of the algorithm in identifying VCFs using previously completed CT scans from 1200 women and men aged 50 years and older at a tertiary-care center. Each scan was independently evaluated by two of three neuroradiologists to identify and grade VCF. Disagreements were resolved by a senior neuroradiologist. The algorithm evaluated the CT scans in a separate workstream. The VCF algorithm could not evaluate CT scans for 113 participants. Of the remaining 1087 study participants, 588 (54%) were women. Median age was 73 years (range 51-102 years; interquartile range 66-81). For the 1087 algorithm-evaluated participants, the sensitivity and specificity of the VCF algorithm in diagnosing any VCF were 0.66 (95% confidence interval [CI] 0.59-0.72) and 0.90 (95% CI 0.88-0.92), respectively, and for diagnosing moderate/severe VCF were 0.78 (95% CI 0.70-0.85) and 0.87 (95% CI 0.85-0.89), respectively. Implementing this VCF algorithm within radiology systems may help to identify patients at increased fracture risk and could support the diagnosis of osteoporosis and facilitate appropriate therapy. © 2023 Amgen, Inc. JBMR Plus published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research.
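The confidence intervals quoted for sensitivity and specificity are intervals on a binomial proportion; one standard way to compute such an interval is the Wilson score method, sketched below (the counts in the example are hypothetical, not taken from the study):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, e.g.
    sensitivity (successes = true positives, n = all positive cases)."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical: 50 fractures detected out of 100 true fractures
lo, hi = wilson_ci(50, 100)
```

The Wilson interval behaves better than the naive normal approximation when the proportion is near 0 or 1 or the sample is small, which matters for the smaller moderate/severe subgroups reported above.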
Affiliation(s)
- John H Page
- Center for Observational Research, Amgen Inc., Thousand Oaks, CA, USA
- Franklin G Moser
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Marcel M Maya
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Ravi Prasad
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Barry D Pressman
- Department of Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
46
Lin ZW, Dai WL, Lai QQ, Wu H. Deep learning-based computed tomography applied to the diagnosis of rib fractures. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2023. [DOI: 10.1016/j.jrras.2023.100558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/16/2023]
47
Liu B, Jin Y, Feng S, Yu H, Zhang Y, Li Y. Benign vs malignant vertebral compression fractures with MRI: a comparison between automatic deep learning network and radiologist's assessment. Eur Radiol 2023:10.1007/s00330-023-09713-x. [PMID: 37162531 DOI: 10.1007/s00330-023-09713-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 03/24/2023] [Accepted: 04/19/2023] [Indexed: 05/11/2023]
Abstract
OBJECTIVE To test the diagnostic performance of a deep-learning Two-Stream Compare and Contrast Network (TSCCN) model for differentiating benign and malignant vertebral compression fractures (VCFs) based on MRI. METHODS We tested a deep-learning system in 123 benign and 86 malignant VCFs. The median sagittal T1-weighted images (T1WI), T2-weighted images with fat suppression (T2WI-FS), and a combination of both (thereafter, T1WI/T2WI-FS) were used to validate TSCCN. The receiver operator characteristic (ROC) curve was analyzed to evaluate the performance of TSCCN. The accuracy, sensitivity, and specificity of TSCCN in differentiating benign and malignant VCFs were calculated and compared with radiologists' assessments. Intraclass correlation coefficients (ICCs) were tested to find intra- and inter-observer agreement of radiologists in differentiating malignant from benign VCFs. RESULTS The AUC of the ROC plots of TSCCN according to T1WI, T2WI-FS, and T1WI/T2WI-FS images were 99.2%, 91.7%, and 98.2%, respectively. The accuracy of T1WI, T2WI-FS, and T1WI/T2WI-FS based on TSCCN was 95.2%, 90.4%, and 96.2%, respectively, greater than that achieved by radiologists. Further, the specificity of T1WI, T2WI-FS, and T1WI/T2WI-FS based on TSCCN was higher, at 98.4%, 94.3%, and 99.2%, respectively, than that achieved by radiologists. The intra- and inter-observer agreements of radiologists were 0.79-0.85 and 0.79-0.80 for T1WI, 0.65-0.72 and 0.70-0.74 for T2WI-FS, and 0.83-0.88 and 0.83-0.84 for T1WI/T2WI-FS. CONCLUSION The TSCCN model showed better diagnostic performance than radiologists for automatically identifying benign or malignant VCFs, and is a potentially helpful tool for future clinical application. CLINICAL RELEVANCE STATEMENT TSCCN-assisted MRI has shown superior performance in distinguishing benign and malignant vertebral compression fractures compared to radiologists. This technology has the potential to enhance diagnostic accuracy, sensitivity, and specificity. Further integration into clinical practice is required to optimize patient management. KEY POINTS • The Two-Stream Compare and Contrast Network (TSCCN) model showed better diagnostic performance than radiologists for identifying benign vs malignant vertebral compression fractures. • The processing of TSCCN is fast and stable, better than the subjective evaluation by radiologists in diagnosing vertebral compression fractures. • The TSCCN model provides options for developing a fully automated, streamlined artificial intelligence diagnostic tool.
Affiliation(s)
- Beibei Liu
- Institute of Diagnostic and Interventional Radiology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, #600, Yishan Rd, Shanghai, 200233, China
- Yuchen Jin
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Shixiang Feng
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China
- Haoyan Yu
- Institute of Diagnostic and Interventional Radiology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, #600, Yishan Rd, Shanghai, 200233, China
- Ya Zhang
- Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yuehua Li
- Institute of Diagnostic and Interventional Radiology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, #600, Yishan Rd, Shanghai, 200233, China
48
Dimai HP. New Horizons: Artificial Intelligence Tools for Managing Osteoporosis. J Clin Endocrinol Metab 2023; 108:775-783. [PMID: 36477337 PMCID: PMC9999362 DOI: 10.1210/clinem/dgac702] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 11/29/2022] [Accepted: 11/30/2022] [Indexed: 12/13/2022]
Abstract
Osteoporosis is a disease characterized by low bone mass and microarchitectural deterioration leading to increased bone fragility and fracture risk. Typically, osteoporotic fractures occur at the spine, hip, distal forearm, and proximal humerus, but other skeletal sites may be affected as well. One of the major challenges in the management of osteoporosis lies in the fact that although the operational diagnosis is based on bone mineral density (BMD) as measured by dual x-ray absorptiometry, the majority of fractures occur at nonosteoporotic BMD values. Furthermore, osteoporosis often remains undiagnosed even when fractures result from low-severity trauma. Also, there is only weak consensus among the major guidelines worldwide on when to treat, whom to treat, and which drug to use. Against this background, increasing efforts have been undertaken in the past few years by artificial intelligence (AI) developers to support and improve the management of this disease. The performance of many of these newly developed AI algorithms has been shown to be at least comparable to that of physician experts, or even superior. However, even if study results appear promising at a first glance, they should always be interpreted with caution. Use of inadequate reference standards or selection of variables that are of little or no value in clinical practice are limitations not infrequently found. Consequently, there is a clear need for high-quality clinical research in this field of AI. This could, eg, be achieved by establishing an internationally consented "best practice framework" that considers all relevant stakeholders.
Affiliation(s)
- Hans Peter Dimai
- Correspondence: Hans Peter Dimai, MD, Division of Endocrinology and Diabetology, Department of Internal Medicine, Medical University of Graz, Auenbruggerplatz 15, A-8036 Graz, Austria.
|
49
|
Zhang S, Zhao Z, Qiu L, Liang D, Wang K, Xu J, Zhao J, Sun J. Automatic vertebral fracture and three-column injury diagnosis with fracture visualization by a multi-scale attention-guided network. Med Biol Eng Comput 2023:10.1007/s11517-023-02805-2. [PMID: 36848011 DOI: 10.1007/s11517-023-02805-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 02/08/2023] [Indexed: 03/01/2023]
Abstract
Deep learning methods have the potential to improve the efficiency of diagnosing vertebral fractures on computed tomography (CT) images. Most existing intelligent vertebral fracture diagnosis methods provide only dichotomized results at the patient level. However, a more fine-grained and nuanced outcome is clinically needed. This study proposed a novel network, a multi-scale attention-guided network (MAGNet), to diagnose vertebral fractures and three-column injuries with fracture visualization at the vertebra level. By imposing attention constraints through a disease attention map (DAM), a fusion of multi-scale spatial attention maps, MAGNet can extract highly task-relevant features and localize fractures. A total of 989 vertebrae were studied. After four-fold cross-validation, the area under the ROC curve (AUC) of our model for dichotomized vertebral fracture diagnosis and three-column injury diagnosis was 0.884 ± 0.015 and 0.920 ± 0.104, respectively. Our model outperformed classical classification models, attention models, visual explanation methods, and attention-guided methods based on class activation mapping. Our work can promote the clinical application of deep learning to the diagnosis of vertebral fractures and provides a way to visualize and improve diagnosis results with attention constraints.
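The abstract describes the DAM as a fusion of multi-scale spatial attention maps but does not specify the fusion operation. A minimal sketch of one plausible scheme (nearest-neighbour upsampling to the finest scale, averaging, then normalisation; `fuse_attention_maps` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def fuse_attention_maps(maps):
    """Fuse square 2D spatial attention maps from different scales into a
    single DAM-style map: upsample each map to the finest resolution
    (nearest neighbour), average, and normalise to [0, 1].
    Assumes every map's side length divides the largest side length."""
    target = max(m.shape[0] for m in maps)
    fused = np.zeros((target, target))
    for m in maps:
        factor = target // m.shape[0]
        # Nearest-neighbour upsampling: repeat each cell factor x factor times
        fused += np.kron(m, np.ones((factor, factor)))
    fused /= len(maps)
    # Normalise so the fused map can act as a soft attention mask
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()
    return fused
```

A fused map like this can then be used both as a training-time attention constraint and as a fracture-localization heatmap, which is the visualization role the DAM plays in the paper.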
Affiliation(s)
- Shunan Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Lu Qiu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Duan Liang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Kun Wang
- Renji Hospital, Shanghai, 200127, China
- Jun Xu
- Shanghai Sixth People's Hospital, Shanghai, 200233, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
|
50
|
Chamberlin JH, Smith C, Schoepf UJ, Nance S, Elojeimy S, O'Doherty J, Baruah D, Burt JR, Varga-Szemes A, Kabakus IM. A deep convolutional neural network ensemble for composite identification of pulmonary nodules and incidental findings on routine PET/CT. Clin Radiol 2023; 78:e368-e376. [PMID: 36863883 DOI: 10.1016/j.crad.2023.01.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 10/19/2022] [Accepted: 01/30/2023] [Indexed: 02/18/2023]
Abstract
AIM To evaluate primary and secondary pathologies of interest using an artificial intelligence (AI) platform, AI-Rad Companion, on low-dose computed tomography (CT) series from integrated positron-emission tomography (PET)/CT to detect CT findings that might otherwise be overlooked. MATERIALS AND METHODS One hundred and eighty-nine sequential patients who had undergone PET/CT were included. Images were evaluated using an ensemble of convolutional neural networks (AI-Rad Companion, Siemens Healthineers, Erlangen, Germany). The primary outcome was detection of pulmonary nodules, for which accuracy, identity, and intra-rater reliability were calculated. For secondary outcomes (binary detection of coronary artery calcium, aortic ectasia, and vertebral height loss), accuracy and diagnostic performance were calculated. RESULTS The overall per-nodule accuracy for detection of lung nodules was 0.847. The overall sensitivity and specificity for detection of lung nodules were 0.915 and 0.781, respectively. The overall per-patient accuracy of AI detection of coronary artery calcium, aortic ectasia, and vertebral height loss was 0.979, 0.966, and 0.840, respectively. The sensitivity and specificity for coronary artery calcium were 0.989 and 0.969; for aortic ectasia, 0.806 and 1.000. CONCLUSION The neural network ensemble accurately assessed the number of pulmonary nodules and the presence of coronary artery calcium and aortic ectasia on the low-dose CT series of PET/CT. The network was highly specific, but not sensitive, for the diagnosis of vertebral height loss. The AI ensemble can help radiologists and nuclear medicine physicians identify CT findings that might otherwise be overlooked.
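The sensitivity, specificity, and accuracy figures reported in these abstracts follow from the standard confusion-matrix definitions. A minimal sketch (the counts in the example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary detection metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)              # recall on positive cases
    specificity = tn / (tn + fp)              # recall on negative cases
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 100 positive and 100 negative cases
sens, spec, acc = diagnostic_metrics(tp=90, fp=20, tn=80, fn=10)
print(sens, spec, acc)  # 0.9 0.8 0.85
```

Note the trade-off visible in the vertebral height loss result: a detector can be highly specific (few false positives) while remaining insensitive (many missed positives), which is why both metrics are reported alongside accuracy.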
Affiliation(s)
- J H Chamberlin
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- C Smith
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- U J Schoepf
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- S Nance
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- S Elojeimy
- Division of Nuclear Medicine, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- J O'Doherty
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA; Siemens Medical Solutions, Malvern, PA, USA
- D Baruah
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- J R Burt
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- A Varga-Szemes
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- I M Kabakus
- Division of Thoracic Imaging, Division of Cardiovascular Imaging, and Division of Nuclear Medicine, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
|