1. Keller M, Rohner M, Honigmann P. The potential benefit of artificial intelligence regarding clinical decision-making in the treatment of wrist trauma patients. J Orthop Surg Res 2024; 19:579. PMID: 39294720; PMCID: PMC11411868; DOI: 10.1186/s13018-024-05063-6.
Abstract
PURPOSE The implementation of artificial intelligence (AI) in health care is gaining popularity, and many publications describe powerful AI-enabled algorithms. Yet there is only scarce evidence of measurable value in terms of patient outcomes, clinical decision-making, or socio-economic impact. Our aim was to investigate the significance of AI in the emergency treatment of wrist trauma patients. METHODS Two groups of physicians were confronted with twenty realistic cases of wrist trauma and had to find the correct diagnosis and provide a treatment recommendation. One group was assisted by an AI-enabled application that detects and localizes distal radius fractures (DRF) with near-perfect precision, while the other group had no help. The primary outcome measure was diagnostic accuracy. Secondary outcome measures were required time, number of added CT scans and senior consultations, correctness of the treatment, and subjective and objective stress levels. RESULTS The AI-supported group made a diagnosis without support (no additional CT, no senior consultation) in significantly more cases than the control group (75% vs. 52%, p = 0.003). The AI-supported group also detected DRF with higher sensitivity (1.00 vs. 0.96, p = 0.06) and specificity (0.99 vs. 0.93, p = 0.17), although these differences did not reach statistical significance, used significantly fewer additional CT scans to reach the correct diagnosis (14% vs. 28%, p = 0.02), and reported significantly lower subjective stress (p = 0.05). CONCLUSION The results indicate that physicians can diagnose wrist trauma more accurately and faster when aided by an AI tool that lessens the need for extra diagnostic procedures. The AI tool also appears to lower physicians' stress levels while examining cases. We anticipate that these benefits will be amplified in larger studies as skepticism towards the new technology diminishes.
Affiliations
- Marco Keller
  - Hand and Peripheral Nerve Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland (Bruderholz, Liestal, Laufen), Bruderholz, Switzerland
  - Medical Additive Manufacturing Research Group (MAM), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
  - Hand and Peripheral Nerve Surgery, Department of Orthopaedic Surgery, Traumatology and Hand Surgery, Spital Limmattal, Schlieren, Switzerland
- Meret Rohner
  - Hand and Peripheral Nerve Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland (Bruderholz, Liestal, Laufen), Bruderholz, Switzerland
  - Medical Additive Manufacturing Research Group (MAM), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
  - Medical Faculty, University of Basel, Basel, Switzerland
- Philipp Honigmann
  - Hand and Peripheral Nerve Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland (Bruderholz, Liestal, Laufen), Bruderholz, Switzerland
  - Medical Additive Manufacturing Research Group (MAM), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
  - Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, Amsterdam, The Netherlands
2. Lee SH, Jeon J, Lee GJ, Park JY, Kim YJ, Kim KG. Automated Association for Osteosynthesis Foundation and Orthopedic Trauma Association classification of pelvic fractures on pelvic radiographs using deep learning. Sci Rep 2024; 14:20548. PMID: 39232189; PMCID: PMC11374898; DOI: 10.1038/s41598-024-71654-2.
Abstract
High-energy impacts, such as vehicle crashes or falls, can lead to pelvic ring injuries. Rapid diagnosis and treatment are crucial due to the risks of severe bleeding and organ damage. Pelvic radiography promptly assesses fracture extent and location but struggles to diagnose bleeding. The AO/OTA classification system grades pelvic instability, but its complexity limits its use in emergency settings. This study develops and evaluates a deep learning algorithm to classify pelvic fractures on radiographs per the AO/OTA system. Pelvic radiographs of 773 patients with pelvic fractures and 167 patients without pelvic fractures were retrospectively analyzed at a single center. Pelvic fractures were classified into types A, B, and C using medical records categorized by an orthopedic surgeon according to the AO/OTA classification system. Accuracy, Dice Similarity Coefficient (DSC), and F1 score were measured to evaluate the diagnostic performance of the deep learning algorithms. The segmentation model showed high performance, with 0.98 accuracy and 0.96-0.97 DSC. The AO/OTA classification model demonstrated effective performance, with a 0.47-0.80 F1 score and 0.69-0.88 accuracy; it also had a macro average of 0.77-0.94. Performance evaluation of the models showed relatively favorable results, which can aid in early classification of pelvic fractures.
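The Dice Similarity Coefficient (DSC) and F1 score reported above are closely related overlap measures. As an illustrative sketch (not the study's code, and with toy data rather than real masks), they can be computed like so:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score from true-positive, false-positive, and false-negative counts."""
    return 2 * tp / (2 * tp + fp + fn)

# Toy 1-D "masks": 3 overlapping positive pixels, 4 predicted, 4 true
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(round(dice_coefficient(pred, truth), 3))  # 0.75
print(f1_score(tp=3, fp=1, fn=1))               # 0.75
```

On binary masks the two metrics coincide, which is why segmentation papers often report them interchangeably.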
Affiliations
- Seung Hwan Lee
  - Department of Trauma Surgery, Gachon University Gil Medical Center, Incheon, Republic of Korea
  - Department of Traumatology, Gachon University College of Medicine, 38-13, Dokjeom-ro 3beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea
- Jisu Jeon
  - Department of Health Science and Technology, Gachon Advanced Institute for Health Science and Technology (GAIHST), Lee Gil Ya Cancer and Diabetes Institute, Gachon University, Incheon, Republic of Korea
- Gil Jae Lee
  - Department of Trauma Surgery, Gachon University Gil Medical Center, Incheon, Republic of Korea
  - Department of Traumatology, Gachon University College of Medicine, 38-13, Dokjeom-ro 3beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea
- Jun Young Park
  - Department of Health Science and Technology, Gachon Advanced Institute for Health Science and Technology (GAIHST), Lee Gil Ya Cancer and Diabetes Institute, Gachon University, Incheon, Republic of Korea
- Young Jae Kim
  - Department of Health Science and Technology, Gachon Advanced Institute for Health Science and Technology (GAIHST), Lee Gil Ya Cancer and Diabetes Institute, Gachon University, Incheon, Republic of Korea
  - Medical Devices R&D Center, Gachon University Gil Medical Center, Incheon, Republic of Korea
  - Department of Biomedical Engineering, Pre-medical Course, Gil Medical Center, College of Medicine, Gachon University, 38-13, Dokjeom-ro 3beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea
- Kwang Gi Kim
  - Department of Health Science and Technology, Gachon Advanced Institute for Health Science and Technology (GAIHST), Lee Gil Ya Cancer and Diabetes Institute, Gachon University, Incheon, Republic of Korea
  - Medical Devices R&D Center, Gachon University Gil Medical Center, Incheon, Republic of Korea
  - Department of Biomedical Engineering, Pre-medical Course, Gil Medical Center, College of Medicine, Gachon University, 38-13, Dokjeom-ro 3beon-gil, Namdong-gu, Incheon, 21565, Republic of Korea
3. Wong CR, Zhu A, Baltzer HL. The Accuracy of Artificial Intelligence Models in Hand/Wrist Fracture and Dislocation Diagnosis: A Systematic Review and Meta-Analysis. JBJS Rev 2024; 12:01874474-202409000-00006. PMID: 39236148; DOI: 10.2106/jbjs.rvw.24.00106.
Abstract
BACKGROUND Early and accurate diagnosis is critical to preserve function and reduce healthcare costs in patients with hand and wrist injury. As such, artificial intelligence (AI) models have been developed for diagnosing fractures through imaging. The purpose of this systematic review and meta-analysis was to determine the accuracy of AI models in identifying hand and wrist fractures and dislocations. METHODS Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Diagnostic Test Accuracy guidelines, Ovid MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials were searched from their inception to October 10, 2023. Studies were included if they utilized an AI model (index test) for detecting hand and wrist fractures and dislocations in pediatric (<18 years) or adult (>18 years) patients through any radiologic imaging, with the reference standard established through image review by a medical expert. Results were synthesized through bivariate analysis. Risk of bias was assessed using the QUADAS-2 tool. This study was registered with PROSPERO (CRD42023486475). Certainty of evidence was assessed using Grading of Recommendations Assessment, Development, and Evaluation. RESULTS The systematic review identified 36 studies. Most studies assessed wrist fractures (27.90%) through radiograph imaging (94.44%), with radiologists serving as the reference standard (66.67%). In diagnosing hand and wrist fractures and dislocations, AI models demonstrated an area under the curve of 0.946, a positive likelihood ratio of 7.690 (95% confidence interval, 6.400-9.190), and a negative likelihood ratio of 0.112 (0.0848-0.145). A sensitivity analysis restricted to studies at low risk of bias did not reveal any difference from the overall results. Overall certainty of evidence was moderate.
CONCLUSION The accuracy demonstrated here indicates that the potential use of AI in diagnosing hand and wrist fractures and dislocations is promising. LEVEL OF EVIDENCE Level III. See Instructions for Authors for a complete description of levels of evidence.
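The pooled likelihood ratios quoted above follow directly from sensitivity and specificity (LR+ = sensitivity / (1 - specificity); LR- = (1 - sensitivity) / specificity) and can be used to update a pre-test probability via odds. A minimal sketch with hypothetical accuracy figures, not the review's data:

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios from test accuracy."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_prob: float, lr: float) -> float:
    """Update a pre-test probability with a likelihood ratio via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical accuracy figures (not taken from the review):
lr_pos, lr_neg = likelihood_ratios(0.90, 0.88)
print(round(lr_pos, 2), round(lr_neg, 3))  # 7.5 0.114
# A positive AI call raises a 30% pre-test fracture probability to:
print(round(post_test_probability(0.30, lr_pos), 2))  # 0.76
```

An LR+ near 7.7, as pooled in the review, similarly shifts a moderate pre-test suspicion into the high-probability range.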
Affiliations
- Chloe R Wong
  - Division of Plastic, Reconstructive & Aesthetic Surgery, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Alice Zhu
  - Division of General Surgery, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Heather L Baltzer
  - Division of Plastic, Reconstructive & Aesthetic Surgery, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
4. Suojärvi N, Waris E. Radiographic measurements in distal radius fracture evaluation: a review of current techniques and a recommendation for standardization. Acta Radiol 2024; 65:1065-1079. PMID: 39043232; DOI: 10.1177/02841851241266369.
Abstract
Radiographic measurements play a crucial role in evaluating the alignment of distal radius fractures (DRFs). Various manual methods have been used to perform the measurements, but they are susceptible to inaccuracies. Recently, computer-aided methods have become available. This review explores the methods commonly used to assess DRFs: it introduces the different measurement techniques, discusses the sources of measurement error and measurement reliability, and provides a recommendation for their use. Radiographic measurements used in the evaluation of DRFs are not reliable; standardizing the measurement techniques is crucial to address this, and automated image analysis could help improve accuracy and reliability.
Affiliations
- Nora Suojärvi
  - Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Eero Waris
  - Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
5. Xu D, Li B, Liu W, Wei D, Long X, Huang T, Lin H, Cao K, Zhong S, Shao J, Huang B, Diao XF, Gao Z. Deep learning-based detection of primary bone tumors around the knee joint on radiographs: a multicenter study. Quant Imaging Med Surg 2024; 14:5420-5433. PMID: 39144039; PMCID: PMC11320541; DOI: 10.21037/qims-23-1743.
Abstract
Background Most primary bone tumors are found in the bone around the knee joint. However, detecting primary bone tumors on radiographs can be challenging for inexperienced or junior radiologists. This study aimed to develop a deep learning (DL) model for the detection of primary bone tumors around the knee joint on radiographs. Methods From four tertiary referral centers, we recruited 687 patients diagnosed with bone tumors (including osteosarcoma, chondrosarcoma, giant cell tumor of bone, bone cyst, enchondroma, fibrous dysplasia, etc.; 417 males, 270 females; mean age 22.8±13.2 years) by postoperative pathology or clinical imaging/follow-up, and 1,988 participants with normal bone radiographs (1,152 males, 836 females; mean age 27.9±12.2 years). The dataset was split into a training set for model development and internal and external independent test sets for model validation. The trained model located bone tumor lesions and then detected tumor patients. Receiver operating characteristic curves and Cohen's kappa coefficient were used to evaluate detection performance. We compared the model's detection performance with that of two junior radiologists in the internal test set using permutation tests. Results The DL model correctly localized 94.5% and 92.9% of bone tumors on radiographs in the internal and external test sets, respectively. For the detection of bone tumor patients, the model achieved an accuracy of 0.964/0.920 and an area under the receiver operating characteristic curve (AUC) of 0.981/0.990 for the internal and external test sets, respectively. Cohen's kappa coefficient of the model in the internal test set was significantly higher than that of the two junior radiologists with 4 and 3 years of experience in musculoskeletal radiology (Model vs. Reader A, 0.927 vs. 0.777, P<0.001; Model vs. Reader B, 0.927 vs. 0.841, P=0.033). Conclusions The DL model achieved good performance in detecting primary bone tumors around the knee joint and outperformed junior radiologists, indicating its potential for the detection of bone tumors on radiographs.
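Cohen's kappa, used above to compare the model with junior radiologists, corrects observed agreement for the agreement expected by chance. A self-contained sketch with illustrative counts only (not the study's data):

```python
def cohens_kappa(confusion: list[list[int]]) -> float:
    """Cohen's kappa for agreement between a rater and the reference.

    confusion[i][j] counts cases the rater put in class i while the
    reference put them in class j.
    """
    total = sum(sum(row) for row in confusion)
    n = len(confusion)
    observed = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of each class's marginal proportions
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (observed - expected) / (1 - expected)

# Toy 2x2 table (tumor / normal), illustrative counts only:
table = [[45, 5],
         [5, 45]]
print(round(cohens_kappa(table), 2))  # 0.8
```

Values around 0.9, as the model achieved, indicate almost perfect agreement on the usual Landis-Koch scale.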
Affiliations
- Danyang Xu
  - Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bing Li
  - Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Weixiang Liu
  - National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Medical School, Shenzhen University, Shenzhen, China
- Dan Wei
  - Department of Radiology, Huiya Hospital of The First Affiliated Hospital, Sun Yat-sen University, Huizhou, China
- Xiaowu Long
  - Department of Radiology, Yunfu People’s Hospital, Yunfu, China
- Tanyu Huang
  - Department of Radiology, The Second People’s Hospital of Huizhou, Huizhou, China
- Hongxin Lin
  - Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Kangyang Cao
  - Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Shaonan Zhong
  - Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Jingjing Shao
  - Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Bingsheng Huang
  - Medical AI Lab, School of Biomedical Engineering, Health Science Centre, Shenzhen University, Shenzhen, China
- Xian-Fen Diao
  - National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Medical School, Shenzhen University, Shenzhen, China
- Zhenhua Gao
  - Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
  - Department of Radiology, Huiya Hospital of The First Affiliated Hospital, Sun Yat-sen University, Huizhou, China
6. Oude Nijhuis KD, Dankelman LHM, Wiersma JP, Barvelink B, IJpma FFA, Verhofstad MHJ, Doornberg JN, Colaris JW, Wijffels MME. AI for detection, classification and prediction of loss of alignment of distal radius fractures; a systematic review. Eur J Trauma Emerg Surg 2024. PMID: 38981869; DOI: 10.1007/s00068-024-02557-0.
Abstract
PURPOSE Early and accurate assessment of distal radius fractures (DRFs) is crucial for optimal prognosis. Identifying fractures likely to lose threshold alignment (instability) in a cast is vital for treatment decisions, yet the accuracy and reliability of prediction tools remain challenging. Artificial intelligence (AI), particularly Convolutional Neural Networks (CNNs), can evaluate radiographic images with high performance. This systematic review aims to summarize studies utilizing CNNs to detect, classify, or predict loss of threshold alignment of DRFs. METHODS A literature search was performed according to the PRISMA guidelines. Studies were eligible when the use of AI for the detection, classification, or prediction of loss of threshold alignment was analyzed. Quality assessment was done with a modified version of the methodologic index for non-randomized studies (MINORS). RESULTS Of the 576 identified studies, 15 were included. For fracture detection, studies reported sensitivity and specificity ranging from 80 to 99% and 73 to 100%, respectively; the AUC ranged from 0.87 to 0.99; the accuracy varied from 82 to 99%. The accuracy of fracture classification ranged from 60 to 81% and the AUC from 0.59 to 0.84. No studies focused on predicting loss of threshold alignment of DRFs. CONCLUSION AI models for DRF detection show promising performance, indicating the potential of algorithms to assist clinicians in the assessment of radiographs. In addition, AI models showed performance similar to that of clinicians. Despite the clinical relevance of such algorithms, no algorithms for predicting the loss of threshold alignment were identified in our literature search.
Affiliations
- Koen D Oude Nijhuis
  - Department of Orthopedic Surgery, University Medical Centre Groningen, Groningen, The Netherlands
  - Department of Surgery, University Medical Centre Groningen, Groningen, The Netherlands
- Lente H M Dankelman
  - Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, Rotterdam, 3000 CA, The Netherlands
  - Department of Orthopedic Surgery, Hand and Arm Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jort P Wiersma
  - Department of Orthopedic Surgery, University Medical Centre Groningen, Groningen, The Netherlands
  - University Medical Center, Utrecht, The Netherlands
- Britt Barvelink
  - Department of Orthopedics and Sports Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Frank F A IJpma
  - Department of Surgery, University Medical Centre Groningen, Groningen, The Netherlands
- Michael H J Verhofstad
  - Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, Rotterdam, 3000 CA, The Netherlands
- Job N Doornberg
  - Department of Orthopedic Surgery, University Medical Centre Groningen, Groningen, The Netherlands
  - Department of Surgery, University Medical Centre Groningen, Groningen, The Netherlands
  - Department of Orthopaedic and Trauma Surgery, Flinders University and Flinders Medical Centre, Adelaide, Australia
- Joost W Colaris
  - Department of Orthopedics and Sports Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Mathieu M E Wijffels
  - Trauma Research Unit, Department of Surgery, Erasmus MC, University Medical Center Rotterdam, P.O. Box 2040, Rotterdam, 3000 CA, The Netherlands
7. Hansen V, Jensen J, Kusk MW, Gerke O, Tromborg HB, Lysdahlgaard S. Deep learning performance compared to healthcare experts in detecting wrist fractures from radiographs: A systematic review and meta-analysis. Eur J Radiol 2024; 174:111399. PMID: 38428318; DOI: 10.1016/j.ejrad.2024.111399.
Abstract
OBJECTIVE To perform a systematic review and meta-analysis of the diagnostic accuracy of deep learning (DL) algorithms in the diagnosis of wrist fractures (WF) on plain wrist radiographs, taking healthcare experts' consensus as the reference standard. METHODS Embase, Medline, PubMed, Scopus and Web of Science were searched from 1 January 2012 to 9 March 2023. Eligible studies included patients with wrist radiographs for radial and ulnar fractures as the target condition, used DL algorithms based on convolutional neural networks (CNN), and took healthcare experts' consensus as the minimum reference standard. Studies were assessed with a modified QUADAS-2 tool, and we applied a bivariate random-effects model for meta-analysis of diagnostic test accuracy data. RESULTS Our study was registered at PROSPERO with ID CRD42023431398. We included 6 unique studies in the meta-analysis, with a total of 33,026 radiographs. Compared to the reference standards of the included articles, CNN performance showed a summary sensitivity of 92% (95% CI: 80%-97%) and a summary specificity of 93% (95% CI: 76%-98%). The generalized bivariate I-squared statistic indicated considerable heterogeneity between the studies (81.90%). Four studies had one or more domains at high risk of bias and two studies raised concerns regarding applicability. CONCLUSION The diagnostic accuracy of CNNs was comparable to that of healthcare experts in the investigation of WF on wrist radiographs. There is a need for studies with a robust reference standard, external dataset validation, and investigation of the diagnostic performance of healthcare experts aided by CNNs. CLINICAL RELEVANCE STATEMENT DL matches healthcare experts in diagnosing WF, which can potentially benefit patient diagnosis.
Affiliations
- V Hansen
  - Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
- J Jensen
  - Department of Radiology, Odense University Hospital, Odense, Denmark
  - Research and Innovation Unit of Radiology, University of Southern Denmark, Odense, Denmark
- M W Kusk
  - Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
  - Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
  - Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
  - Radiography and Diagnostic Imaging, School of Medicine, University College Dublin, Belfield 4, Dublin, Ireland
- O Gerke
  - Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- H B Tromborg
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
  - Department of Orthopedic Surgery, Odense University Hospital, Odense, Denmark
- S Lysdahlgaard
  - Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
  - Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
  - Imaging Research Initiative Southwest (IRIS), Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark
8. Mert S, Stoerzer P, Brauer J, Fuchs B, Haas-Lützenberger EM, Demmer W, Giunta RE, Nuernberger T. Diagnostic power of ChatGPT 4 in distal radius fracture detection through wrist radiographs. Arch Orthop Trauma Surg 2024; 144:2461-2467. PMID: 38578309; PMCID: PMC11093861; DOI: 10.1007/s00402-024-05298-2.
Abstract
Distal radius fractures rank among the most prevalent fractures in humans, necessitating accurate radiological imaging and interpretation for optimal diagnosis and treatment. In addition to human radiologists, artificial intelligence systems are increasingly employed for radiological assessments. Since 2023, ChatGPT 4 has offered image analysis capabilities, which can also be applied to wrist radiographs. This study evaluates the diagnostic power of ChatGPT 4 in identifying distal radius fractures, comparing it with a board-certified radiologist, a hand surgery resident, a medical student, and the well-established AI Gleamer BoneView™. Results demonstrate ChatGPT 4's good diagnostic accuracy (sensitivity 0.88, specificity 0.98, AUC 0.93), significantly surpassing the medical student (sensitivity 0.98, specificity 0.72, AUC 0.85; p = 0.04). Nevertheless, the diagnostic power of ChatGPT 4 lags behind the hand surgery resident (sensitivity 0.99, specificity 0.98, AUC 0.985; p = 0.014) and Gleamer BoneView™ (sensitivity 1.00, specificity 0.98, AUC 0.99; p = 0.006). This study highlights the utility and potential applications of artificial intelligence in modern medicine, emphasizing ChatGPT 4 as a valuable tool for enhancing diagnostic capabilities in the field of medical imaging.
Affiliations
- Sinan Mert
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Patrick Stoerzer
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Johannes Brauer
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Benedikt Fuchs
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Wolfram Demmer
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Riccardo E Giunta
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
- Tim Nuernberger
  - Division of Hand, Plastic and Aesthetic Surgery, LMU University Hospital, LMU Munich, 80336, München, Germany
9. Adleberg J, Benitez CL, Primiano N, Patel A, Mogel D, Kalra R, Adhia A, Berns M, Chin C, Tanghe S, Yi P, Zech J, Kohli A, Martin-Carreras T, Corcuera-Solano I, Huang M, Ngeow J. Fully Automated Measurement of the Insall-Salvati Ratio with Artificial Intelligence. J Imaging Inform Med 2024; 37:601-610. PMID: 38343226; PMCID: PMC11031523; DOI: 10.1007/s10278-023-00955-1.
Abstract
Patella alta (PA) and patella baja (PB) affect 1-2% of the world population but are often underreported, leading to potential complications like osteoarthritis. The Insall-Salvati ratio (ISR) is commonly used to diagnose patellar height abnormalities, and artificial intelligence (AI) keypoint models show promising accuracy in measuring and detecting these abnormalities. An AI keypoint model was developed and validated to study the Insall-Salvati ratio on a random population sample of lateral knee radiographs. After IRB approval, the model was trained and internally validated with 689 lateral knee radiographs from five sites in a multi-hospital urban healthcare system; 116 lateral knee radiographs from a sixth site were used for external validation. Distance error (mm), Pearson correlation, and Bland-Altman plots were used to evaluate model performance. On a random sample of 2647 different lateral knee radiographs, the mean and standard deviation were used to estimate the normal distribution of the ISR. The keypoint detection model had a mean distance error of 2.57 ± 2.44 mm on internal validation data and 2.73 ± 2.86 mm on external validation data. Pearson correlation between labeled and predicted Insall-Salvati ratios was 0.82 [95% CI 0.76-0.86] on internal validation and 0.75 [0.66-0.82] on external validation. The population sample of 2647 patients had a mean ISR of 1.11 ± 0.21. Patellar height abnormalities were underreported in radiology reports from the population sample. AI keypoint models can consistently measure the ISR on knee radiographs, and future models may enable radiologists to study musculoskeletal measurements in larger population samples and enhance our understanding of normal and abnormal ranges.
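The Insall-Salvati ratio itself is simply patellar tendon length divided by patella length, and the population mean of 1.11 ± 0.21 reported above lets one place an individual measurement within that distribution. A brief sketch in which the measurement values are hypothetical while the distribution parameters come from the study:

```python
from statistics import NormalDist

# ISR distribution from the study's population sample (mean 1.11, SD 0.21)
population = NormalDist(mu=1.11, sigma=0.21)

def insall_salvati_ratio(tendon_len_mm: float, patella_len_mm: float) -> float:
    """ISR = patellar tendon length / greatest diagonal length of the patella."""
    return tendon_len_mm / patella_len_mm

ratio = insall_salvati_ratio(52.0, 40.0)  # hypothetical keypoint-derived lengths
z = population.zscore(ratio)              # position within the sampled population
print(round(ratio, 2), round(z, 2))       # 1.3 0.9
```

Clinically, cutoffs around 0.8 (patella baja) and 1.2 (patella alta) are often cited, though conventions vary between authors.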
Affiliation(s)
- J Adleberg, C L Benitez, N Primiano, A Patel, D Mogel, R Kalra, A Adhia, M Berns, C Chin, S Tanghe, I Corcuera-Solano, M Huang, J Ngeow - Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- P Yi - University of Maryland, Baltimore, MD, USA
- J Zech - Columbia University Medical Center, New York, NY, USA
- A Kohli - UT Southwestern, Dallas, TX, USA
10
Bhatnagar A, Kekatpure AL, Velagala VR, Kekatpure A. A Review on the Use of Artificial Intelligence in Fracture Detection. Cureus 2024; 16:e58364. [PMID: 38756254 PMCID: PMC11097122 DOI: 10.7759/cureus.58364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 04/16/2024] [Indexed: 05/18/2024] Open
Abstract
Artificial intelligence (AI) simulates intelligent behavior using computers with minimal human intervention. Recent advances in AI, especially deep learning, have made significant progress in perceptual tasks, enabling computers to represent and comprehend complex input more accurately. Fractures affect people of all ages in every region of the world. Overlooked fractures on radiographs taken in the emergency room, with reported miss rates of 2% to 9%, are one of the most prevalent causes of inaccurate diagnosis and medical lawsuits. The workforce will soon be under a great deal of strain due to the growing demand for fracture detection on multiple imaging modalities. A dearth of radiologists, driven by delayed hiring and a significant percentage of radiologists nearing retirement, worsens this rise in demand. Additionally, the process of interpreting diagnostic images can sometimes be challenging and tedious. Integrating orthopedic radio-diagnosis with AI presents a promising solution to these problems. There has recently been a noticeable rise in the application of deep learning techniques, namely convolutional neural networks (CNNs), in medical imaging. In the field of orthopedic trauma, CNNs are being documented to operate at the proficiency of expert orthopedic surgeons and radiologists in the identification and categorization of fractures, and they can analyze vast amounts of data at a rate that surpasses human observation. In this review, we discuss the use of deep learning methods in fracture detection and classification, the integration of AI with various imaging modalities, and the benefits and disadvantages of integrating AI with radio-diagnostics.
Affiliation(s)
- Aayushi Bhatnagar - Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aditya L Kekatpure - Orthopedic Surgery, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Vivek R Velagala - Medicine, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, IND
- Aashay Kekatpure - Orthopedic Surgery, Narendra Kumar Prasadrao Salve Institute of Medical Sciences and Research, Nagpur, IND
11
Fink A, Tran H, Reisert M, Rau A, Bayer J, Kotter E, Bamberg F, Russe MF. A deep learning approach for projection and body-side classification in musculoskeletal radiographs. Eur Radiol Exp 2024; 8:23. [PMID: 38353812 PMCID: PMC10866807 DOI: 10.1186/s41747-023-00417-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Accepted: 11/29/2023] [Indexed: 02/16/2024] Open
Abstract
BACKGROUND The growing prevalence of musculoskeletal diseases increases radiologic workload, highlighting the need for optimized workflow management and automated metadata classification systems. We developed a large-scale, well-characterized dataset of musculoskeletal radiographs and trained deep learning neural networks to classify radiographic projection and body side. METHODS In this IRB-approved retrospective single-center study, a dataset of musculoskeletal radiographs from 2011 to 2019 was retrieved and manually labeled for one of 45 possible radiographic projections and the depicted body side. Two classification networks were trained for the respective tasks using the Xception architecture with a custom network top and pretrained weights. Performance was evaluated on a hold-out test sample, and gradient-weighted class activation mapping (Grad-CAM) heatmaps were computed to visualize the influential image regions for network predictions. RESULTS A total of 13,098 studies comprising 23,663 radiographs were included with a patient-level dataset split, resulting in 19,183 training, 2,145 validation, and 2,335 test images. Focusing on paired body regions, training for side detection included 16,319 radiographs (13,284 training, 1,443 validation, and 1,592 test images). The models achieved an overall accuracy of 0.975 for projection and 0.976 for body-side classification on the respective hold-out test sample. Errors were primarily observed in projections with seamless anatomical transitions or non-orthograde adjustment techniques. CONCLUSIONS The deep learning neural networks demonstrated excellent performance in classifying radiographic projection and body side across a wide range of musculoskeletal radiographs. These networks have the potential to serve as presorting algorithms, optimizing radiologic workflow and enhancing patient care. 
RELEVANCE STATEMENT The developed networks excel at classifying musculoskeletal radiographs, providing valuable tools for research data extraction, standardized image sorting, and minimizing misclassifications in artificial intelligence systems, ultimately enhancing radiology workflow efficiency and patient care. KEY POINTS • A large-scale, well-characterized dataset was developed, covering a broad spectrum of musculoskeletal radiographs. • Deep learning neural networks achieved high accuracy in classifying radiographic projection and body side. • Grad-CAM heatmaps provided insight into network decisions, contributing to their interpretability and trustworthiness. • The trained models can help optimize radiologic workflow and manage large amounts of data.
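The abstract stresses a patient-level dataset split: every radiograph of a given patient is assigned to exactly one of training, validation, or test, so no patient leaks between subsets. A sketch of that grouping logic (illustrative only; the record layout is an assumption):

```python
import random

def patient_level_split(records, fractions=(0.8, 0.1, 0.1), seed=42):
    """Split (patient_id, image_id) records into train/val/test so that all
    images of a given patient land in the same subset (no patient leakage)."""
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)          # fixed seed for reproducibility
    rng.shuffle(patients)
    n_train = int(fractions[0] * len(patients))
    n_val = int(fractions[1] * len(patients))
    groups = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    return {name: [r for r in records if r[0] in pids]
            for name, pids in groups.items()}
```

Splitting at the image level instead would let near-identical views of one patient appear in both training and test, inflating the measured accuracy.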
Affiliation(s)
- Anna Fink - Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106, Freiburg, Germany
- Hien Tran - Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Marco Reisert - Department of Stereotactic and Functional Neurosurgery, and Medical Physics, Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Alexander Rau - Department of Diagnostic and Interventional Radiology, and Department of Neuroradiology, Medical Center - University of Freiburg, Freiburg, Germany
- Jörg Bayer - Department of Trauma and Orthopaedic Surgery, Schwarzwald-Baar Hospital, Villingen-Schwenningen, Germany
- Elmar Kotter - Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Fabian Bamberg - Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Freiburg, Germany
- Maximilian F Russe - Department of Diagnostic and Interventional Radiology, Medical Center - University of Freiburg, Freiburg, Germany
12
Suna A, Davidson A, Weil Y, Joskowicz L. Automated computation of radiographic parameters of distal radial metaphyseal fractures in forearm X-rays. Int J Comput Assist Radiol Surg 2023; 18:2179-2189. [PMID: 37097517 DOI: 10.1007/s11548-023-02907-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Accepted: 04/03/2023] [Indexed: 04/26/2023]
Abstract
PURPOSE Radiographic parameters (RPs) provide objective support for effective decision making in determining clinical treatment of distal radius fractures (DRFs). This paper presents a novel automatic RP computation pipeline for computing the six anatomical RPs associated with DRFs in anteroposterior (AP) and lateral (LAT) forearm radiographs. METHODS The pipeline consists of: (1) segmentation of the distal radius and ulna bones with six 2D Dynamic U-Net deep learning models; (2) landmark point detection and distal radius axis computation from the segmentations with geometric methods; (3) RP computation and generation of a quantitative DRF report and composite AP and LAT radiograph images. This hybrid approach combines the advantages of deep learning and model-based methods. RESULTS The pipeline was evaluated on 90 AP and 93 LAT radiographs for which ground-truth distal radius and ulna segmentations and RP landmarks were manually obtained by expert clinicians. It achieves an accuracy of 94% and 86% on the AP and LAT RPs, within observer variability, and an RP measurement difference of 1.4 ± 1.2° for the radial angle, 0.5 ± 0.6 mm for the radial length, 0.9 ± 0.7 mm for the radial shift, 0.7 ± 0.5 mm for the ulnar variance, 2.9 ± 3.3° for the palmar tilt and 1.2 ± 1.0 mm for the dorsal shift. CONCLUSION Our pipeline is the first fully automatic method that accurately and robustly computes the RPs for a wide variety of clinical forearm radiographs from different sources and hand orientations, with and without cast. The accurate and reliable computed RP measurements may support fracture severity assessment and clinical management.
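Step (3) of the pipeline reduces each RP to plane geometry on the detected landmarks. As a hedged illustration (not the authors' code), the radial angle on the AP view can be computed as the angle between the articular-surface line and the perpendicular to the radius shaft axis; the landmark names below are assumptions:

```python
import math

def angle_deg(u, v):
    """Unsigned angle between 2D vectors u and v, in degrees."""
    cos_t = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def radial_angle(styloid_tip, ulnar_corner, shaft_direction):
    """Radial inclination (deg): angle between the articular line (radial
    styloid tip to ulnar corner of the articular surface) and the
    perpendicular to the radius shaft axis."""
    joint_line = (ulnar_corner[0] - styloid_tip[0], ulnar_corner[1] - styloid_tip[1])
    perpendicular = (-shaft_direction[1], shaft_direction[0])
    return angle_deg(joint_line, perpendicular)

# A styloid tip 10 mm radial and 4 mm distal to the ulnar corner, with a
# vertical shaft axis, yields an inclination of about 21.8 degrees.
inclination = radial_angle((10, 0), (0, 4), (0, 1))
```

The other linear RPs (radial length, ulnar variance, radial shift, dorsal shift) follow the same pattern as signed distances of landmarks along or across the shaft axis.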
Affiliation(s)
- Avigail Suna - School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel
- Amit Davidson - Department of Orthopedics, Hadassah University Medical Center, Jerusalem, Israel
- Yoram Weil - Department of Orthopedics, Hadassah University Medical Center, Jerusalem, Israel
- Leo Joskowicz - School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel
13
|
Goh CXY, Ho FCH. The Growing Problem of Radiologist Shortages: Perspectives From Singapore. Korean J Radiol 2023; 24:1176-1178. [PMID: 38016677 PMCID: PMC10700991 DOI: 10.3348/kjr.2023.0966] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2023] [Accepted: 10/03/2023] [Indexed: 11/30/2023] Open
Affiliation(s)
- Charles Xian Yang Goh - Department of Nuclear Medicine and Molecular Imaging, Singapore General Hospital, Singapore
- Francis Cho Hao Ho - Department of Radiation Oncology, National University Cancer Institute, Singapore
14
Aryasomayajula S, Hing CB, Siebachmeyer M, Naeini FB, Ejindu V, Leitch P, Gelfer Y, Zweiri Y. Developing an artificial intelligence diagnostic tool for paediatric distal radius fractures, a proof of concept study. Ann R Coll Surg Engl 2023; 105:721-728. [PMID: 37642151 PMCID: PMC10618045 DOI: 10.1308/rcsann.2023.0017] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/31/2023] Open
Abstract
INTRODUCTION In the UK, 1 in 50 children sustains a fractured bone each year, yet studies have shown that 34% of children sustaining an injury do not have a visible fracture on initial radiographs. Wrist fractures are particularly difficult to identify because the growth plate poses diagnostic challenges when interpreting radiographs. METHODS We developed convolutional neural network (CNN) image recognition software to detect fractures in radiographs from children. A consecutive dataset of 5,000 radiographs of the distal radius in children aged under 19 years from 2014 to 2019 was used to train the CNN. In addition, transfer learning from a VGG16 CNN pretrained on non-radiological images was applied to improve generalisation of the network and the classification of radiographs. Hyperparameter tuning techniques were used, and the model was compared with the radiology reports that accompanied the original images to determine diagnostic test accuracy. RESULTS The training set consisted of 2,881 radiographs with a fracture and 1,571 without; 548 radiographs were outliers. With additional augmentation, the final dataset consisted of 15,498 images. The dataset was randomly split into three subsets: training (70%), validation (10%) and test (20%). After training for 20 epochs, the diagnostic test accuracy was 85%. DISCUSSION A CNN model is feasible for diagnosing paediatric wrist fractures. We demonstrated that this application could be utilised as a tool for improving diagnostic accuracy. Future work would involve developing automated treatment pathways for diagnosis, reducing unnecessary hospital visits and allowing staff redeployment to other areas.
Affiliation(s)
- C B Hing - St George's University Hospitals NHS Foundation Trust, UK
- M Siebachmeyer - St George's University Hospitals NHS Foundation Trust, UK
- V Ejindu - St George's University Hospitals NHS Foundation Trust, UK
- P Leitch - St George's University London, UK
- Y Gelfer - St George's University Hospitals NHS Foundation Trust, UK
15
Su Z, Adam A, Nasrudin MF, Ayob M, Punganan G. Skeletal Fracture Detection with Deep Learning: A Comprehensive Review. Diagnostics (Basel) 2023; 13:3245. [PMID: 37892066 PMCID: PMC10606060 DOI: 10.3390/diagnostics13203245] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Revised: 10/12/2023] [Accepted: 10/13/2023] [Indexed: 10/29/2023] Open
Abstract
Deep learning models have shown great promise in diagnosing skeletal fractures from X-ray images. However, challenges remain that hinder progress in this field. Firstly, a lack of clear definitions for recognition, classification, detection, and localization tasks hampers the consistent development and comparison of methodologies. Existing reviews often lack technical depth or have limited scope. Additionally, the absence of explainability facilities undermines the clinical application of, and expert confidence in, the results. To address these issues, this comprehensive review analyzes and evaluates 40 out of 337 recent papers identified in prestigious databases, including WOS, Scopus, and EI. The objectives of this review are threefold. Firstly, precise definitions are established for the bone fracture recognition, classification, detection, and localization tasks within deep learning. Secondly, each study is summarized based on key aspects such as the bones involved, research objectives, dataset sizes, methods employed, results obtained, and concluding remarks; this process distills the diverse approaches into a generalized processing framework or workflow. Thirdly, this review identifies the crucial areas for future research in deep learning models for bone fracture diagnosis. These include enhancing network interpretability, integrating multimodal clinical information, providing therapeutic schedule recommendations, and developing advanced visualization methods for clinical application. By addressing these challenges, deep learning models can be made more intelligent and specialized in this domain. In conclusion, this review fills the gap in precise task definitions within deep learning for bone fracture diagnosis and provides a comprehensive analysis of the recent research. The findings serve as a foundation for future advancements, enabling improved interpretability, multimodal integration, clinical decision support, and advanced visualization techniques.
Affiliation(s)
- Zhihao Su - Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Afzan Adam - Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Mohammad Faidzul Nasrudin - Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Masri Ayob - Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Gauthamen Punganan - Department of Orthopedics and Traumatology, Hospital Raja Permaisuri Bainun, Ipoh 30450, Perak, Malaysia
16
Beyraghi S, Ghorbani F, Shabanpour J, Lajevardi ME, Nayyeri V, Chen PY, Ramahi OM. Microwave bone fracture diagnosis using deep neural network. Sci Rep 2023; 13:16957. [PMID: 37805642 PMCID: PMC10560237 DOI: 10.1038/s41598-023-44131-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 10/04/2023] [Indexed: 10/09/2023] Open
Abstract
This paper studies the feasibility of a deep neural network (DNN) approach for bone fracture diagnosis based on the non-invasive propagation of radio-frequency waves. In contrast to previous "semi-automated" techniques, where X-ray images were used as the network input, in this work we use S-parameter profiles for DNN training to avoid labeling and data-collection problems. Our designed network can simultaneously classify different fracture types (normal, transverse, oblique, and comminuted) and estimate the length of the cracks. The proposed system can be used as a portable device in ambulances, retirement homes, and low-income settings for fast preliminary diagnosis in emergency situations when expert radiologists are not available. Using accurate modeling of the human body, as well as varying tissue diameters to emulate various anatomical regions, we created our datasets. Our numerical results show that the designed DNN is successfully trained without overfitting. Finally, to validate the numerical results, different sets of experiments were performed on sheep femur bones covered by a liquid phantom. Experimental results demonstrate that fracture types can be correctly classified without using potentially harmful, ionizing X-rays.
Affiliation(s)
- Sina Beyraghi - Department of Information and Communications Technologies, Pompeu Fabra University, Barcelona, Spain
- Fardin Ghorbani - School of Electrical Engineering, Iran University of Science and Technology, Tehran, 1684613114, Iran
- Javad Shabanpour - Department of Electronics and Nanoengineering, School of Electrical Engineering, Aalto University, 02150, Espoo, Finland
- Mir Emad Lajevardi - Department of Electrical Engineering, Faculty of Electrical and Electronics, South Tehran Branch, Islamic Azad University, Tehran, 113654435, Iran
- Vahid Nayyeri - School of Advanced Technologies, Iran University of Science and Technology, Tehran, 1684613114, Iran
- Pai-Yen Chen - Department of Electrical and Computer Engineering, University of Illinois, Chicago, IL, 60607, USA
- Omar M Ramahi - Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, N2L3G1, Canada
17
|
Hasan Z, Key S, Lee M, Chen F, Aweidah L, Esmaili A, Sacks R, Singh N. A Deep Learning Algorithm to Identify Anatomical Landmarks on Computed Tomography of the Temporal Bone. J Int Adv Otol 2023; 19:360-367. [PMID: 37789621 PMCID: PMC10645193 DOI: 10.5152/iao.2023.231073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2023] [Accepted: 05/18/2023] [Indexed: 10/05/2023] Open
Abstract
BACKGROUND Petrous temporal bone cone-beam computed tomography scans help aid diagnosis and accurate identification of key operative landmarks in temporal bone and mastoid surgery. Our primary objective was to determine the accuracy of using a deep learning convolutional neural network algorithm to augment identification of structures on petrous temporal bone cone-beam computed tomography. Our secondary objective was to compare the accuracy of convolutional neural network structure identification when trained by a senior versus junior clinician. METHODS A total of 129 petrous temporal bone cone-beam computed tomography scans were obtained from an Australian public tertiary hospital. Key intraoperative landmarks were labeled in 68 scans using bounding boxes on axial and coronal slices at the level of the malleoincudal joint by an otolaryngology registrar and board-certified otolaryngologist. Automated structure identification was performed on axial and coronal slices of the remaining 61 scans using a convolutional neural network (Microsoft Custom Vision) trained using the labeled dataset. Convolutional neural network structure identification accuracy was manually verified by an otolaryngologist, and accuracy when trained by the registrar and otolaryngologist labeled datasets respectively was compared. RESULTS The convolutional neural network was able to perform automated structure identification in petrous temporal bone cone-beam computed tomography scans with a high degree of accuracy in both axial (0.958) and coronal (0.924) slices (P < .001). Convolutional neural network accuracy was proportionate to the seniority of the training clinician in structures with features more difficult to distinguish on single slices such as the cochlea, vestibule, and carotid canal. 
CONCLUSION Convolutional neural networks can perform automated structure identification in petrous temporal bone cone-beam computed tomography scans with a high degree of accuracy, with the performance being proportionate to the seniority of the training clinician. Training of the convolutional neural network by the most senior clinician is desirable to maximize the accuracy of the results.
Affiliation(s)
- Zubair Hasan - University of Sydney, Faculty of Medicine and Health, New South Wales, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
- Seraphina Key - Monash University, Faculty of Medicine, Nursing and Health Sciences, Victoria, Australia
- Michael Lee - University of Sydney, Faculty of Medicine and Health, New South Wales, Australia
- Fiona Chen - Department of Otolaryngology, Royal Children’s Hospital Melbourne, Melbourne, Australia
- Layal Aweidah - Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
- Aaron Esmaili - Department of Otolaryngology, Sir Charles Gairdner Hospital, Nedlands, Australia
- Raymond Sacks - University of Sydney, Faculty of Medicine and Health, New South Wales, Australia; Department of Otolaryngology - Head and Neck Surgery, Concord Hospital, New South Wales, Australia
- Narinder Singh - University of Sydney, Faculty of Medicine and Health, New South Wales, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
18
Liu Y, Liu W, Chen H, Xie S, Wang C, Liang T, Yu Y, Liu X. Artificial intelligence versus radiologist in the accuracy of fracture detection based on computed tomography images: a multi-dimensional, multi-region analysis. Quant Imaging Med Surg 2023; 13:6424-6433. [PMID: 37869340 PMCID: PMC10585498 DOI: 10.21037/qims-23-428] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Accepted: 08/18/2023] [Indexed: 10/24/2023]
Abstract
Background Extremity fractures are a leading cause of death and disability, especially in the elderly. Avulsion fractures are also among the most commonly missed diagnoses, and delayed diagnosis leads to higher litigation rates. Therefore, this study evaluates the diagnostic efficiency of an artificial intelligence (AI) model before and after optimization based on computed tomography (CT) images and then compares it with that of radiologists, especially for avulsion fractures. Methods The digital radiography (DR) and CT images of adult limb trauma in our hospital from 2017 to 2020 were retrospectively collected, with or without 1 or more fractures of the shoulder, elbow, wrist, hand, hip, knee, ankle, and foot. Labeling of fractures referred to the visualization of the fracture on the corresponding CT images. After training the pre-optimized AI model, the diagnostic performance of the pre-optimized AI model, the optimized AI model, and the initial radiological reports was evaluated. At the lesion level, the detection rate of avulsion and non-avulsion fractures was analyzed, whereas at the case level, accuracy, sensitivity, and specificity were compared among them. Results The total dataset (1,035 cases) was divided into a training set (n=675), a validation set (n=169), and a test set (n=191) in a balanced joint distribution. At the lesion level, the detection rates of avulsion fractures (57.89% vs. 35.09%, P=0.004) and non-avulsion fractures (85.64% vs. 71.29%, P<0.001) by the optimized AI were significantly higher than those by the pre-optimized AI. The average precision (AP) of the optimized AI model for all lesions was higher than that of the pre-optimized AI model (0.582 vs. 0.425). The detection rate of avulsion fractures by the optimized AI model was significantly higher than that by radiologists (57.89% vs. 29.82%, P=0.002).
For non-avulsion fractures, there was no significant difference in detection rate between the optimized AI model and radiologists (P=0.853). At the case level, the accuracy (86.40% vs. 71.93%, P<0.001) and sensitivity (87.29% vs. 73.48%, P<0.001) of the optimized AI were significantly higher than those of the pre-optimized AI model. There was no statistical difference in accuracy, sensitivity, or specificity between the optimized AI model and the radiologists (P>0.05). Conclusions The optimized AI model improves diagnostic efficacy in detecting extremity fractures on radiographs, and it is significantly better than radiologists at detecting avulsion fractures, which may be helpful in orthopedic emergency practice.
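The case-level figures above are standard confusion-matrix arithmetic. A small sketch with invented counts (not the study's data):

```python
def case_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts:
    tp/fn count fracture cases, tn/fp count non-fracture cases."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # fraction of fracture cases detected
        "specificity": tn / (tn + fp),   # fraction of normal cases cleared
    }

# Hypothetical counts: 80 detected fractures, 25 missed, 90 true negatives,
# 5 false alarms.
m = case_metrics(tp=80, fp=5, tn=90, fn=25)
```

Sensitivity and specificity trade off against each other as the model's decision threshold moves, which is why studies in this list report both alongside accuracy.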
Affiliation(s)
- Yunxia Liu - Department of Radiology, The Third Medical Center of Chinese PLA General Hospital, Beijing, China
- Weifang Liu - Department of Radiology, Civil Aviation General Hospital, Beijing, China
- Sheng Xie - Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Ce Wang - Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Tian Liang - Department of Radiology, China-Japan Friendship Hospital, Beijing, China
19
Abedeen I, Rahman MA, Prottyasha FZ, Ahmed T, Chowdhury TM, Shatabda S. FracAtlas: A Dataset for Fracture Classification, Localization and Segmentation of Musculoskeletal Radiographs. Sci Data 2023; 10:521. [PMID: 37543626 PMCID: PMC10404222 DOI: 10.1038/s41597-023-02432-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Accepted: 07/31/2023] [Indexed: 08/07/2023] Open
Abstract
Digital radiography is one of the most common and cost-effective standards for the diagnosis of bone fractures. For such diagnoses, expert intervention is required, which is time-consuming and demands rigorous training. With the recent growth of computer vision algorithms, there is a surge of interest in computer-aided diagnosis. The development of algorithms demands large datasets with proper annotations. Existing X-ray datasets are either small or lack proper annotation, which hinders the development of machine-learning algorithms and the evaluation of their relative performance for classification, localization, and segmentation. We present FracAtlas, a new dataset of X-ray scans curated from images collected from 3 major hospitals in Bangladesh. Our dataset includes 4,083 images that have been manually annotated for bone fracture classification, localization, and segmentation with the help of 2 expert radiologists and an orthopedist using the open-source labeling platform makesense.ai. There are 717 images with 922 instances of fractures. Each fracture instance has its own mask and bounding box, and the scans also have global labels for classification tasks. We believe the dataset will be a valuable resource for researchers interested in developing and evaluating machine learning algorithms for bone fracture diagnosis.
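Per-instance bounding boxes and masks plus global labels are typically distributed as a COCO-style JSON file. A sketch of consuming such annotations (the exact FracAtlas field layout may differ; treat the key names below as assumptions):

```python
import json

def fracture_counts(annotation_json):
    """Count images, fracture instances, and instances per image in a
    COCO-style annotation file (assumed 'images'/'annotations' keys)."""
    coco = json.loads(annotation_json)
    per_image = {}
    for ann in coco["annotations"]:
        per_image[ann["image_id"]] = per_image.get(ann["image_id"], 0) + 1
    return len(coco["images"]), len(coco["annotations"]), per_image
```

Instance counts exceeding image counts (922 instances in 717 images here) simply mean some radiographs carry more than one annotated fracture.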
Affiliation(s)
- Iftekharul Abedeen - Islamic University of Technology, Gazipur, 1704, Bangladesh; United International University, Dhaka, 1212, Bangladesh
- Md Ashiqur Rahman - Islamic University of Technology, Gazipur, 1704, Bangladesh; United International University, Dhaka, 1212, Bangladesh
- Tasnim Ahmed - Islamic University of Technology, Gazipur, 1704, Bangladesh
20
Rozwag C, Valentini F, Cotten A, Demondion X, Preux P, Jacques T. Elbow trauma in children: development and evaluation of radiological artificial intelligence models. Research in Diagnostic and Interventional Imaging 2023; 6:100029. [PMID: 39077546 PMCID: PMC11265386 DOI: 10.1016/j.redii.2023.100029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 04/24/2023] [Indexed: 07/31/2024]
Abstract
Rationale and Objectives To develop a model using artificial intelligence (A.I.) able to detect post-traumatic injuries on pediatric elbow X-rays, then to evaluate its performance in silico and its impact on radiologists' interpretation in clinical practice. Material and Methods A total of 1956 pediatric elbow radiographs performed following a trauma were retrospectively collected from 935 patients aged between 0 and 18 years. Deep convolutional neural networks were trained on these X-rays. The two best models were selected, then evaluated on an external test set involving 120 patients, whose X-rays were performed on different radiological equipment in another time period. Eight radiologists interpreted this external test set without, then with, the help of the A.I. models. Results Two models stood out: model 1 had an accuracy of 95.8% and an AUROC of 0.983, and model 2 had an accuracy of 90.5% and an AUROC of 0.975. On the external test set, model 1 kept a good accuracy of 82.5% and AUROC of 0.916, while model 2 lost accuracy, down to 69.2%, and AUROC, down to 0.793. Model 1 significantly improved radiologists' sensitivity (0.82 to 0.88, P = 0.016) and accuracy (0.86 to 0.88, P = 0.047), while model 2 significantly decreased the specificity of readers (0.86 to 0.83, P = 0.031). Conclusion End-to-end development of a deep learning model to assess post-traumatic injuries on elbow X-rays in children was feasible and showed that models with close metrics in silico can unpredictably lead radiologists to either improve or lower their performance in clinical settings.
Affiliation(s)
- Clémence Rozwag: Université de Lille, Lille, France; Centre hospitalier universitaire de Lille, Lille, France
- Franck Valentini: Université de Lille, Lille, France; Inria Lille – Nord Europe, équipe Scool, Lille, France; CNRS UMR 9189 – CRIStAL, Lille, France; École Centrale de Lille, Lille, France
- Anne Cotten: Université de Lille, Lille, France; Centre hospitalier universitaire de Lille, Lille, France
- Xavier Demondion: Université de Lille, Lille, France; Centre hospitalier universitaire de Lille, Lille, France
- Philippe Preux: Université de Lille, Lille, France; Inria Lille – Nord Europe, équipe Scool, Lille, France; CNRS UMR 9189 – CRIStAL, Lille, France; École Centrale de Lille, Lille, France
- Thibaut Jacques: Université de Lille, Lille, France; Centre hospitalier universitaire de Lille, Lille, France
21
Lee KC, Choi IC, Kang CH, Ahn KS, Yoon H, Lee JJ, Kim BH, Shim E. Clinical Validation of an Artificial Intelligence Model for Detecting Distal Radius, Ulnar Styloid, and Scaphoid Fractures on Conventional Wrist Radiographs. Diagnostics (Basel) 2023; 13:1657. [PMID: 37175048 PMCID: PMC10178713 DOI: 10.3390/diagnostics13091657]
Abstract
This study aimed to assess the feasibility and performance of an artificial intelligence (AI) model for detecting three common wrist fractures: distal radius, ulnar styloid process, and scaphoid. The AI model was trained with a dataset of 4432 images containing both fractured and non-fractured wrist images. In total, 593 subjects were included in the clinical test. Two human experts independently diagnosed and labeled the fracture sites using bounding boxes to build the ground truth. Two novice radiologists also performed the same task, both with and without model assistance. The sensitivity, specificity, accuracy, and area under the curve (AUC) were calculated for each wrist location. The AUCs for detecting distal radius, ulnar styloid, and scaphoid fractures per wrist were 0.903 (95% CI 0.887-0.918), 0.925 (95% CI 0.911-0.939), and 0.808 (95% CI 0.748-0.967), respectively. When assisted by the AI model, the scaphoid fracture AUCs of the two novice radiologists significantly increased from 0.75 (95% CI 0.66-0.83) to 0.85 (95% CI 0.77-0.93) and from 0.71 (95% CI 0.62-0.80) to 0.80 (95% CI 0.71-0.88), respectively. Overall, the developed AI model was found to be reliable for detecting wrist fractures, particularly scaphoid fractures, which are commonly missed.
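The ground truth here was built from expert-drawn bounding boxes, and agreement between a predicted and a reference box is conventionally scored with intersection-over-union (IoU). A minimal sketch (the study's exact matching threshold is not reproduced here):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A prediction is typically counted as a true positive when its IoU with an expert box exceeds a fixed threshold (0.5 is a common choice).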
Affiliation(s)
- Kyu-Chong Lee: Department of Radiology, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- In Cheul Choi: Department of Orthopedic Surgery, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Chang Ho Kang: Department of Radiology, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Kyung-Sik Ahn: Department of Radiology, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Heewon Yoon: Department of Radiology, Korea University Anam Hospital, Seoul 02841, Republic of Korea
- Baek Hyun Kim: Department of Radiology, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
- Euddeum Shim: Department of Radiology, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
22
Anttila TT, Karjalainen TV, Mäkelä TO, Waris EM, Lindfors NC, Leminen MM, Ryhänen JO. Detecting Distal Radius Fractures Using a Segmentation-Based Deep Learning Model. J Digit Imaging 2023; 36:679-687. [PMID: 36542269 PMCID: PMC10039188 DOI: 10.1007/s10278-022-00741-5]
Abstract
Deep learning algorithms can be used to classify medical images. In distal radius fracture treatment, fracture detection and radiographic assessment of fracture displacement are critical steps. The aim of this study was to use pixel-level annotations of fractures to develop a deep learning model for precise distal radius fracture detection. We randomly divided 3785 consecutive emergency wrist radiograph examinations from six hospitals into a training set (3399 examinations) and a test set (386 examinations). The training set was used to develop the deep learning model and the test set to assess its validity. The consensus of three hand surgeons was used as the gold standard for the test set. The area under the ROC curve was 0.97 (CI 0.95-0.98) overall and 0.95 (CI 0.92-0.98) for examinations without a cast. Fractures were identified with higher accuracy in the postero-anterior radiographs than in the lateral radiographs. Our deep learning model performed well in a multi-hospital setting with radiographs from multiple equipment manufacturers. Thus, segmentation-based deep learning models may provide additional benefit. Further research is needed with algorithm comparison and external validation.
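A segmentation-based detector outputs a per-pixel fracture probability map, so reporting the examination-level ROC metrics above requires reducing each map to a single score. One common reduction, shown here as an illustrative sketch rather than the authors' actual rule, takes the peak pixel probability while suppressing isolated supra-threshold pixels:

```python
import numpy as np

def exam_score(prob_map, thresh=0.5, min_pixels=10):
    """Reduce a pixel-wise fracture probability map to one examination-level
    score: the peak probability, zeroed out when fewer than `min_pixels`
    pixels exceed `thresh` (filters isolated noisy responses)."""
    hot = int((prob_map >= thresh).sum())
    return float(prob_map.max()) if hot >= min_pixels else 0.0
```

The resulting scalar per examination is what an ROC curve like the 0.97/0.95 AUCs above would be computed over; `thresh` and `min_pixels` are hypothetical tuning parameters.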
Affiliation(s)
- Turkka T Anttila: Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
- Teemu V Karjalainen: Department of Orthopedics, Traumatology and Hand Surgery, Central Finland Hospital, Jyvaskyla, Finland
- Teemu O Mäkelä: Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Department of Physics, University of Helsinki, Helsinki, Finland
- Eero M Waris: Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
- Nina C Lindfors: Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
- Miika M Leminen: Analytics and AI Development Services, IT Department, Helsinki University Hospital, Helsinki, Finland; Department of Otorhinolaryngology and Phoniatrics, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Jorma O Ryhänen: Musculoskeletal and Plastic Surgery, Department of Hand Surgery, University of Helsinki and Helsinki University Hospital, Topeliuksenkatu 5B, Helsinki, 00260, Finland
23
Kim T, Goh TS, Lee JS, Lee JH, Kim H, Jung ID. Transfer learning-based ensemble convolutional neural network for accelerated diagnosis of foot fractures. Phys Eng Sci Med 2023; 46:265-277. [PMID: 36625995 DOI: 10.1007/s13246-023-01215-w]
Abstract
The complex shape of the foot, consisting of 26 bones and variable ligaments, tendons, and muscles, leads to misdiagnosis of foot fractures. Despite the introduction of artificial intelligence (AI) to diagnose fractures, the accuracy of foot fracture diagnosis is lower than that of conventional methods. We developed an AI assistant system that supports consistent diagnosis and helps interns or non-experts improve their diagnosis of foot fractures, and compared the effectiveness of the AI assistance across groups with different proficiency levels. Contrast-limited adaptive histogram equalization was used to improve the visibility of the original radiographs, and data augmentation was applied to prevent overfitting. Preprocessed radiographs were fed to an ensemble of transfer learning-based convolutional neural networks (CNNs) developed for foot fracture detection from three models: InceptionResNetV2, MobileNetV1, and ResNet152V2. After training the model, score class activation mapping was applied to visualize the fracture based on the model prediction. The prediction result was evaluated by the receiver operating characteristic (ROC) curve, its area under the curve (AUC), and the F1-score. On the test set, the ensemble model exhibited better classification ability (F1-score: 0.837, AUC: 0.95, accuracy: 86.1%) than the single models, which showed an accuracy of 82.4%. With AI assistance, the accuracy of the orthopedic fellow, resident, intern, and student groups improved by 3.75%, 7.25%, 6.25%, and 7%, respectively, and diagnosis time was reduced by 21.9%, 14.7%, 24.4%, and 34.6%, respectively.
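The ensemble above combines three CNNs; a common combination rule is to average the per-model class probabilities before thresholding. A minimal sketch with a hand-rolled F1 score (the paper's exact fusion rule is not specified in the abstract, so equal weighting is assumed):

```python
def ensemble_predict(prob_lists, thresh=0.5):
    """Average per-model fracture probabilities sample-by-sample and
    threshold the mean to get a binary prediction."""
    means = [sum(ps) / len(ps) for ps in zip(*prob_lists)]
    return [int(m >= thresh) for m in means]

def f1_score(y_true, y_pred):
    """F1 = 2*TP / (2*TP + FP + FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)
```

Each inner list in `prob_lists` holds one model's probabilities over the same samples, mimicking the InceptionResNetV2/MobileNetV1/ResNet152V2 trio.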
Affiliation(s)
- Taekyeong Kim: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Tae Sik Goh: Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Jung Sub Lee: Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Ji Hyun Lee: Health Insurance Review & Assessment Service, Wonju, 26465, Republic of Korea
- Hayeol Kim: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Im Doo Jung: Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
24
A practical guide to the development and deployment of deep learning models for the orthopedic surgeon: part II. Knee Surg Sports Traumatol Arthrosc 2023; 31:1635-1643. [PMID: 36773057 DOI: 10.1007/s00167-023-07338-7]
Abstract
Deep learning has the potential to be one of the most transformative technologies to impact orthopedic surgery. Substantial innovation in this area has occurred over the past 5 years, but clinically meaningful advancements remain limited by a disconnect between clinical and technical experts. That is, it is likely that few orthopedic surgeons possess both the clinical knowledge necessary to identify orthopedic problems, and the technical knowledge needed to implement deep learning-based solutions. To maximize the utilization of rapidly advancing technologies derived from deep learning models, orthopedic surgeons should understand the steps needed to design, organize, implement, and evaluate a deep learning project and its workflow. Equipping surgeons with this knowledge is the objective of this three-part editorial review. Part I described the processes involved in defining the problem, team building, data acquisition, curation, labeling, and establishing the ground truth. Building on that, this review (Part II) provides guidance on pre-processing and augmenting the data, making use of open-source libraries/toolkits, and selecting the required hardware to implement the pipeline. Special considerations regarding model training and evaluation unique to deep learning models relative to "shallow" machine learning models are also reviewed. Finally, guidance pertaining to the clinical deployment of deep learning models in the real world is provided. As in Part I, the focus is on applications of deep learning for computer vision and imaging.
25
Moassefi M, Faghani S, Khosravi B, Rouzrokh P, Erickson BJ. Artificial Intelligence in Radiology: Overview of Application Types, Design, and Challenges. Semin Roentgenol 2023; 58:170-177. [PMID: 37087137 DOI: 10.1053/j.ro.2023.01.005]
26
Detecting pediatric wrist fractures using deep-learning-based object detection. Pediatr Radiol 2023; 53:1125-1134. [PMID: 36650360 DOI: 10.1007/s00247-023-05588-8]
Abstract
BACKGROUND Missed fractures are the leading cause of diagnostic error in the emergency department, and fractures of pediatric bones, particularly subtle wrist fractures, can be misidentified because of their varying characteristics and responses to injury. OBJECTIVE This study evaluated the utility of an object detection deep learning framework for classifying pediatric wrist fractures as positive or negative for fracture, including subtle buckle fractures of the distal radius, and evaluated the performance of this algorithm as augmentation to trainee radiograph interpretation. MATERIALS AND METHODS We obtained 395 posteroanterior wrist radiographs from unique pediatric patients (65% positive for fracture, 30% positive for distal radial buckle fracture) and divided them into train (n = 229), tune (n = 41), and test (n = 125) sets. We trained a Faster R-CNN (region-based convolutional neural network) deep learning object detection model. Two pediatric and two radiology residents evaluated radiographs initially without artificial intelligence (AI) assistance and then with access to the bounding box generated by the Faster R-CNN model. RESULTS The Faster R-CNN model demonstrated an area under the curve (AUC) of 0.92 (95% confidence interval [CI] 0.87-0.97), accuracy of 88% (n = 110/125; 95% CI 81-93%), sensitivity of 88% (n = 70/80; 95% CI 78-94%) and specificity of 89% (n = 40/45; 95% CI 76-96%) in identifying any fracture, and identified 90% of buckle fractures (n = 35/39; 95% CI 76-97%). Access to Faster R-CNN model predictions significantly improved average resident accuracy from 80 to 93% in detecting any fracture (P < 0.001) and from 69 to 92% in detecting buckle fracture (P < 0.001). After accessing AI predictions, residents significantly outperformed AI in cases of disagreement (73% resident correct vs. 27% AI, P = 0.002). CONCLUSION An object-detection-based deep learning approach trained with only a few hundred examples identified radiographs containing pediatric wrist fractures with high accuracy. Access to model predictions significantly improved resident accuracy in diagnosing these fractures.
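The 395 radiographs above were partitioned into fixed train/tune/test sets. A reproducible random partition can be sketched as follows (the seed and helper name are illustrative, not from the paper):

```python
import random

def split_dataset(items, n_train, n_tune, seed=42):
    """Shuffle deterministically with a fixed seed, then slice into
    disjoint train/tune/test subsets; the test set takes the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)
    return (items[:n_train],
            items[n_train:n_train + n_tune],
            items[n_train + n_tune:])
```

With 395 items and n_train = 229, n_tune = 41, the remainder is the 125-image test set, matching the split reported above.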
27
Keller M, Guebeli A, Thieringer F, Honigmann P. Artificial intelligence in patient-specific hand surgery: a scoping review of literature. Int J Comput Assist Radiol Surg 2023. [PMID: 36633789 PMCID: PMC10363089 DOI: 10.1007/s11548-023-02831-3]
Abstract
PURPOSE The implementation of artificial intelligence in hand surgery and rehabilitation is gaining popularity. The purpose of this scoping review was to give an overview of implementations of artificial intelligence in hand surgery and rehabilitation and their current significance in clinical practice. METHODS A systematic literature search of the MEDLINE/PubMed and Cochrane Collaboration libraries was conducted. The review was conducted according to the framework outlined by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews. A narrative summary of the papers is presented to give an orienting overview of this rapidly evolving topic. RESULTS The primary search yielded 435 articles. After application of the inclusion/exclusion criteria and addition of the supplementary search, 235 articles were included in the final review. To facilitate navigation through this heterogeneous field, the articles were clustered into four groups of thematically related publications. The most common applications of artificial intelligence in hand surgery and rehabilitation target automated image analysis of anatomic structures, fracture detection and localization, and automated screening for other hand and wrist pathologies such as carpal tunnel syndrome, rheumatoid arthritis, or osteoporosis. Compared to other medical subspecialties, the number of applications in hand surgery is still small. CONCLUSION Although various promising applications of artificial intelligence in hand surgery and rehabilitation show strong performances, their implementation mostly takes place within the context of experimental studies. Therefore, their use in daily clinical routine is still limited.
Affiliation(s)
- Marco Keller: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland
- Alissa Guebeli: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Plastic and Hand Surgery, Kantonsspital Aarau, 5001, Aarau, Switzerland
- Florian Thieringer: Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Oral and Cranio-Maxillofacial Surgery, University Hospital Basel, Basel, Switzerland
- Philipp Honigmann: Hand Surgery, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland, 4410, Liestal, Switzerland; Medical Additive Manufacturing Research Group, Department of Biomedical Engineering, University of Basel, 4123, Allschwil, Switzerland; Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
28
A Minority Class Balanced Approach Using the DCNN-LSTM Method to Detect Human Wrist Fracture. Life (Basel) 2023; 13:133. [PMID: 36676082 PMCID: PMC9861673 DOI: 10.3390/life13010133]
Abstract
The emergency department of hospitals receives a massive number of patients with wrist fractures. For the clinical diagnosis of a suspected fracture, X-ray imaging is the major screening tool. Wrist fractures are a significant global health concern for children, adolescents, and the elderly, and a missed diagnosis on medical imaging can have significant consequences for patients, resulting in delayed treatment and poor functional recovery. Therefore, an intelligent method is needed to precisely diagnose wrist fractures via an automated tool that serves as a second opinion for doctors. In this research, a fused deep learning model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is proposed to detect wrist fractures from X-ray images and thereby lessen the number of missed fractures. The dataset, acquired from Mendeley, comprises 192 wrist X-ray images. In this framework, image pre-processing is applied, the data augmentation approach is used to solve the class imbalance problem by generating rotated oversamples of images for minority classes during the training process, and the pre-processed and augmented normalized images are fed into a 28-layer dilated CNN (DCNN) to extract deep valuable features. Deep features are then fed to the proposed LSTM network to distinguish fractured wrists from normal ones. The experimental results of the DCNN-LSTM with and without augmentation are compared with those of other deep learning models, and the proposed work is also compared to existing algorithms in terms of accuracy, sensitivity, specificity, precision, the F1-score, and kappa. The results show that the DCNN-LSTM fusion achieves higher accuracy and has high potential for medical applications as a second opinion.
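The class-imbalance fix described above, rotated oversamples of the minority class generated during training, can be sketched with NumPy. The rotation angles and helper name are illustrative; the paper's exact augmentation code is not reproduced here:

```python
import numpy as np

def oversample_minority(images, labels, minority_label):
    """Append 90/180/270-degree rotations of each minority-class image,
    roughly quadrupling the minority class while leaving the majority
    class untouched."""
    out_imgs, out_lbls = list(images), list(labels)
    for img, lbl in zip(images, labels):
        if lbl == minority_label:
            for k in (1, 2, 3):  # number of 90-degree rotations
                out_imgs.append(np.rot90(img, k))
                out_lbls.append(lbl)
    return out_imgs, out_lbls
```

Rotation-based oversampling is attractive for radiographs because it adds plausible variation without inventing new anatomy, unlike synthetic-sample methods.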
29
Lin X, Yan Z, Kuang Z, Zhang H, Deng X, Yu L. Fracture R-CNN: An anchor-efficient anti-interference framework for skull fracture detection in CT images. Med Phys 2022; 49:7179-7192. [PMID: 35713606 DOI: 10.1002/mp.15809]
Abstract
BACKGROUND Skull fracture, as a common traumatic brain injury, can lead to multiple complications including bleeding, leakage of cerebrospinal fluid, infection, and seizures. Automatic skull fracture detection (SFD) is of great importance, especially in emergency medicine. PURPOSE Existing algorithms for SFD, developed based on hand-crafted features, suffer from low detection accuracy due to poor generalizability to unseen samples. Deploying deep detectors designed for natural images, such as Faster Region-based Convolutional Neural Network (R-CNN), can be helpful but is highly redundant and produces nonnegligible false detections due to cranial suture and skull base interference. Therefore, we, for the first time, propose an anchor-efficient anti-interference deep learning framework named Fracture R-CNN for accurate SFD with low computational cost. METHODS The proposed Fracture R-CNN is developed by incorporating the prior knowledge used in clinical diagnosis into the original Faster R-CNN. Specifically, based on the distribution of skull fractures, we first propose an adaptive anchoring region proposal network (AA-RPN) to generate proposals for diverse-scale fractures with low computational complexity. Then, based on the prior knowledge that cranial sutures exist at the junctions of bones and usually contain sclerotic margins, we design an anti-interference head (A-Head) network to eliminate cranial suture interference for better detection. In addition, to further enhance the anti-interference ability of the proposed A-Head, a difficulty-balanced weighted loss function is proposed to place more emphasis on distinguishing the interference areas of the skull base and the cranial sutures during training. RESULTS Experimental results demonstrate that the proposed Fracture R-CNN outperforms the current state-of-the-art (SOTA) deep detectors for SFD with a higher recall and fewer false detections. Compared to Faster R-CNN, the proposed Fracture R-CNN improves the average precision (AP) by 11.74% and the free-response receiver operating characteristic (FROC) score by 11.08%. By validating on various backbones, we further demonstrate the architecture independence of Fracture R-CNN, making it extendable to other detection applications. CONCLUSIONS As a deep learning framework customized for SFD, Fracture R-CNN effectively overcomes the unique challenges of SFD at lower computational cost, leading to better detection performance than the SOTA deep detectors. Moreover, we believe the prior knowledge explored for Fracture R-CNN will shed new light on future deep learning approaches to SFD.
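A difficulty-balanced weighted loss of the kind described above can be thought of as per-sample reweighting of a standard cross-entropy, with larger weights on samples from interference regions (cranial sutures, skull base). The abstract does not give the exact weighting scheme, so the following is only an illustrative sketch:

```python
import math

def weighted_bce(probs, targets, weights):
    """Binary cross-entropy with per-sample difficulty weights: samples
    from hard interference regions would receive weight > 1, pulling the
    optimizer toward distinguishing them."""
    eps = 1e-12  # guard against log(0)
    total = sum(
        -w * (y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
        for p, y, w in zip(probs, targets, weights)
    )
    return total / len(probs)
```

Doubling a sample's weight doubles its contribution to the loss, which is the mechanism by which suture-like regions get extra training emphasis.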
Affiliation(s)
- Xian Lin: School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Zengqiang Yan: School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Zhuo Kuang: School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
- Hang Zhang: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xianbo Deng: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Li Yu: School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan, China
30
Automated fracture screening using an object detection algorithm on whole-body trauma computed tomography. Sci Rep 2022; 12:16549. [PMID: 36192521 PMCID: PMC9529907 DOI: 10.1038/s41598-022-20996-w]
Abstract
The emergency department is an environment with a potential risk for diagnostic errors during trauma care, particularly for fractures. Convolutional neural network (CNN) deep learning methods are now widely used in medicine because they improve diagnostic accuracy, decrease misinterpretation, and improve efficiency. In this study, we investigated whether automatic localization and classification using CNN could be applied to pelvic, rib, and spine fractures. We also examined whether this fracture detection algorithm could help physicians in fracture diagnosis. A total of 7664 whole-body CT axial slices (chest, abdomen, pelvis) from 200 patients were used. Sensitivity, precision, and F1-score were calculated to evaluate the performance of the CNN model. For the grouped mean values for pelvic, spine, or rib fractures, the sensitivity was 0.786, precision was 0.648, and F1-score was 0.711. Moreover, with CNN model assistance, surgeons showed improved sensitivity for detecting fractures and the time of reading and interpreting CT scans was reduced, especially for less experienced orthopedic surgeons. Application of the CNN model may lead to reductions in missed fractures from whole-body CT images and to faster workflows and improved patient care through efficient diagnosis in polytrauma patients.
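The grouped means above are internally consistent: F1 is the harmonic mean of precision and sensitivity (recall), which can be checked directly with a one-line helper:

```python
def f1_from(sensitivity, precision):
    """F1 score as the harmonic mean of precision and sensitivity."""
    return 2 * precision * sensitivity / (precision + sensitivity)
```

Plugging in the reported sensitivity of 0.786 and precision of 0.648 gives approximately 0.710, matching the reported F1-score of 0.711 after rounding.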
31
Hill BG, Krogue JD, Jevsevar DS, Schilling PL. Deep Learning and Imaging for the Orthopaedic Surgeon: How Machines "Read" Radiographs. J Bone Joint Surg Am 2022; 104:1675-1686. [PMID: 35867718 DOI: 10.2106/jbjs.21.01387]
Abstract
➤ In the not-so-distant future, orthopaedic surgeons will be exposed to machines that begin to automatically "read" medical imaging studies using a technology called deep learning. ➤ Deep learning has demonstrated remarkable progress in the analysis of medical imaging across a range of modalities that are commonly used in orthopaedics, including radiographs, computed tomographic scans, and magnetic resonance imaging scans. ➤ There is a growing body of evidence showing clinical utility for deep learning in musculoskeletal radiography, as evidenced by studies that use deep learning to achieve an expert or near-expert level of performance for the identification and localization of fractures on radiographs. ➤ Deep learning is currently in the very early stages of entering the clinical setting, involving validation and proof-of-concept studies for automated medical image interpretation. ➤ The success of deep learning in the analysis of medical imaging has been propelling the field forward so rapidly that now is the time for surgeons to pause and understand how this technology works at a conceptual level, before (not after) the technology ends up in front of us and our patients. That is the purpose of this article.
Affiliation(s)
- Brandon G Hill: Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- Justin D Krogue: Google Health, Palo Alto, California; Department of Orthopaedic Surgery, University of California San Francisco, San Francisco, California
- David S Jevsevar: Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire; The Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
- Peter L Schilling: Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire; The Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
32
Koska OI, Çilengir AH, Uluç ME, Yücel A, Tosun Ö. All-star approach to a small medical imaging dataset: combined deep, transfer, and classical machine learning approaches for the determination of radial head fractures. Acta Radiol 2022; 64:1476-1483. [PMID: 36062584 DOI: 10.1177/02841851221122424]
Abstract
BACKGROUND Radial head fractures are often evaluated in emergency departments and can easily be missed. Given the high miss rate, automated or semi-automated detection methods that help physicians may be valuable. PURPOSE To evaluate the accuracy of combined deep, transfer, and classical machine learning approaches on a small dataset for the determination of radial head fractures. MATERIAL AND METHODS A total of 48 patients with a radial head fracture and 56 patients without a fracture on elbow radiographs were retrospectively evaluated. The input images were obtained by cropping anteroposterior elbow radiographs around a center point on the radial head. For fracture determination, an algorithm based on feature extraction using distinct pretrained networks (VGG16, ResNet50, InceptionV3, MobileNetV2) representing four different approaches was developed. Reduction of feature-space dimensionality, feeding of the most relevant features, and development of an ensemble of classifiers were utilized. RESULTS The best-performing algorithm consisted of preprocessing the input, computing the global maximum and global mean outputs of the four distinct pretrained networks, reducing dimensionality with univariate and ensemble feature selectors, and applying Support Vector Machine and Random Forest classifiers to the transformed and reduced dataset. A maximum accuracy of 90% with MobileNetV2 pretrained features was reached for fracture determination despite the small sample size. CONCLUSION Radial head fractures can be determined with a combined approach, and the limitations of a small sample size can be overcome by combining pretrained deep networks with classical machine learning methods.
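The pipeline above feeds pretrained-CNN feature vectors through univariate feature selection before the classical classifiers. A minimal NumPy sketch of a correlation-based univariate selector (the helper name and scoring rule are illustrative; the study's exact selectors are not reproduced here):

```python
import numpy as np

def select_top_k(X, y, k):
    """Rank features (columns of X) by absolute Pearson correlation with
    the binary label vector y and keep the k highest-scoring columns."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    )
    idx = np.argsort(corr)[::-1][:k]  # indices of the k best features
    return idx, X[:, idx]
```

With only ~100 patients and thousands of pretrained features, this kind of reduction is what keeps the downstream SVM and Random Forest from overfitting.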
Affiliation(s)
- Ozgur I Koska: Department of Biomedical Engineering, Dokuz Eylül University Engineering Faculty, İzmir, Turkey; ETHZ Computer Vision Laboratory, Zurich, Switzerland
- Muhsin Engin Uluç: Department of Radiology, Izmir Katip Celebi University Ataturk Training and Research Hospital, Izmir, Turkey
- Aylin Yücel: Department of Radiology, Afyonkarahisar Health Sciences University, Afyonkarahisar, Turkey
- Özgür Tosun: Department of Radiology, Izmir Katip Celebi University Ataturk Training and Research Hospital, Izmir, Turkey
|
33
|
Joshi D, Singh TP, Joshi AK. Deep learning-based localization and segmentation of wrist fractures on X-ray radiographs. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07510-z]
34
Klontzas ME, Karantanas AH. Research in Musculoskeletal Radiology: Setting Goals and Strategic Directions. Semin Musculoskelet Radiol 2022; 26:354-358. [PMID: 35654100 DOI: 10.1055/s-0042-1748319]
Abstract
The future of musculoskeletal (MSK) radiology is being built on research developments in the field. Over the past decade, MSK imaging research has been dominated by advancements in molecular imaging biomarkers, artificial intelligence, radiomics, and novel high-resolution equipment. Adequate preparation of trainees and specialists will ensure that current and future leaders will be prepared to embrace and critically appraise technological developments, will be up to date on clinical developments, such as the use of artificial tissues, will define research directions, and will actively participate and lead multidisciplinary research. This review presents an overview of the current MSK research landscape and proposes tangible future goals and strategic directions that will fortify the future of MSK radiology.
Affiliation(s)
- Michail E Klontzas
- Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
- Apostolos H Karantanas
- Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece; Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
35
CCE-Net: A rib fracture diagnosis network based on contralateral, contextual, and edge enhanced modules. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103620]
36
Kuo RYL, Harrison C, Curran TA, Jones B, Freethy A, Cussons D, Stewart M, Collins GS, Furniss D. Artificial Intelligence in Fracture Detection: A Systematic Review and Meta-Analysis. Radiology 2022; 304:50-62. [PMID: 35348381 DOI: 10.1148/radiol.211785]
Abstract
Background Patients with fractures are a common emergency presentation and may be misdiagnosed at radiologic imaging. An increasing number of studies apply artificial intelligence (AI) techniques to fracture detection as an adjunct to clinician diagnosis. Purpose To perform a systematic review and meta-analysis comparing the diagnostic performance in fracture detection between AI and clinicians in peer-reviewed publications and the gray literature (ie, articles published on preprint repositories). Materials and Methods A search of multiple electronic databases between January 2018 and July 2020 (updated June 2021) was performed that included any primary research studies that developed and/or validated AI for the purposes of fracture detection at any imaging modality and excluded studies that evaluated image segmentation algorithms. Meta-analysis with a hierarchical model to calculate pooled sensitivity and specificity was used. Risk of bias was assessed by using a modified Prediction Model Study Risk of Bias Assessment Tool, or PROBAST, checklist. Results Included for analysis were 42 studies, with 115 contingency tables extracted from 32 studies (55 061 images). Thirty-seven studies identified fractures on radiographs and five studies identified fractures on CT images. For internal validation test sets, the pooled sensitivity was 92% (95% CI: 88, 93) for AI and 91% (95% CI: 85, 95) for clinicians, and the pooled specificity was 91% (95% CI: 88, 93) for AI and 92% (95% CI: 89, 92) for clinicians. For external validation test sets, the pooled sensitivity was 91% (95% CI: 84, 95) for AI and 94% (95% CI: 90, 96) for clinicians, and the pooled specificity was 91% (95% CI: 81, 95) for AI and 94% (95% CI: 91, 95) for clinicians. There were no statistically significant differences between clinician and AI performance. There were 22 of 42 (52%) studies that were judged to have high risk of bias. 
Meta-regression identified multiple sources of heterogeneity in the data, including risk of bias and fracture type. Conclusion Artificial intelligence (AI) and clinicians had comparable reported diagnostic performance in fracture detection, suggesting that AI technology holds promise as a diagnostic adjunct in future clinical practice. Clinical trial registration no. CRD42020186641.
Affiliation(s)
- Rachel Y L Kuo
- From the Nuffield Department of Orthopedics, Rheumatology and Musculoskeletal Sciences, Botnar Research Centre, Old Road Headington, Oxford OX3 7LD, UK (R.Y.L.K., C.H., M.S., G.S.C., D.F.); Department of Plastic Surgery, John Radcliffe Hospital, Oxford, UK (T.A.C., A.F.); Department of Vascular Surgery, Royal Berkshire Hospital, Reading, UK (B.J.); Department of Plastic Surgery, Stoke Mandeville Hospital, Aylesbury, Buckinghamshire UK (D.C.); and UK EQUATOR Center, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford Centre for Statistics in Medicine, Oxford UK (G.S.C.)
- Conrad Harrison
- Terry-Ann Curran
- Benjamin Jones
- Alexander Freethy
- David Cussons
- Max Stewart
- Gary S Collins
- Dominic Furniss
37
Kang Y, Ren Z, Zhang Y, Zhang A, Xu W, Zhang G, Dong Q. Deep Scale-Variant Network for Femur Trochanteric Fracture Classification with HP Loss. J Healthc Eng 2022; 2022:1560438. [PMID: 35388324 PMCID: PMC8977323 DOI: 10.1155/2022/1560438]
Abstract
Achieving automatic classification of femur trochanteric fractures on an edge computing device is of great importance and value for remote diagnosis and treatment. Nevertheless, designing a highly accurate model for classifying 31A1/31A2/31A3 fractures from X-rays remains difficult because existing models fail to capture scale-variant and contextual information. As a result, this paper proposes a deep scale-variant (DSV) network with a hybrid and progressive (HP) loss function to aggregate more influential representations of the fracture regions. More specifically, the DSV network is based on ResNet and integrates the designed scale-variant (SV) layer and HP loss, where the SV layer enhances the ability to extract scale-variant features, and the HP loss forces the network to condense more contextual clues. Furthermore, to evaluate the effect of the proposed DSV network, we carried out a series of experiments on real X-ray images for comparison and evaluation, and the experimental results demonstrate that the proposed DSV network outperforms other classification methods on this task.
Affiliation(s)
- Yuxiang Kang
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
- Zhipeng Ren
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
- Yinguang Zhang
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
- Aiming Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Weizhe Xu
- School of Computer Science, The University of Manchester, M14 5TA, Manchester, UK
- Guokai Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Qiang Dong
- Department of Orthopaedics, Tianjin Hospital, Tianjin 300211, China
38
Bone and Soft Tissue Tumors. Radiol Clin North Am 2022; 60:339-358. [DOI: 10.1016/j.rcl.2021.11.011]
39
Rostad BS, Richer EJ, Riedesel EL, Alazraki AL. Esophageal discoid foreign body detection and classification using artificial intelligence. Pediatr Radiol 2022; 52:477-482. [PMID: 34850259 DOI: 10.1007/s00247-021-05240-3]
Abstract
BACKGROUND Early and accurate radiographic diagnosis is required for the management of children with radio-opaque esophageal foreign bodies. Button batteries are some of the most dangerous esophageal foreign bodies and coins are among the most common. We hypothesized that artificial intelligence could be used to triage radiographs with esophageal button batteries and coins. OBJECTIVE Our primary objective was to train an object detector to detect esophageal foreign bodies, whether button battery or coin. Our secondary objective was to train an image classifier to classify the detected foreign body as either a button battery or a coin. MATERIALS AND METHODS We trained an object detector to detect button batteries and coins. The training data set for the object detector was 57 radiographs, consisting of 3 groups of 19 images each with either an esophageal button battery, esophageal coin or no foreign body. The foreign bodies were endoscopically confirmed, and the groups were age and gender matched. We then trained an image classifier to classify the detected foreign body as either a button battery or a coin. The training data set for the image classifier consisted of 19 radiographs of button batteries and 19 of coins, cropped from the object detector training data set. The object detector and image classifier were then tested on 103 radiographs with an esophageal foreign body, and 103 radiographs without a foreign body. RESULTS The object detector was 100% sensitive and specific for detecting an esophageal foreign body. The image classifier accurately classified all 6/6 (100%) button batteries in the testing data set and 93/95 (97.9%) of the coins. The remaining two coins were incorrectly classified as button batteries. In addition to these images with a single button battery or coin, there were two unique cases in the testing data set: a stacked button battery and coin, and two stacked coins, both of which were classified as coins. 
CONCLUSION Artificial intelligence models show promise in detecting and classifying esophageal discoid foreign bodies and could potentially be used to triage radiographs for radiologist interpretation.
Affiliation(s)
- Bradley S Rostad
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1405 Clifton Rd. NE, Atlanta, GA, 30322, USA; Emory + Children's Pediatric Institute, Children's Healthcare of Atlanta, Atlanta, GA, USA
- Edward J Richer
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1405 Clifton Rd. NE, Atlanta, GA, 30322, USA; Emory + Children's Pediatric Institute, Children's Healthcare of Atlanta, Atlanta, GA, USA
- Erica L Riedesel
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1405 Clifton Rd. NE, Atlanta, GA, 30322, USA; Emory + Children's Pediatric Institute, Children's Healthcare of Atlanta, Atlanta, GA, USA
- Adina L Alazraki
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1405 Clifton Rd. NE, Atlanta, GA, 30322, USA; Emory + Children's Pediatric Institute, Children's Healthcare of Atlanta, Atlanta, GA, USA
40
Hardalaç F, Uysal F, Peker O, Çiçeklidağ M, Tolunay T, Tokgöz N, Kutbay U, Demirciler B, Mert F. Fracture Detection in Wrist X-ray Images Using Deep Learning-Based Object Detection Models. Sensors (Basel) 2022; 22:1285. [PMID: 35162030 PMCID: PMC8838335 DOI: 10.3390/s22031285]
Abstract
Hospitals, especially their emergency services, receive a high number of wrist fracture cases. For correct diagnosis and proper treatment, images obtained from various medical equipment must be viewed by physicians, along with the patient's medical records and physical examination. The aim of this study is to perform fracture detection by use of deep learning on wrist X-ray images to support physicians in the diagnosis of these fractures, particularly in the emergency services. Using SABL, RegNet, RetinaNet, PAA, Libra R-CNN, FSAF, Faster R-CNN, Dynamic R-CNN and DCN deep-learning-based object detection models with various backbones, 20 different fracture detection procedures were performed on Gazi University Hospital's dataset of wrist X-ray images. To further improve these procedures, five different ensemble models were developed and then combined into a unique detection model, 'wrist fracture detection-combo (WFD-C)'. Of the 26 models for fracture detection, the highest result obtained was 0.8639 average precision (AP50) with the WFD-C model. Huawei Turkey R&D Center supports this study within the scope of the ongoing cooperation project coded 071813 between Gazi University, Huawei and Medskor.
Affiliation(s)
- Fırat Hardalaç
- Department of Electrical and Electronics Engineering, Faculty of Engineering, Gazi University, Ankara TR 06570, Turkey
- Fatih Uysal
- Department of Electrical and Electronics Engineering, Faculty of Engineering, Gazi University, Ankara TR 06570, Turkey
- Department of Electrical and Electronics Engineering, Faculty of Engineering and Architecture, Kafkas University, Kars TR 36100, Turkey
- Ozan Peker
- Department of Electrical and Electronics Engineering, Faculty of Engineering, Gazi University, Ankara TR 06570, Turkey
- Murat Çiçeklidağ
- Department of Orthopaedics and Traumatology, Faculty of Medicine, Gazi University, Ankara TR 06570, Turkey
- Tolga Tolunay
- Department of Orthopaedics and Traumatology, Faculty of Medicine, Gazi University, Ankara TR 06570, Turkey
- Nil Tokgöz
- Department of Radiology, Faculty of Medicine, Gazi University, Ankara TR 06570, Turkey
- Uğurhan Kutbay
- Department of Electrical and Electronics Engineering, Faculty of Engineering, Gazi University, Ankara TR 06570, Turkey
- Boran Demirciler
- Huawei Turkey R&D Center, İstanbul TR 34768, Turkey
- Fatih Mert
- Huawei Turkey R&D Center, İstanbul TR 34768, Turkey
41
Ren M, Yi PH. Deep learning detection of subtle fractures using staged algorithms to mimic radiologist search pattern. Skeletal Radiol 2022; 51:345-353. [PMID: 33576861 DOI: 10.1007/s00256-021-03739-2]
Abstract
OBJECTIVE To develop and evaluate a two-stage deep convolutional neural network system that mimics a radiologist's search pattern for detecting two small fractures: triquetral avulsion fractures and Segond fractures. MATERIALS AND METHODS We obtained 231 lateral wrist radiographs and 173 anteroposterior knee radiographs from the Stanford MURA and LERA datasets and the public domain to train and validate a two-stage deep convolutional neural network system: (1) object detectors that crop the dorsal triquetrum or lateral tibial condyle, trained on control images, followed by (2) classifiers for triquetral and Segond fractures, trained on a 1:1 case:control split. A second set of classifiers was trained on uncropped images for comparison. External test sets of 50 lateral wrist radiographs and 24 anteroposterior knee radiographs were used to evaluate generalizability. Gradient-class activation mapping was used to inspect image regions of greater importance in deciding the final classification. RESULTS The object detectors accurately cropped the regions of interest in all validation and test images. The two-stage system achieved cross-validated area under the receiver operating characteristic curve values of 0.959 and 0.989 on triquetral and Segond fractures, compared with 0.860 (p = 0.0086) and 0.909 (p = 0.0074), respectively, for a one-stage classifier. Two-stage cross-validation accuracies were 90.8% and 92.5% for triquetral and Segond fractures, respectively. CONCLUSION A two-stage pipeline increases accuracy in the detection of subtle fractures on radiographs compared with a one-stage classifier and generalized well to external test data. Focusing attention on specific image regions appears to improve detection of subtle findings that may otherwise be missed.
Affiliation(s)
- Mark Ren
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; University of Maryland Intelligent Imaging Center, Department of Radiology, University of Maryland School of Medicine, Baltimore, MD, USA; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA
42
Suzuki T, Maki S, Yamazaki T, Wakita H, Toguchi Y, Horii M, Yamauchi T, Kawamura K, Aramomi M, Sugiyama H, Matsuura Y, Yamashita T, Orita S, Ohtori S. Detecting Distal Radial Fractures from Wrist Radiographs Using a Deep Convolutional Neural Network with an Accuracy Comparable to Hand Orthopedic Surgeons. J Digit Imaging 2022; 35:39-46. [PMID: 34913132 PMCID: PMC8854542 DOI: 10.1007/s10278-021-00519-1]
Abstract
In recent years, fracture image diagnosis using a convolutional neural network (CNN) has been reported. The purpose of the present study was to evaluate the ability of CNN to diagnose distal radius fractures (DRFs) using frontal and lateral wrist radiographs. We included 503 cases of DRF diagnosed by plain radiographs and 289 cases without fracture. We implemented the CNN model using Keras and Tensorflow. Frontal and lateral views of wrist radiographs were manually cropped and trained separately. Fine-tuning was performed using EfficientNets. The diagnostic ability of CNN was evaluated using 150 images with and without fractures from anteroposterior and lateral radiographs. The CNN model diagnosed DRF based on three views: frontal view, lateral view, and both frontal and lateral view. We determined the sensitivity, specificity, and accuracy of the CNN model, plotted a receiver operating characteristic (ROC) curve, and calculated the area under the ROC curve (AUC). We further compared performances between the CNN and three hand orthopedic surgeons. EfficientNet-B2 in the frontal view and EfficientNet-B4 in the lateral view showed highest accuracy on the validation dataset, and these models were used for combined views. The accuracy, sensitivity, and specificity of the CNN based on both anteroposterior and lateral radiographs were 99.3, 98.7, and 100, respectively. The accuracy of the CNN was equal to or better than that of three orthopedic surgeons. The AUC of the CNN on the combined views was 0.993. The CNN model exhibited high accuracy in the diagnosis of distal radius fracture with a plain radiograph.
Affiliation(s)
- Takeshi Suzuki
- Department of Orthopedic Surgery, Tonosho Hospital, Chiba, Japan
- Satoshi Maki
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan; Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
- Takahiro Yamazaki
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
- Hiromasa Wakita
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
- Yasunari Toguchi
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
- Manato Horii
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
- Tomonori Yamauchi
- Department of Orthopedic Surgery, Asahi General Hospital, Chiba, Japan
- Koui Kawamura
- Department of Orthopedic Surgery, Asahi General Hospital, Chiba, Japan
- Masaaki Aramomi
- Department of Orthopedic Surgery, Asahi General Hospital, Chiba, Japan
- Hiroshi Sugiyama
- Department of Orthopedic Surgery, Asahi General Hospital, Chiba, Japan
- Yusuke Matsuura
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
- Takeshi Yamashita
- Department of Orthopaedic Surgery, Oyumino Central Hospital, Chiba, Japan
- Sumihisa Orita
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan; Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
- Seiji Ohtori
- Department of Orthopedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuou-ku, Chiba, 260-8670, Japan
43
Nakatsu K, Rahman R, Morita K, Fujita D, Kobashi S. Automatic Carpal Site Detection Method for Evaluation of Rheumatoid Arthritis Using Deep Learning. J Adv Comput Intell Intell Inform 2022. [DOI: 10.20965/jaciii.2022.p0042]
Abstract
Approximately 600,000 to 1,000,000 patients are diagnosed with rheumatoid arthritis (RA) in Japan. To provide appropriate treatment, it is necessary to accurately measure the progression of RA by diagnosing the disease several times a year. The modified total Sharp score (mTSS) calculated from hand X-ray images is a standard diagnostic method for RA progression. However, this diagnostic method is time-consuming, as scores are rated at as many as 16 points per hand. Accordingly, to shorten the diagnosis time for RA patients and improve the quality of diagnosis, the development of computer-aided diagnosis (CAD) systems is expected. We have previously proposed a CAD system that can detect finger joint positions using a support vector machine and can estimate the mTSS using ridge regression. In this study, we propose a fully automatic method for detecting RA score evaluation points in the carpal site from plain hand X-ray images using deep learning. The proposed method first segments the carpal site using deep learning. Next, the RA evaluation points are automatically determined from each segment based on prior knowledge. Experimental results on X-ray images of the hands of 140 patients with RA showed that the mTSS evaluation point at the carpal site could be detected with an average error of 25 pixels. In the diagnosis of RA, automating the determination of diagnostic points can reduce the time physicians require.
44
Machine Learning in Medical Imaging – Clinical Applications and Challenges in Computer Vision. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_4]
45
Moore MM, Iyer RS, Sarwani NI, Sze RW. Artificial intelligence development in pediatric body magnetic resonance imaging: best ideas to adapt from adults. Pediatr Radiol 2022; 52:367-373. [PMID: 33851261 PMCID: PMC8043435 DOI: 10.1007/s00247-021-05072-1]
Abstract
Emerging manifestations of artificial intelligence (AI) have featured prominently in virtually all industries and facets of our lives. Within the radiology literature, AI has shown great promise in improving and augmenting radiologist workflow. In pediatric imaging, while greatest AI inroads have been made in musculoskeletal radiographs, there are certainly opportunities within thoracoabdominal MRI for AI to add significant value. In this paper, we briefly review non-interpretive and interpretive data science, with emphasis on potential avenues for advancement in pediatric body MRI based on similar work in adults. The discussion focuses on MRI image optimization, abdominal organ segmentation, and osseous lesion detection encountered during body MRI in children.
Affiliation(s)
- Michael M Moore: Department of Radiology, Penn State Children's Hospital, Penn State Health, 500 University Drive, H066, Hershey, PA, 17033, USA
- Ramesh S Iyer: Seattle Children's Hospital, University of Washington, Seattle, WA, USA
- Raymond W Sze: Children's Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
46
Janisch M, Apfaltrer G, Hržić F, Castellani C, Mittl B, Singer G, Lindbichler F, Pilhatsch A, Sorantin E, Tschauner S. Pediatric radius torus fractures in x-rays-how computer vision could render lateral projections obsolete. Front Pediatr 2022; 10:1005099. [PMID: 36589159] [PMCID: PMC9794847] [DOI: 10.3389/fped.2022.1005099]
Abstract
It is an indisputable dogma in extremity radiography to acquire x-ray studies in at least two complementary projections, and this also holds for distal radius fractures in children. However, there is cautious hope that computer vision could enable breaking with this tradition in minor injuries without clinical malalignment. We trained three state-of-the-art convolutional neural networks (CNNs) on a dataset of 2,474 images: 1,237 posteroanterior (PA) pediatric wrist radiographs containing isolated distal radius torus fractures, and 1,237 normal controls without fractures. The task was to classify images as fractured or non-fractured. In total, 200 previously unseen images (100 per class) served as the test set. CNN predictions reached areas under the curve (AUCs) of up to 98% [95% confidence interval (CI) 96.6%-99.5%], consistently exceeding human expert ratings (mean AUC 93.5%, 95% CI 89.9%-97.2%). After training on larger datasets, CNNs might be able to effectively rule out a distal radius fracture, making it possible to consider forgoing the currently obligatory lateral projection in children. Built into the radiography workflow, such an algorithm could contribute to radiation hygiene and patient comfort.
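An AUC figure like the one this study reports can be computed directly from raw classifier scores. Below is a minimal rank-based AUC (equivalent to the normalized Mann-Whitney U statistic) on toy labels and scores, not the study's data.

```python
import numpy as np

def auc(labels, scores):
    """Rank-based ROC AUC: the probability that a randomly chosen
    positive outscores a randomly chosen negative (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    pos = np.asarray(scores, dtype=float)[labels]
    neg = np.asarray(scores, dtype=float)[~labels]
    # Explicit pairwise comparison; fine for small arrays.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy fracture-vs-normal example (1 = torus fracture present).
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(y, s))
```

Here one positive (score 0.4) is outscored by one negative (score 0.5), so 8 of the 9 positive-negative pairs are ranked correctly and the AUC is 8/9.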
Affiliation(s)
- Michael Janisch: Department of Radiology, Division of Neuroradiology, Vascular and Interventional Radiology, Medical University of Graz, Graz, Austria
- Georg Apfaltrer: Department of Radiology, Division of Pediatric Radiology, Medical University of Graz, Graz, Austria
- Franko Hržić: Department of Computer Engineering, Center for Artificial Intelligence and Cybersecurity, University of Rijeka Faculty of Engineering, Rijeka, Croatia
- Christoph Castellani: Department of Paediatric and Adolescent Surgery, Medical University of Graz, Graz, Austria
- Barbara Mittl: Department of Paediatric and Adolescent Surgery, Medical University of Graz, Graz, Austria
- Georg Singer: Department of Paediatric and Adolescent Surgery, Medical University of Graz, Graz, Austria
- Franz Lindbichler: Department of Radiology, Division of Pediatric Radiology, Medical University of Graz, Graz, Austria
- Alexander Pilhatsch: Department of Radiology, Division of Pediatric Radiology, Medical University of Graz, Graz, Austria
- Erich Sorantin: Department of Radiology, Division of Pediatric Radiology, Medical University of Graz, Graz, Austria
- Sebastian Tschauner: Department of Radiology, Division of Pediatric Radiology, Medical University of Graz, Graz, Austria
47
Yao L, Guan X, Song X, Tan Y, Wang C, Jin C, Chen M, Wang H, Zhang M. Rib fracture detection system based on deep learning. Sci Rep 2021; 11:23513. [PMID: 34873241] [PMCID: PMC8648839] [DOI: 10.1038/s41598-021-03002-7]
Abstract
Rib fracture detection is time-consuming and demanding work for radiologists. This study aimed to introduce a novel deep-learning-based rib fracture detection system that can help radiologists diagnose rib fractures in chest computed tomography (CT) images conveniently and accurately. A total of 1707 patients from a single center were included in this study. We developed a novel rib fracture detection system on chest CT using a three-step algorithm. According to examination time, 1507, 100 and 100 patients were allocated to the training, validation and testing sets, respectively. Free-response ROC analysis was performed to evaluate the sensitivity and false positive rate of the deep learning algorithm. Precision, recall, F1-score and negative predictive value (NPV) were selected as evaluation metrics to compare the diagnostic efficiency of this system with that of radiologists. The radiologist-only study served as a benchmark, and the radiologist-model collaboration study was evaluated to assess the model's clinical applicability. A total of 50,170,399 blocks (fracture blocks, 91,574; normal blocks, 50,078,825) were labelled for training. The F1-score of the Rib Fracture Detection System was 0.890, and the precision, recall and NPV values were 0.869, 0.913 and 0.969, respectively. By interacting with this detection system, the F1-scores of the junior and the experienced radiologists improved from 0.796 to 0.925 and from 0.889 to 0.970, respectively; their recall scores increased from 0.693 to 0.920 and from 0.853 to 0.972, respectively. On average, the diagnosis time of radiologists assisted by this detection system was reduced by 65.3 s. The constructed Rib Fracture Detection System performs comparably to an experienced radiologist and can automatically detect rib fractures in the clinical setting with high efficacy, which could reduce diagnosis time and radiologists' workload in clinical practice.
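The evaluation metrics this study reports (precision, recall, F1, NPV) all derive from the same four confusion-matrix counts. A minimal sketch with illustrative counts only, not the study's block-level data:

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 and NPV from confusion-matrix counts."""
    precision = tp / (tp + fp)              # PPV: how many flags are real
    recall = tp / (tp + fn)                 # sensitivity: how many real ones found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    npv = tn / (tn + fn)                    # trust in a negative call
    return precision, recall, f1, npv

# Illustrative counts (hypothetical, not the paper's 91,574 fracture blocks).
p, r, f1, npv = detection_metrics(tp=90, fp=10, fn=10, tn=890)
print(round(p, 3), round(r, 3), round(f1, 3), round(npv, 3))
```

With these counts precision, recall and F1 all come to 0.9, while NPV is higher (890/900) because negatives dominate; the same asymmetry explains the high NPV in the study's heavily imbalanced block data.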
Affiliation(s)
- Liding Yao: Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, No.88 Jiefang Road, Shangcheng District, Hangzhou, 310009, Zhejiang, China
- Xiaojun Guan: Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, No.88 Jiefang Road, Shangcheng District, Hangzhou, 310009, Zhejiang, China
- Xiaowei Song: Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, No.88 Jiefang Road, Shangcheng District, Hangzhou, 310009, Zhejiang, China
- Yanbin Tan: Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, No.88 Jiefang Road, Shangcheng District, Hangzhou, 310009, Zhejiang, China
- Chun Wang: Hithink RoyalFlush Information Network Co., Ltd, No. 18 Tongshun Street, Yuhang District, Hangzhou, 310012, Zhejiang, China
- Chaohui Jin: Hithink RoyalFlush Information Network Co., Ltd, No. 18 Tongshun Street, Yuhang District, Hangzhou, 310012, Zhejiang, China
- Ming Chen: Hithink RoyalFlush Information Network Co., Ltd, No. 18 Tongshun Street, Yuhang District, Hangzhou, 310012, Zhejiang, China
- Huogen Wang: Hithink RoyalFlush Information Network Co., Ltd, No. 18 Tongshun Street, Yuhang District, Hangzhou, 310012, Zhejiang, China
- Minming Zhang: Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, No.88 Jiefang Road, Shangcheng District, Hangzhou, 310009, Zhejiang, China
48
Wei D, Wu Q, Wang X, Tian M, Li B. Accurate Instance Segmentation in Pediatric Elbow Radiographs. Sensors (Basel) 2021; 21:7966. [PMID: 34883969] [PMCID: PMC8659701] [DOI: 10.3390/s21237966]
Abstract
Radiography is an essential basis for the diagnosis of fractures. For pediatric elbow joint diagnosis, the doctor needs to identify abnormalities based on the location and shape of each bone, which is a great challenge for AI algorithms interpreting radiographs. Bone instance segmentation is an effective upstream task for automatic radiograph interpretation: each bone is extracted separately from the radiograph. However, the arbitrary orientations and the overlapping of bones pose issues for bone instance segmentation. In this paper, we design a detection-segmentation pipeline that tackles these problems by using rotational bounding boxes to detect bones and by proposing a robust segmentation method. The proposed pipeline contains three main parts: (i) we use a Faster R-CNN-style architecture to detect and locate bones; (ii) we adopt the Oriented Bounding Box (OBB) to improve localization accuracy; (iii) we design the Global-Local Fusion Segmentation Network to combine the global and local contexts of overlapping bones. To verify the effectiveness of our proposal, we conduct experiments on our self-constructed dataset of 1274 well-annotated pediatric elbow radiographs. The qualitative and quantitative results indicate that the network significantly improves the performance of bone extraction. Our methodology has good potential for applying deep learning to bone instance segmentation in radiographs.
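The oriented bounding boxes the pipeline relies on are usually parameterized as (center, width, height, angle) and expanded to corner points via a rotation matrix. A generic sketch of that conversion, not the authors' implementation:

```python
import math

def obb_corners(cx, cy, w, h, angle_rad):
    """Corner coordinates of an oriented bounding box, rotated
    counter-clockwise about its center by angle_rad."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    # Corners relative to the box center, before rotation.
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Apply the 2D rotation matrix, then translate back to (cx, cy).
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

# Axis-aligned case (angle 0) degenerates to the usual box corners.
corners = obb_corners(0, 0, 4, 2, 0.0)
print(corners)
```

Because the angle is a free parameter, a box can hug an obliquely oriented bone tightly, which is what improves localization over axis-aligned boxes for elbows imaged in arbitrary orientations.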
Affiliation(s)
- Qiongshui Wu: Electronic Information School, Wuhan University, Wuhan 430072, China; (D.W.); (X.W.); (M.T.); (B.L.)
49
Filice RW, Kahn CE. Biomedical Ontologies to Guide AI Development in Radiology. J Digit Imaging 2021; 34:1331-1341. [PMID: 34724143] [PMCID: PMC8669056] [DOI: 10.1007/s10278-021-00527-1]
Abstract
The advent of deep learning has engendered renewed and rapidly growing interest in artificial intelligence (AI) in radiology to analyze images, manipulate textual reports, and plan interventions. Applications of deep learning and other AI approaches must be guided by sound medical knowledge to assure that they are developed successfully and that they address important problems in biomedical research or patient care. To date, AI has been applied to a limited number of real-world radiology applications. As AI systems become more pervasive and are applied more broadly, they will benefit from medical knowledge on a larger scale, such as that available through computer-based approaches. A key approach to represent computer-based knowledge in a particular domain is an ontology. As defined in informatics, an ontology defines a domain's terms through their relationships with other terms in the ontology. Those relationships, then, define the terms' semantics, or "meaning." Biomedical ontologies commonly define the relationships between terms and more general terms, and can express causal, part-whole, and anatomic relationships. Ontologies express knowledge in a form that is both human-readable and machine-computable. Some ontologies, such as RSNA's RadLex radiology lexicon, have been used in clinical practice and research, and may be familiar to many radiologists. This article describes how ontologies can support research and guide emerging applications of AI in radiology, including natural language processing, image-based machine learning, radiomics, and planning.
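The subsumption ("is-a") relationships an ontology encodes can be queried by walking the term hierarchy upward. The tiny anatomy hierarchy below is a made-up illustration, not actual RadLex content, and the `ancestors` helper is a hypothetical name:

```python
# Hypothetical mini-hierarchy mapping each term to its more general parent.
IS_A = {
    "distal radius": "radius",
    "radius": "forearm bone",
    "forearm bone": "long bone",
    "long bone": "bone",
}

def ancestors(term, hierarchy=IS_A):
    """All more-general terms reachable from `term` via is-a links,
    ordered from most specific to most general."""
    out = []
    while term in hierarchy:
        term = hierarchy[term]
        out.append(term)
    return out

print(ancestors("distal radius"))
```

This kind of transitive closure is what lets an AI system trained to flag "distal radius" findings also satisfy a query for the broader concept "bone", one of the machine-computable benefits the article describes.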
Affiliation(s)
- Ross W Filice: Department of Radiology, MedStar Georgetown University Hospital, Washington, DC, USA
- Charles E Kahn: Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA
50
Abstract
We present an overview of current clinical musculoskeletal imaging applications for artificial intelligence, as well as potential future applications and techniques.