1
Li L, Wong MS. The application of machine learning methods for predicting the progression of adolescent idiopathic scoliosis: a systematic review. Biomed Eng Online 2024; 23:80. PMID: 39118179; PMCID: PMC11308564; DOI: 10.1186/s12938-024-01272-6.
Abstract
Predicting curve progression at the initial visit is pivotal in the disease management of patients with adolescent idiopathic scoliosis (AIS); identifying patients at high risk of progression is essential for timely and proactive intervention. Both radiological and clinical factors have been investigated as predictors of curve progression. With the evolution of machine learning technologies, the integration of multidimensional information now enables precise predictions of curve progression. This review focuses on the application of machine learning methods to predict AIS curve progression, analyzing 15 selected studies with respect to the machine learning models used and the risk factors employed for prediction. Key findings indicate that machine learning models can predict progression more precisely than traditional methods, and their implementation could enable more personalized patient management. However, owing to limited model interpretability and the complexity of the required data, more comprehensive, multi-center studies are needed before these methods can transition from research to clinical practice.
Affiliation(s)
- Lening Li
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China.
- Man-Sang Wong
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
2
Polzer C, Yilmaz E, Meyer C, Jang H, Jansen O, Lorenz C, Bürger C, Glüer CC, Sedaghat S. AI-based automated detection and stability analysis of traumatic vertebral body fractures on computed tomography. Eur J Radiol 2024; 173:111364. PMID: 38364589; DOI: 10.1016/j.ejrad.2024.111364.
Abstract
PURPOSE: We developed and tested a neural network for automated detection and stability analysis of vertebral body fractures on computed tomography (CT).
MATERIALS AND METHODS: 257 patients who underwent CT were included in this Institutional Review Board (IRB)-approved study. 463 fractured and 1883 non-fractured vertebral bodies were included, of which 190 fractures were unstable. Two readers identified vertebral body fractures and assessed their stability. A combination of a hierarchical convolutional neural network (hNet) and a fracture classification network (fNet) was used to build a neural network for the automated detection and stability analysis of vertebral body fractures on CT. Two final test settings were chosen: one including vertebral levels C1/2 and one excluding them.
RESULTS: The mean age of the patients was 68 ± 14 years; 140 patients were female. The network showed slightly higher diagnostic performance when C1/2 was excluded. In that setting, it distinguished fractured from non-fractured vertebral bodies with a sensitivity of 75.8% and a specificity of 80.3%, and determined the stability of the vertebral bodies with a sensitivity of 88.4% and a specificity of 80.3%. The AUC was 87% for fracture detection and 91% for stability analysis. The sensitivity of the network in indicating the presence of at least one fracture / one unstable fracture within the whole spine reached 78.7% and 97.2%, respectively, when C1/2 was excluded.
CONCLUSION: The developed neural network can automatically detect vertebral body fractures and concurrently evaluate their stability with high diagnostic performance.
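As a minimal sketch, the reported per-vertebra sensitivity and specificity can be related to confusion-matrix counts; the counts below are approximations back-calculated from the abstract's totals (463 fractured, 1883 non-fractured) and rates, not the study's raw data.

```python
# Sensitivity/specificity from binary confusion-matrix counts, as reported
# for fracture detection. The counts are illustrative values back-calculated
# from the abstract's cohort sizes and rates.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for binary detection counts."""
    sensitivity = tp / (tp + fn)  # fraction of fractured vertebrae flagged
    specificity = tn / (tn + fp)  # fraction of intact vertebrae passed
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=351, fn=112, tn=1512, fp=371)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")  # ~75.8% and ~80.3%
```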
Affiliation(s)
- Constanze Polzer
- Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Eren Yilmaz
- Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany; Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany
- Carsten Meyer
- Department of Computer Science, Ostfalia University of Applied Sciences, Wolfenbüttel, Germany; Department of Computer Science, Faculty of Engineering, Kiel University, Kiel, Germany
- Hyungseok Jang
- Department of Radiology, University of California San Diego, San Diego, USA
- Olav Jansen
- Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Claus-Christian Glüer
- Section Biomedical Imaging, Department of Radiology and Neuroradiology, University Hospital Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Sam Sedaghat
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany.
3
Bharadwaj UU, Chin CT, Majumdar S. Practical Applications of Artificial Intelligence in Spine Imaging: A Review. Radiol Clin North Am 2024; 62:355-370. PMID: 38272627; DOI: 10.1016/j.rcl.2023.10.005.
Abstract
Artificial intelligence (AI), a transformative technology with unprecedented potential in medical imaging, can be applied to various spinal pathologies. AI-based approaches may improve imaging efficiency, diagnostic accuracy, and interpretation, all of which are essential for positive patient outcomes. This review explores AI algorithms, techniques, and applications in spine imaging, highlighting their diagnostic impact and current challenges, and outlining future directions for integrating AI into the spine imaging workflow.
Affiliation(s)
- Upasana Upadhyay Bharadwaj
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
- Cynthia T Chin
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Avenue, Box 0628, San Francisco, CA 94143, USA.
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 1700 4th Street, Byers Hall, Suite 203, Room 203D, San Francisco, CA 94158, USA
4
Chen Y, Mo Y, Readie A, Ligozio G, Mandal I, Jabbar F, Coroller T, Papież BW. VertXNet: an ensemble method for vertebral body segmentation and identification from cervical and lumbar spinal X-rays. Sci Rep 2024; 14:3341. PMID: 38336974; PMCID: PMC10858234; DOI: 10.1038/s41598-023-49923-3.
Abstract
Accurate annotation of vertebral bodies is crucial for automating the analysis of spinal X-ray images. However, manual annotation of these structures is a laborious and costly process because of their complex nature, including small sizes and varying shapes. To address this challenge and expedite the annotation process, we propose an ensemble pipeline called VertXNet. This pipeline currently combines two segmentation mechanisms, semantic segmentation with U-Net and instance segmentation with Mask R-CNN, to automatically segment and label vertebral bodies in lateral cervical and lumbar spinal X-ray images. VertXNet adopts a rule-based strategy (termed the ensemble rule) for effectively combining the segmentation outcomes of U-Net and Mask R-CNN. It determines vertebral body labels by recognizing specific reference vertebrae, such as cervical vertebra 2 ('C2') in cervical spine X-rays and sacral vertebra 1 ('S1') in lumbar spine X-rays; these references are relatively easy to identify at the edge of the spine. To assess the performance of the proposed pipeline, we conducted evaluations on three spinal X-ray datasets, two in-house and one publicly available, with ground truth annotations provided by radiologists. The pipeline outperformed two state-of-the-art (SOTA) segmentation models on our test dataset, with a mean Dice of 0.90 versus 0.73 for Mask R-CNN and 0.72 for U-Net. We also demonstrated that VertXNet is a modular pipeline in which other SOTA models, such as nnU-Net, can be substituted to further improve performance. Furthermore, to evaluate the generalization ability of VertXNet on spinal X-rays, we directly tested the pre-trained pipeline on two additional datasets and observed consistently strong performance, with mean Dice coefficients of 0.89 and 0.88, respectively.
In summary, VertXNet demonstrated significantly improved performance in vertebral body segmentation and labeling for spinal X-ray imaging. Its robustness and generalization were demonstrated on both in-house clinical trial data and publicly available datasets.
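The reference-based labeling step can be illustrated with a small sketch. The counting logic below is a hypothetical reconstruction of the ensemble rule described in the abstract, assuming segmented instances are ordered cranial to caudal; it is not VertXNet's actual rule set.

```python
# Hypothetical sketch of reference-based vertebra labeling: once one
# reference vertebra is recognized (C2 on cervical views, S1 on lumbar
# views), the remaining instances are named by counting from it.

CERVICAL = [f"C{i}" for i in range(1, 8)]             # C1..C7
LUMBAR = [f"L{i}" for i in range(1, 6)] + ["S1"]      # L1..L5, S1

def label_from_reference(n_instances, ref_name, ref_index, view):
    """Name n_instances vertebrae (ordered cranial to caudal), given that
    instance ref_index was identified as ref_name."""
    order = CERVICAL if view == "cervical" else LUMBAR
    offset = order.index(ref_name) - ref_index        # name-list slot of instance 0
    return [order[offset + i] for i in range(n_instances)]

# Five instances on a lateral cervical X-ray, the second recognized as C2:
print(label_from_reference(5, "C2", 1, "cervical"))   # ['C1', 'C2', 'C3', 'C4', 'C5']
```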
Affiliation(s)
- Yao Chen
- Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA
- Yuanhan Mo
- Big Data Institute, University of Oxford, Oxford, UK
- Aimee Readie
- Novartis Pharmaceuticals Corporation, East Hanover, NJ, USA
- Indrajeet Mandal
- John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- Faiz Jabbar
- John Radcliffe Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
5
Sheng T, Mathai TS, Shieh A, Summers RM. Weakly-Supervised Detection of Bone Lesions in CT. Proc SPIE Int Soc Opt Eng 2024; 12927:129270Q. PMID: 38974478; PMCID: PMC11225794; DOI: 10.1117/12.3008823.
Abstract
The skeletal region is one of the common sites of metastatic spread of breast and prostate cancer. CT is routinely used to measure the size of lesions in the bones. However, such lesions can be difficult to spot due to the wide variations in their sizes, shapes, and appearances. Precise localization would enable reliable tracking of interval changes (growth, shrinkage, or unchanged status), so an automated technique to detect bone lesions is highly desirable. In this pilot work, we developed a pipeline to detect bone lesions (lytic, blastic, and mixed) in CT volumes via a proxy segmentation task. First, we took the bone lesions that were prospectively marked by radiologists in a few 2D slices of CT volumes and converted them into weak 3D segmentation masks. Then, we trained a 3D full-resolution nnU-Net model on these weak 3D annotations to segment, and thereby detect, the lesions. Despite the incomplete and partial training data, our automated method detected bone lesions in CT with a precision of 96.7% and a recall of 47.3%. To the best of our knowledge, we are the first to attempt the direct detection of bone lesions in CT via a proxy segmentation task.
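Turning sparse 2D marks into weak 3D training masks might look like the following sketch; the fixed slab padding is an assumption for illustration, not the paper's exact conversion procedure.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): sparse 2D lesion marks
# are expanded into a weak 3D mask by copying each annotated axial slice to
# a few neighbouring slices.

def weak_3d_mask(shape, slice_marks, pad=2):
    """shape: (Z, Y, X) of the CT volume; slice_marks: {z: (Y, X) binary mask}."""
    vol = np.zeros(shape, dtype=np.uint8)
    for z, mask2d in slice_marks.items():
        lo, hi = max(0, z - pad), min(shape[0], z + pad + 1)
        vol[lo:hi] |= mask2d.astype(np.uint8)  # broadcast the 2D mask over the slab
    return vol

mark = np.zeros((4, 4), dtype=np.uint8)
mark[1, 2] = 1                        # one marked lesion pixel on slice 5
vol = weak_3d_mask((10, 4, 4), {5: mark})
print(int(vol.sum()))                 # 5 voxels: slices 3..7 inherit the mark
```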
Affiliation(s)
- Tao Sheng
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
- Tejas Sudharshan Mathai
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
- Alexander Shieh
- Departments of Interventional Radiology and Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
6
Sheng T, Mathai TS, Shieh A, Summers RM. Weakly-Supervised Detection of Bone Lesions in CT. arXiv 2024:arXiv:2402.00175v1 [preprint]. PMID: 38529078; PMCID: PMC10962744.
Abstract
The skeletal region is one of the common sites of metastatic spread of breast and prostate cancer. CT is routinely used to measure the size of lesions in the bones. However, such lesions can be difficult to spot due to the wide variations in their sizes, shapes, and appearances. Precise localization would enable reliable tracking of interval changes (growth, shrinkage, or unchanged status), so an automated technique to detect bone lesions is highly desirable. In this pilot work, we developed a pipeline to detect bone lesions (lytic, blastic, and mixed) in CT volumes via a proxy segmentation task. First, we took the bone lesions that were prospectively marked by radiologists in a few 2D slices of CT volumes and converted them into weak 3D segmentation masks. Then, we trained a 3D full-resolution nnU-Net model on these weak 3D annotations to segment, and thereby detect, the lesions. Despite the incomplete and partial training data, our automated method detected bone lesions in CT with a precision of 96.7% and a recall of 47.3%. To the best of our knowledge, we are the first to attempt the direct detection of bone lesions in CT via a proxy segmentation task.
Affiliation(s)
- Tao Sheng
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
- Tejas Sudharshan Mathai
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
- Alexander Shieh
- Departments of Interventional Radiology and Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, USA
7
Maki S, Furuya T, Inoue M, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Machine Learning and Deep Learning in Spinal Injury: A Narrative Review of Algorithms in Diagnosis and Prognosis. J Clin Med 2024; 13:705. PMID: 38337399; PMCID: PMC10856760; DOI: 10.3390/jcm13030705.
Abstract
Spinal injuries, including cervical and thoracolumbar fractures, continue to be a major public health concern. Recent advancements in machine learning and deep learning technologies offer exciting prospects for improving both diagnostic and prognostic approaches in spinal injury care. This narrative review systematically explores the practical utility of these computational methods, with a focus on their application in imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI), as well as in structured clinical data. Of the 39 studies included, 34 were focused on diagnostic applications, chiefly using deep learning to carry out tasks like vertebral fracture identification, differentiation between benign and malignant fractures, and AO fracture classification. The remaining five were prognostic, using machine learning to analyze parameters for predicting outcomes such as vertebral collapse and future fracture risk. This review highlights the potential benefit of machine learning and deep learning in spinal injury care, especially their roles in enhancing diagnostic capabilities, detailed fracture characterization, risk assessments, and individualized treatment planning.
Affiliation(s)
- Satoshi Maki
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
- Takeo Furuya
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Masahiro Inoue
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Yasuhiro Shiga
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Kazuhide Inage
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Yawara Eguchi
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Sumihisa Orita
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
- Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
- Seiji Ohtori
- Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, Chiba 260-8670, Japan
8
Sebro R, De la Garza-Ramos C, Peterson JJ. Detecting whether L1 or other lumbar levels would be excluded from DXA bone mineral density analysis during opportunistic CT screening for osteoporosis using machine learning. Int J Comput Assist Radiol Surg 2023; 18:2261-2272. PMID: 37219803; DOI: 10.1007/s11548-023-02910-5.
Abstract
PURPOSE: One or more vertebrae are sometimes excluded from dual-energy X-ray absorptiometry (DXA) analysis when their bone mineral density (BMD) T-score estimates are not consistent with those of the other lumbar vertebrae. The goal of this study was to build a machine learning framework to identify which vertebrae would be excluded from DXA analysis based on the computed tomography (CT) attenuation of the vertebrae.
METHODS: Retrospective review of 995 patients (69.0% female) aged 50 years or older with CT scans of the abdomen/pelvis and DXA within 1 year of each other. Volumetric semi-automated segmentation of each vertebral body was performed using 3D Slicer to obtain the CT attenuation of each vertebra, and radiomic features based on the CT attenuation of the lumbar vertebrae were created. The data were randomly split into training/validation (90%) and test (10%) datasets. We used two multivariate machine learning models, a support vector machine (SVM) and a neural net (NN), to predict which vertebra(e) were excluded from DXA analysis.
RESULTS: L1, L2, L3, and L4 were excluded from DXA in 8.7% (87/995), 9.9% (99/995), 32.3% (321/995), and 42.6% (424/995) of patients, respectively. In the test dataset, the SVM had a higher area under the curve (AUC = 0.803) than the NN (AUC = 0.589) for predicting whether L1 would be excluded from DXA analysis (P = 0.015). The SVM was also better than the NN for predicting exclusion of L2 (AUC = 0.757 vs. 0.478), L3 (AUC = 0.699 vs. 0.555), and L4 (AUC = 0.751 vs. 0.639).
CONCLUSIONS: Machine learning algorithms could be used to identify which lumbar vertebrae would be excluded from DXA analysis and should therefore not be used for opportunistic CT screening, with the SVM outperforming the NN at this task.
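The AUC values used above to compare the SVM and NN have a direct probabilistic reading: AUC is the probability that a randomly chosen positive case (a vertebra excluded from DXA) receives a higher predicted score than a randomly chosen negative case (a vertebra included). A minimal sketch of this Mann-Whitney formulation, with invented scores:

```python
# Exact pairwise AUC via the Mann-Whitney formulation; ties count as half a
# win. The score lists below are invented for illustration only.

def auc(pos_scores, neg_scores):
    """Probability that a positive case outranks a negative case."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # ≈ 0.889
```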
Affiliation(s)
- Ronnie Sebro
- Department of Radiology, Mayo Clinic, Jacksonville, FL, 32224, USA.
- Center for Augmented Intelligence, Mayo Clinic, Jacksonville, FL, 32224, USA.
9
Ibanez V, Jucker D, Ebert LC, Franckenberg S, Dobay A. Classification of rib fracture types from postmortem computed tomography images using deep learning. Forensic Sci Med Pathol 2023. PMID: 37968549; DOI: 10.1007/s12024-023-00751-x.
Abstract
Human and time resources in medical image diagnostics are sometimes insufficient, and analyzing images in full detail can be a challenging task. With recent advances in artificial intelligence, an increasing number of systems have been developed to assist clinicians in their work. In this study, the objective was to train a model that can distinguish between various fracture types at different levels of a hierarchical taxonomy and detect them on 2D-image representations of volumetric postmortem computed tomography (PMCT) data. We used a deep learning model based on the ResNet50 architecture, pretrained on ImageNet, and applied transfer learning to fine-tune it to our specific task. The model was trained to distinguish between "displaced," "nondisplaced," "ad latus," "ad longitudinem cum contractione," and "ad longitudinem cum distractione" fractures. Radiographs with no fractures were correctly predicted in 95-99% of cases, and nondisplaced fractures in 80-86% of cases. Among displaced fractures, the "ad latus" type was correctly predicted in only 17-18% of cases, whereas "ad longitudinem cum contractione" and "ad longitudinem cum distractione" fractures were correctly predicted in 70-75% and 64-75% of cases, respectively. The model performed best at the higher levels of the hierarchical taxonomy and had more difficulty at the lower levels. Overall, deep learning techniques constitute a reliable solution for forensic pathologists and medical practitioners seeking to reduce workload.
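The hierarchical taxonomy named in the abstract (displaced vs. nondisplaced at the top, with displaced subdivided into "ad latus" and the two "ad longitudinem" subtypes) can be modeled as a simple parent map. A hypothetical sketch, which also shows why accuracy tends to be higher when fine predictions are collapsed to a coarser level:

```python
# Hypothetical sketch of the fracture taxonomy from the abstract: fine labels
# map to a coarser level, so a model scored at the top level gets credit
# whenever it picks any subtype of the correct branch.

PARENT = {
    "ad latus": "displaced",
    "ad longitudinem cum contractione": "displaced",
    "ad longitudinem cum distractione": "displaced",
    "nondisplaced": "nondisplaced",
    "no fracture": "no fracture",
}

def accuracy_at_level(pairs, coarse=False):
    """pairs: (true_label, predicted_label) tuples at the fine level."""
    hits = sum((PARENT[t] == PARENT[p]) if coarse else (t == p) for t, p in pairs)
    return hits / len(pairs)

pairs = [
    ("ad latus", "ad longitudinem cum contractione"),  # wrong subtype, right branch
    ("nondisplaced", "nondisplaced"),
    ("no fracture", "no fracture"),
]
print(accuracy_at_level(pairs), accuracy_at_level(pairs, coarse=True))
```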
Affiliation(s)
- Victor Ibanez
- Forensic Machine Learning Technology Center, Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Dario Jucker
- Zurich Institute of Forensic Medicine, 3D Centre Zurich, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Lars C Ebert
- Zurich Institute of Forensic Medicine, 3D Centre Zurich, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Sabine Franckenberg
- Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Zurich Institute of Forensic Medicine, 3D Centre Zurich, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Akos Dobay
- Forensic Machine Learning Technology Center, Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland.
10
Foley D, Hardacker P, McCarthy M. Emerging Technologies within Spine Surgery. Life (Basel) 2023; 13:2028. PMID: 37895410; PMCID: PMC10608700; DOI: 10.3390/life13102028.
Abstract
New innovations within spine surgery continue to propel the field forward. These technologies improve surgeons' understanding of their patients and allow them to optimize treatment planning both in the operating room and in the clinic. Implants and surgeon practice habits also continue to evolve in response to emerging biomaterials and device designs. With ongoing advancements, patients can expect enhanced preoperative decision-making, improved outcomes, and better intraoperative execution, and these changes may decrease many of the most common complications following spine surgery, reducing morbidity, mortality, and the need for reoperation. This article reviews some of these technological advancements and how they are projected to impact the field. As the field continues to advance, it is vital that practitioners remain knowledgeable of these changes in order to provide the most effective treatment possible.
Affiliation(s)
- David Foley
- Department of Orthopaedic Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Pierce Hardacker
- Indiana University School of Medicine, Indianapolis, IN 46202, USA;
11
Mervak BM, Fried JG, Wasnik AP. A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging. Diagnostics (Basel) 2023; 13:2889. PMID: 37761253; PMCID: PMC10529018; DOI: 10.3390/diagnostics13182889.
Abstract
Artificial intelligence (AI) has been a topic of substantial interest for radiologists in recent years. Although many of the first clinical applications were in the neuro, cardiothoracic, and breast imaging subspecialties, the number of investigated and real-world applications in body imaging has been increasing, with more than 30 FDA-approved algorithms now available for the abdomen and pelvis. In this manuscript, we explore some of the fundamentals of artificial intelligence and machine learning, review the major functions that AI algorithms may perform, introduce current and potential future applications of AI in abdominal imaging, provide a basic understanding of the pathways by which AI algorithms can receive FDA approval, and explore some of the challenges of implementing AI in clinical practice.
Affiliation(s)
- Ashish P. Wasnik
- Department of Radiology, University of Michigan—Michigan Medicine, 1500 E. Medical Center Dr., Ann Arbor, MI 48109, USA; (B.M.M.); (J.G.F.)
12
Demehri S, Baffour FI, Klein JG, Ghotbi E, Ibad HA, Moradi K, Taguchi K, Fritz J, Carrino JA, Guermazi A, Fishman EK, Zbijewski WB. Musculoskeletal CT Imaging: State-of-the-Art Advancements and Future Directions. Radiology 2023; 308:e230344. PMID: 37606571; PMCID: PMC10477515; DOI: 10.1148/radiol.230344.
Abstract
CT is one of the most widely used modalities for musculoskeletal imaging. Recent advancements in the field include the introduction of four-dimensional CT, which captures a CT image during motion; cone-beam CT, which uses flat-panel detectors to capture the lower extremities in weight-bearing mode; and dual-energy CT, which operates at two different x-ray potentials to improve the contrast resolution to facilitate the assessment of tissue material compositions such as tophaceous gout deposits and bone marrow edema. Most recently, photon-counting CT (PCCT) has been introduced. PCCT is a technique that uses photon-counting detectors to produce an image with higher spatial and contrast resolution than conventional multidetector CT systems. In addition, postprocessing techniques such as three-dimensional printing and cinematic rendering have used CT data to improve the generation of both physical and digital anatomic models. Last, advancements in the application of artificial intelligence to CT imaging have enabled the automatic evaluation of musculoskeletal pathologies. In this review, the authors discuss the current state of the above CT technologies, their respective advantages and disadvantages, and their projected future directions for various musculoskeletal applications.
Affiliation(s)
- Shadpour Demehri
- From the Russell H. Morgan Department of Radiology and Radiological
Science (S.D., J.G.K., E.G., H.A.I., K.M., K.T., E.K.F.) and Department of
Biomedical Engineering (W.B.Z.), Johns Hopkins University School of Medicine,
601 N Carolina St, Baltimore, MD 21287; Division of Musculoskeletal Imaging,
Department of Radiology, Mayo Clinic, Rochester, Minn (F.I.B.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.);
Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY
(J.A.C.); and Department of Radiology, Quantitative Imaging Center, Boston
University School of Medicine, Boston, Mass (A.G.)
| | - Francis I. Baffour
- From the Russell H. Morgan Department of Radiology and Radiological
Science (S.D., J.G.K., E.G., H.A.I., K.M., K.T., E.K.F.) and Department of
Biomedical Engineering (W.B.Z.), Johns Hopkins University School of Medicine,
601 N Carolina St, Baltimore, MD 21287; Division of Musculoskeletal Imaging,
Department of Radiology, Mayo Clinic, Rochester, Minn (F.I.B.); Department of
Radiology, New York University Grossman School of Medicine, New York, NY (J.F.);
Department of Radiology and Imaging, Hospital for Special Surgery, New York, NY
(J.A.C.); and Department of Radiology, Quantitative Imaging Center, Boston
University School of Medicine, Boston, Mass (A.G.)
| | - Joshua G. Klein
| | - Elena Ghotbi
| | - Hamza Ahmed Ibad
| | - Kamyar Moradi
| | - Katsuyuki Taguchi
| | - Jan Fritz
| | - John A. Carrino
| | - Ali Guermazi
| | - Elliot K. Fishman
| | - Wojciech B. Zbijewski
| |
Collapse
|
13
|
Dreizin D, Staziaki PV, Khatri GD, Beckmann NM, Feng Z, Liang Y, Delproposto ZS, Klug M, Spann JS, Sarkar N, Fu Y. Artificial intelligence CAD tools in trauma imaging: a scoping review from the American Society of Emergency Radiology (ASER) AI/ML Expert Panel. Emerg Radiol 2023; 30:251-265. [PMID: 36917287 PMCID: PMC10640925 DOI: 10.1007/s10140-023-02120-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Accepted: 02/27/2023] [Indexed: 03/16/2023]
Abstract
BACKGROUND AI/ML CAD tools can potentially improve outcomes in the high-stakes, high-volume model of trauma radiology. No prior scoping review has been undertaken to comprehensively assess tools in this subspecialty. PURPOSE To map the evolution and current state of trauma radiology CAD tools along key dimensions of technology readiness. METHODS Following a search of databases, abstract screening, and full-text document review, CAD tool maturity was charted using elements of data curation, performance validation, outcomes research, explainability, user acceptance, and funding patterns. Descriptive statistics were used to illustrate key trends. RESULTS A total of 4052 records were screened, and 233 full-text articles were selected for content analysis. Twenty-one papers described FDA-approved commercial tools, and 212 reported algorithm prototypes. Works ranged from foundational research to multi-reader multi-case trials with heterogeneous external data. Scalable convolutional neural network-based implementations increased steeply after 2016 and were used in all commercial products; however, options for explainability were narrow. Of FDA-approved tools, 9/10 performed detection tasks. Dataset sizes ranged from < 100 to > 500,000 patients, and commercialization coincided with public dataset availability. Cross-sectional torso datasets were uniformly small. Data curation methods with ground truth labeling by independent readers were uncommon. No papers assessed user acceptance, and no method included human-computer interaction. The USA and China had the highest research output and frequency of research funding. CONCLUSIONS Trauma imaging CAD tools are likely to improve patient care but are currently in an early stage of maturity, with few FDA-approved products for a limited number of uses. The scarcity of high-quality annotated data remains a major barrier.
Collapse
Affiliation(s)
- David Dreizin
- Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA.
| | - Pedro V Staziaki
- Cardiothoracic Imaging, Department of Radiology, Larner College of Medicine, University of Vermont, Burlington, VT, USA
| | - Garvit D Khatri
- Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
| | - Nicholas M Beckmann
- Memorial Hermann Orthopedic & Spine Hospital, McGovern Medical School at UTHealth, Houston, TX, USA
| | - Zhaoyong Feng
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
| | - Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
| | - Zachary S Delproposto
- Division of Emergency Radiology, Department of Radiology, University of Michigan, Ann Arbor, MI, USA
| | | | - J Stephen Spann
- Department of Radiology, University of Alabama at Birmingham Heersink School of Medicine, Birmingham, AL, USA
| | - Nathan Sarkar
- University of Maryland School of Medicine, Baltimore, MD, USA
| | - Yunting Fu
- Health Sciences and Human Services Library, University of Maryland, Baltimore, Baltimore, MD, USA
| |
Collapse
|
14
|
Martín-Noguerol T, Oñate Miranda M, Amrhein TJ, Paulano-Godino F, Xiberta P, Vilanova JC, Luna A. The role of Artificial intelligence in the assessment of the spine and spinal cord. Eur J Radiol 2023; 161:110726. [PMID: 36758280 DOI: 10.1016/j.ejrad.2023.110726] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Revised: 01/13/2023] [Accepted: 01/31/2023] [Indexed: 02/05/2023]
Abstract
Artificial intelligence (AI) application development is underway in all areas of radiology where many promising tools are focused on the spine and spinal cord. In the past decade, multiple spine AI algorithms have been created based on radiographs, computed tomography, and magnetic resonance imaging. These algorithms have wide-ranging purposes including automatic labeling of vertebral levels, automated description of disc degenerative changes, detection and classification of spine trauma, identification of osseous lesions, and the assessment of cord pathology. The overarching goals for these algorithms include improved patient throughput, reducing radiologist workload burden, and improving diagnostic accuracy. There are several pre-requisite tasks required in order to achieve these goals, such as automatic image segmentation, facilitating image acquisition and postprocessing. In this narrative review, we discuss some of the important imaging AI solutions that have been developed for the assessment of the spine and spinal cord. We focus on their practical applications and briefly discuss some key requirements for the successful integration of these tools into practice. The potential impact of AI in the imaging assessment of the spine and cord is vast and promises to provide broad reaching improvements for clinicians, radiologists, and patients alike.
Collapse
Affiliation(s)
| | - Marta Oñate Miranda
- Department of Radiology, Centre Hospitalier Universitaire de Sherbrooke, Sherbrooke, Quebec, Canada.
| | - Timothy J Amrhein
- Department of Radiology, Duke University Medical Center, Durham, USA.
| | | | - Pau Xiberta
- Graphics and Imaging Laboratory (GILAB), University of Girona, 17003 Girona, Spain.
| | - Joan C Vilanova
- Department of Radiology. Clinica Girona, Diagnostic Imaging Institute (IDI), University of Girona, 17002 Girona, Spain.
| | - Antonio Luna
- MRI unit, Radiology department. HT medica, Carmelo Torres n°2, 23007 Jaén, Spain.
| |
Collapse
|
15
|
Rui L, Li F, Chen C, E Y, Wang Y, Yuan Y, Li Y, Lu J, Huang S. Efficacy of a novel percutaneous pedicle screw fixation and vertebral reconstruction versus the traditional open pedicle screw fixation in the treatment of single-level thoracolumbar fracture without neurologic deficit. Front Surg 2023; 9:1039054. [PMID: 36684284 PMCID: PMC9852511 DOI: 10.3389/fsurg.2022.1039054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 11/07/2022] [Indexed: 01/08/2023] Open
Abstract
Objective The aim of this study was to compare the efficacy and safety of a novel percutaneous pedicle screw fixation and vertebral reconstruction (PPSR) with those of open pedicle screw fixation (OPSF) in the treatment of thoracolumbar fractures. Methods This retrospective study enrolled 153 patients who underwent PPSR and 176 patients who received OPSF. Periprocedural characteristics, radiographic parameters, and clinical outcomes were compared between the two groups. Results The operation duration was 93.843 ± 20.611 in the PPSR group and 109.432 ± 11.903 in the OPSF group; blood loss was 131.118 ± 23.673 in the PPSR group and 442.163 ± 149.701 in the OPSF group; incision length was 7.280 ± 1.289 in the PPSR group and 14.527 ± 2.893 in the OPSF group; postoperative stay was 8.732 ± 1.864 in the PPSR group and 15.102 ± 2.117 in the OPSF group; and total hospitalization costs were 59027.196 ± 8687.447 in the PPSR group and 73144.432 ± 11747.567 in the OPSF group. All of these parameters were significantly lower in the PPSR group than in the OPSF group. No significant difference was observed in the incidence of complications between the two groups. The radiographic parameters, including height of the anterior vertebra, Cobb angle, and vertebral wedge angle, were better in the PPSR group than in the OPSF group. The recovery rate of anterior vertebral height (AVH) was 0.449 ± 0.079 in the PPSR group and 0.279 ± 0.088 in the OPSF group. During the postoperative period, the VAS and ODI scores in the PPSR group were lower than those in the OPSF group. Conclusions Collectively, these results indicate that PPSR restored the height of the anterior vertebra and alleviated local kyphosis more effectively than OPSF, and the postoperative VAS and ODI scores in the PPSR group were better than those in the OPSF group.
Collapse
Affiliation(s)
- Lining Rui
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Fudong Li
- Department of Orthopaedic Surgery, Spine Center, Shanghai Changzheng Hospital, Naval Medical University, Shanghai, China
| | - Cao Chen
- Department of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Yuan E
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Yuchen Wang
- Department of Sports Medicine, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Yanhong Yuan
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Yunfeng Li
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Jian Lu
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China
| | - Shengchang Huang
- Department of Spinal Surgery, Wujin Hospital of Traditional Chinese Medicine, Changzhou, China. Correspondence: Shengchang Huang
| |
Collapse
|
16
|
Zhang J, Liu F, Xu J, Zhao Q, Huang C, Yu Y, Yuan H. Automated detection and classification of acute vertebral body fractures using a convolutional neural network on computed tomography. Front Endocrinol (Lausanne) 2023; 14:1132725. [PMID: 37051194 PMCID: PMC10083489 DOI: 10.3389/fendo.2023.1132725] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Accepted: 03/14/2023] [Indexed: 03/29/2023] Open
Abstract
BACKGROUND Acute vertebral fractures are usually caused by low-energy injury in the setting of osteoporosis or by high-energy trauma. The AOSpine thoracolumbar spine injury classification system (AO classification) plays an important role in the diagnosis and treatment of these injuries. Diagnosing and describing vertebral fractures according to the classification scheme requires a great deal of time and effort from radiologists. PURPOSE To design and validate a multistage deep learning system (multistage AO system) for the automatic detection, localization, and classification of acute thoracolumbar vertebral body fractures on computed tomography according to the AO classification. MATERIALS AND METHODS The CT images of 1,217 patients who presented to our hospital from January 2015 to December 2019 were collected retrospectively. Fractures were marked and classified by 2 junior radiology residents according to the type A standard of the AO classification. Marked fracture sites included the upper endplate, lower endplate, and posterior wall. When classification labels were inconsistent, the final result was determined by a director radiologist. We integrated different networks into different stages of the overall framework: a U-Net and a graph convolutional neural network (U-GCN) localize and label the vertebrae of the thoracolumbar spine; next, a classification network detects whether each vertebra is fractured; in the third stage, a multibranch output network detects fractures in the different parts of each vertebra and finally yields the AO type. RESULTS The mean age of the patients was 61.87 years (standard deviation, 17.04 years); 760 patients were female and 457 were male. At the vertebra level, sensitivity for fracture detection in the test dataset was 95.23%, with an accuracy of 97.93% and a specificity of 98.35%.
For the classification of vertebral body fractures, the balanced accuracy was 79.56%, with an AUC of 0.904 for type A1, 0.945 for type A2, 0.878 for type A3 and 0.942 for type A4. CONCLUSION The multistage AO system can automatically detect and classify acute vertebral body fractures in the thoracolumbar spine on CT images according to AO classification with high accuracy.
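The abstract describes annotating three fracture sites per vertebra (upper endplate, lower endplate, posterior wall) and having a multibranch network output AO type-A subtypes. As a rough illustration of how such site flags can map onto subtypes A1–A4, the sketch below encodes a simplified reading of the AOSpine type-A definitions; the function name and the exact rules are assumptions for illustration, not the paper's implementation (the real scheme also distinguishes, e.g., minor A0 fractures and split vs. pincer morphology).

```python
def ao_type_a(upper_endplate: bool, lower_endplate: bool, posterior_wall: bool) -> str:
    """Map per-site fracture flags to a simplified AO type-A subtype.

    Simplification: A1 = one endplate, no posterior wall (wedge/impaction);
    A2 = both endplates, no posterior wall (split); A3 = incomplete burst
    (one endplate plus posterior wall); A4 = complete burst (both endplates
    plus posterior wall).
    """
    endplates = int(upper_endplate) + int(lower_endplate)
    if endplates == 0:
        return "no type-A fracture"  # no endplate involvement in this toy scheme
    if posterior_wall:
        return "A4" if endplates == 2 else "A3"  # burst patterns
    return "A2" if endplates == 2 else "A1"      # no posterior wall involvement
```

For example, a fracture marked at the upper endplate and posterior wall only would decode to A3 under these simplified rules.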
Collapse
Affiliation(s)
- Jianlun Zhang
- Department of Radiology, Peking University Third Hospital, Beijing, China
| | | | | | - Qingqing Zhao
- Department of Radiology, Peking University Third Hospital, Beijing, China
| | | | | | - Huishu Yuan
- Department of Radiology, Peking University Third Hospital, Beijing, China
- *Correspondence: Huishu Yuan,
| |
Collapse
|
17
|
Goedmakers C, Pereboom L, Schoones J, de Leeuw den Bouter M, Remis R, Staring M, Vleggeert-Lankamp C. Machine learning for image analysis in the cervical spine: Systematic review of the available models and methods. BRAIN & SPINE 2022; 2:101666. [PMID: 36506292 PMCID: PMC9729832 DOI: 10.1016/j.bas.2022.101666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 09/12/2022] [Accepted: 10/28/2022] [Indexed: 11/16/2022]
Abstract
• Neural network approaches show the most potential for automated image analysis of the cervical spine.
• Fully automatic convolutional neural network (CNN) models are promising deep learning methods for segmentation.
• In cervical spine analysis, biomechanical features are most often studied using finite element models.
• The application of artificial neural network and support vector machine models looks promising for classification purposes.
• This article provides an overview of the methods for research on computer-aided imaging diagnostics of the cervical spine.
Collapse
Affiliation(s)
- C.M.W. Goedmakers
- Department of Neurosurgery, Leiden University Medical Center, Leiden, the Netherlands; Computational Neuroscience Outcomes Center, Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA. Corresponding author: Department of Neurosurgery, Albinusdreef 2, 2300 RC, Leiden, the Netherlands.
| | - L.M. Pereboom
- Faculty of Mechanical, Maritime and Materials Engineering (3mE), Delft University of Technology, Delft, the Netherlands
| | - J.W. Schoones
- Walaeus Library, Leiden University Medical Center, Leiden, the Netherlands
| | - M.L. de Leeuw den Bouter
- Delft Institute of Applied Mathematics, Department of Numerical Analysis, Delft University of Technology, Delft, the Netherlands
| | - R.F. Remis
- Circuits and Systems Group, Microelectronics Department, Delft University of Technology, Delft, the Netherlands
| | - M. Staring
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Intelligent Systems Department, Delft University of Technology, Delft, the Netherlands
| | - C.L.A. Vleggeert-Lankamp
- Department of Neurosurgery, Leiden University Medical Center, Leiden, the Netherlands; Department of Neurosurgery, Haaglanden Medical Centre and HAGA Teaching Hospitals, The Hague, the Netherlands; Department of Neurosurgery, Spaarne Gasthuis Haarlem/Hoofddorp, the Netherlands
| |
Collapse
|
18
|
Torres-Lopez VM, Rovenolt GE, Olcese AJ, Garcia GE, Chacko SM, Robinson A, Gaiser E, Acosta J, Herman AL, Kuohn LR, Leary M, Soto AL, Zhang Q, Fatima S, Falcone GJ, Payabvash MS, Sharma R, Struck AF, Sheth KN, Westover MB, Kim JA. Development and Validation of a Model to Identify Critical Brain Injuries Using Natural Language Processing of Text Computed Tomography Reports. JAMA Netw Open 2022; 5:e2227109. [PMID: 35972739 PMCID: PMC9382443 DOI: 10.1001/jamanetworkopen.2022.27109] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 06/20/2022] [Indexed: 12/17/2022] Open
Abstract
Importance Clinical text reports from head computed tomography (CT) represent rich, incompletely utilized information regarding acute brain injuries and neurologic outcomes. CT reports are unstructured; thus, extracting information at scale requires automated natural language processing (NLP). However, designing new NLP algorithms for each individual injury category is an unwieldy proposition. An NLP tool that summarizes all injuries in head CT reports would facilitate exploration of large data sets for clinical significance of neuroradiological findings. Objective To automatically extract acute brain pathological data and their features from head CT reports. Design, Setting, and Participants This diagnostic study developed a 2-part named entity recognition (NER) NLP model to extract and summarize data on acute brain injuries from head CT reports. The model, termed BrainNERD, extracts and summarizes detailed brain injury information for research applications. Model development included building and comparing 2 NER models using a custom dictionary of terms, including lesion type, location, size, and age, then designing a rule-based decoder using NER outputs to evaluate for the presence or absence of injury subtypes. BrainNERD was evaluated against independent test data sets of manually classified reports, including 2 external validation sets. The model was trained on head CT reports from 1152 patients generated by neuroradiologists at the Yale Acute Brain Injury Biorepository. External validation was conducted using reports from 2 outside institutions. Analyses were conducted from May 2020 to December 2021. Main Outcomes and Measures Performance of the BrainNERD model was evaluated using precision, recall, and F1 scores based on manually labeled independent test data sets. Results A total of 1152 patients (mean [SD] age, 67.6 [16.1] years; 586 [52%] men), were included in the training set. 
NER training using transformer architecture and bidirectional encoder representations from transformers was significantly faster than spaCy. For all metrics, the 10-fold cross-validation performance was 93% to 99%. The final test performance metrics for the NER test data set were 98.82% (95% CI, 98.37%-98.93%) for precision, 98.81% (95% CI, 98.46%-99.06%) for recall, and 98.81% (95% CI, 98.40%-98.94%) for the F score. The expert review comparison metrics were 99.06% (95% CI, 97.89%-99.13%) for precision, 98.10% (95% CI, 97.93%-98.77%) for recall, and 98.57% (95% CI, 97.78%-99.10%) for the F score. The decoder test set metrics were 96.06% (95% CI, 95.01%-97.16%) for precision, 96.42% (95% CI, 94.50%-97.87%) for recall, and 96.18% (95% CI, 95.15%-97.16%) for the F score. Performance in external institution report validation, including 1053 head CT reports, was greater than 96%. Conclusions and Relevance These findings suggest that the BrainNERD model accurately extracted acute brain injury terms and their properties from head CT text reports. This freely available new tool could advance clinical research by integrating information in easily gathered head CT reports to expand knowledge of acute brain injury radiographic phenotypes.
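The two-part design described in this abstract (a named-entity recognizer followed by a rule-based decoder that turns entities into presence/absence calls) can be caricatured in a few lines. Here a dictionary matcher stands in for the trained transformer NER, and a crude negation window stands in for the decoder; the lexicon, negation cues, and function names are all hypothetical toys, not BrainNERD itself.

```python
import re

# Toy lexicon mapping report phrases to injury-subtype labels (assumed terms).
LEXICON = {
    "subdural hematoma": "SDH",
    "subarachnoid hemorrhage": "SAH",
    "epidural hematoma": "EDH",
}
NEGATIONS = ("no ", "without ", "negative for ")

def extract_entities(report: str):
    """NER stand-in: return (label, negated) pairs found in the report text."""
    text = report.lower()
    found = []
    for phrase, label in LEXICON.items():
        for m in re.finditer(re.escape(phrase), text):
            # Look for a negation cue in a short window before the mention.
            window = text[max(0, m.start() - 20):m.start()]
            negated = any(neg in window for neg in NEGATIONS)
            found.append((label, negated))
    return found

def decode(report: str):
    """Rule-based decoder stand-in: injury subtype -> 'present' / 'absent'."""
    calls = {label: "absent" for label in LEXICON.values()}
    for label, negated in extract_entities(report):
        if not negated:
            calls[label] = "present"
    return calls
```

Running `decode("Small right subdural hematoma. No subarachnoid hemorrhage.")` would call SDH present and SAH absent, illustrating how entity extraction and decoding are separable steps.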
Collapse
Affiliation(s)
| | | | - Angelo J. Olcese
- Department of Neurology, Yale University, New Haven, Connecticut
| | | | - Sarah M. Chacko
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Amber Robinson
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Edward Gaiser
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Julian Acosta
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Alison L. Herman
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Lindsey R. Kuohn
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Megan Leary
- Department of Neurology, Yale University, New Haven, Connecticut
| | | | - Qiang Zhang
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Safoora Fatima
- Department of Neurology, University of Wisconsin, Madison
| | - Guido J. Falcone
- Department of Neurology, Yale University, New Haven, Connecticut
| | | | - Richa Sharma
- Department of Neurology, Yale University, New Haven, Connecticut
| | - Aaron F. Struck
- Department of Neurology, University of Wisconsin, Madison
- William S Middleton Veterans Hospital, Madison, Wisconsin
| | - Kevin N. Sheth
- Department of Neurology, Yale University, New Haven, Connecticut
| | | | - Jennifer A. Kim
- Department of Neurology, Yale University, New Haven, Connecticut
| |
Collapse
|
19
|
Shelmerdine SC, White RD, Liu H, Arthurs OJ, Sebire NJ. Artificial intelligence for radiological paediatric fracture assessment: a systematic review. Insights Imaging 2022; 13:94. [PMID: 35657439 PMCID: PMC9166920 DOI: 10.1186/s13244-022-01234-3] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/12/2022] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND The majority of research and commercial efforts have focussed on the use of artificial intelligence (AI) for fracture detection in adults, despite the greater long-term clinical and medicolegal implications of missed fractures in children. The objective of this study was to assess the available literature regarding the diagnostic performance of AI tools for paediatric fracture assessment on imaging and, where available, how this compares with the performance of human readers. MATERIALS AND METHODS MEDLINE, Embase and Cochrane Library databases were queried for studies published between 1 January 2011 and 2021 using terms related to 'fracture', 'artificial intelligence', 'imaging' and 'children'. Risk of bias was assessed using a modified QUADAS-2 tool. Descriptive statistics for diagnostic accuracies were collated. RESULTS Nine eligible articles from 362 publications were included; most (8/9) evaluated fracture detection on radiographs, with the elbow the most common body part. Nearly all articles used data derived from a single institution and used deep learning methodology, with only a few (2/9) performing external validation. Accuracy rates generated by AI ranged from 88.8 to 97.9%. In two of the three articles where AI performance was compared to human readers, sensitivity rates for AI were marginally higher, but this was not statistically significant. CONCLUSIONS Wide heterogeneity in the literature, with limited information on algorithm performance on external datasets, makes it difficult to understand how such tools may generalise to a wider paediatric population. Further research using a multicentric dataset with real-world evaluation would help to better understand the impact of these tools.
Collapse
Affiliation(s)
- Susan C. Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK; Department of Clinical Radiology, St. George’s Hospital, London, UK
| | - Richard D. White
- Department of Radiology, University Hospital of Wales, Cardiff, UK
| | - Hantao Liu
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
| | - Owen J. Arthurs
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
| | - Neil J. Sebire
- Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK; Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK; Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
| |
Collapse
|
20
|
Müller TR, Solano M, Tsunemi MH. Accuracy of artificial intelligence software for the detection of confirmed pleural effusion in thoracic radiographs in dogs. Vet Radiol Ultrasound 2022; 63:573-579. [PMID: 35452142 DOI: 10.1111/vru.13089] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Revised: 01/22/2022] [Accepted: 01/23/2022] [Indexed: 11/30/2022] Open
Abstract
The use of artificial intelligence (AI) algorithms in diagnostic radiology is a developing area in veterinary medicine and may provide substantial benefit in many clinical settings. These range from timely image interpretation in the emergency setting when no boarded radiologist is available to allowing boarded radiologists to focus on more challenging cases that require complex medical decision making. Testing the performance of artificial intelligence (AI) software in veterinary medicine is at its early stages, and only a scant number of reports of validation of AI software have been published. The purpose of this study was to investigate the performance of an AI algorithm (Vetology AI® ) in the detection of pleural effusion in thoracic radiographs of dogs. In this retrospective, diagnostic case-controlled study, 62 canine patients were recruited. A control group of 21 dogs with normal thoracic radiographs and a sample group of 41 dogs with confirmed pleural effusion were selected from the electronic medical records at the Cummings School of Veterinary Medicine. The images were cropped to include only the area of interest (i.e., thorax). The software then classified images into those with pleural effusion and those without. The AI algorithm was able to determine the presence of pleural effusion with 88.7% accuracy (P < 0.05). The sensitivity and specificity were 90.2% and 81.8%, respectively (positive predictive value, 92.5%; negative predictive value, 81.8%). The application of this technology in the diagnostic interpretation of thoracic radiographs in veterinary medicine appears to be of value and warrants further investigation and testing.
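The accuracy, sensitivity, specificity, PPV, and NPV figures this abstract reports all derive from a single 2×2 confusion matrix. A minimal sketch of that arithmetic follows; the counts used in the example are made up for illustration and are not the study's data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute standard diagnostic-test metrics from a 2x2 confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for a 62-image test set (not the study's actual tallies).
m = diagnostic_metrics(tp=37, fp=3, fn=4, tn=18)
```

With these invented counts, sensitivity is 37/41 ≈ 90.2% and accuracy is 55/62 ≈ 88.7%, showing how each reported percentage traces back to the four cell counts.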
Affiliation(s)
- Thiago Rinaldi Müller: Department of Clinical Sciences, Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA
- Mauricio Solano: Department of Clinical Sciences, Tufts University Cummings School of Veterinary Medicine, North Grafton, Massachusetts, USA
- Mirian Harumi Tsunemi: Department of Biostatistics, São Paulo State University, R. Prof. Dr. Antônio Celso Wagner Zanin, São Paulo, Brazil
21
Adrien-Maxence H, Emilie B, Alois DLC, Michelle A, Kate A, Mylene A, David B, Marie DS, Jason F, Eric G, Séamus H, Kevin K, Alison L, Megan M, Hester M, Jaime RJ, Zhu X, Micaela Z, Federica M. Comparison of error rates between four pretrained DenseNet convolutional neural network models and 13 board-certified veterinary radiologists when evaluating 15 labels of canine thoracic radiographs. Vet Radiol Ultrasound 2022; 63:456-468. [PMID: 35137490 DOI: 10.1111/vru.13069] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 07/02/2021] [Revised: 12/15/2021] [Accepted: 12/21/2021] [Indexed: 11/29/2022] Open
Abstract
Convolutional neural networks (CNNs) are commonly used as artificial intelligence (AI) tools for evaluating radiographs, but published studies testing their performance in veterinary patients are currently lacking. The purpose of this retrospective, secondary analysis, diagnostic accuracy study was to compare the error rates of four CNNs to the error rates of 13 veterinary radiologists for evaluating canine thoracic radiographs using an independent gold standard. Radiographs acquired at a referral institution were used to evaluate the four CNNs sharing a common architecture. Fifty radiographic studies were selected at random. The studies were evaluated independently by three board-certified veterinary radiologists for the presence or absence of 15 thoracic labels, thus creating the gold standard through the majority rule. The labels included "cardiovascular," "pulmonary," "pleural," "airway," and "other categories." The error rates for each of the CNNs and for 13 additional board-certified veterinary radiologists were calculated on those same studies. There was no statistical difference in the error rates among the four CNNs for the majority of the labels. However, the CNNs' training method impacted the overall error rate for three of 15 labels. The veterinary radiologists had a statistically lower error rate than all four CNNs overall and for five labels (33%). There was only one label ("esophageal dilation") for which two CNNs were superior to the veterinary radiologists. Findings from the current study raise numerous questions that need to be addressed to further develop and standardize AI in the veterinary radiology environment and to optimize patient care.
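The gold-standard construction described here (majority vote over three independent readers, then per-observer error rates against that standard) is straightforward to express. A sketch under those assumptions, with hypothetical votes:

```python
def majority_vote(votes):
    """Binary gold-standard label from an odd number of independent raters."""
    return int(sum(votes) > len(votes) / 2)

def error_rate(calls, gold):
    """Fraction of studies on which an observer disagrees with the gold standard."""
    return sum(c != g for c, g in zip(calls, gold)) / len(gold)

# Hypothetical ratings for one label across four studies (three raters each)
gold = [majority_vote(v) for v in [(1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0)]]
cnn_error = error_rate([1, 0, 1, 1], gold)  # one disagreement out of four
```

The same `error_rate` call applies unchanged to a CNN's outputs or to an individual radiologist's reads, which is what makes the paired comparison possible.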
Affiliation(s)
- Hespel Adrien-Maxence: Department of Small Animal Clinical Sciences, University of Tennessee, Knoxville, Tennessee, USA
- Acierno Michelle: Michelle Acierno Veterinary Radiology Consulting, Kirkland, WA and Summit Veterinary Referral Center, Tacoma, Washington, USA
- Alexander Kate: DMV Veterinary Center, Diagnostic Imaging, Montreal, Quebec, Canada
- Biller David: Kansas State University College of Veterinary Medicine, Clinical Sciences, Manhattan, Kansas, USA
- Green Eric: The Ohio State University, Veterinary Clinical Sciences, Columbus, Ohio, USA
- Hoey Séamus: University College Dublin, Veterinary Diagnostic Imaging, Dublin, Ireland
- Lee Alison: Mississippi State University College of Veterinary Medicine, Department of Clinical Sciences, Starkville, Mississippi, USA
- MacLellan Megan: BluePearl Veterinary Partners, Eden Prairie, Minnesota, USA
- McAllister Hester: University College Dublin, Veterinary Diagnostic Imaging, Dublin, Ireland
- Xiaojuan Zhu: Office of Information Technology, The University of Tennessee, Knoxville, Tennessee, USA
- Morandi Federica: Department of Small Animal Clinical Sciences, University of Tennessee, Knoxville, Tennessee, USA
22
Automatic vertebrae localization and segmentation in CT with a two-stage Dense-U-Net. Sci Rep 2021; 11:22156. [PMID: 34772972 PMCID: PMC8589948 DOI: 10.1038/s41598-021-01296-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Received: 07/02/2021] [Accepted: 10/26/2021] [Indexed: 11/09/2022] Open
Abstract
Automatic vertebrae localization and segmentation in computed tomography (CT) are fundamental for spinal image analysis and spine surgery with computer-assisted surgery systems, but they remain challenging due to high variation in spinal anatomy among patients. In this paper, we proposed a deep-learning approach for automatic CT vertebrae localization and segmentation with a two-stage Dense-U-Net. The first stage used a 2D-Dense-U-Net to localize vertebrae by detecting the vertebrae centroids with dense labels and 2D slices. The second stage segmented the specific vertebra within a region of interest identified based on the centroid, using a 3D-Dense-U-Net. Finally, each segmented vertebra was merged into a complete spine and resampled to the original resolution. We evaluated our method on the dataset from the CSI 2014 Workshop with six metrics: for vertebrae localization, location error (1.69 ± 0.78 mm) and detection rate (100%); for vertebrae segmentation, Dice coefficient (0.953 ± 0.014), intersection over union (0.911 ± 0.025), Hausdorff distance (4.013 ± 2.128 mm), and pixel accuracy (0.998 ± 0.001). The experimental results demonstrated the efficiency of the proposed method. Furthermore, evaluation on the dataset from the xVertSeg challenge, with location error (4.12 ± 2.31 mm), detection rate (100%), and Dice coefficient (0.877 ± 0.035), shows the generalizability of our method. In summary, our solution localized the vertebrae successfully by detecting the centroids of vertebrae and implemented instance segmentation of vertebrae in the whole spine.
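The two overlap metrics reported for segmentation are defined directly on the binary masks, and they are interconvertible (Dice = 2·IoU / (1 + IoU)). A minimal pure-Python sketch over flattened 0/1 masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity of two flattened binary masks (sequences of 0/1)."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    return 2.0 * intersection / (sum(pred) + sum(truth))

def iou(pred, truth):
    """Intersection over union (Jaccard index) of two flattened binary masks."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return intersection / union

# Tiny example masks: overlap on one voxel
pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
d, j = dice_coefficient(pred, truth), iou(pred, truth)
```

In practice these run over full 3D volumes (e.g. flattened NumPy arrays), but the definitions are identical.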
23
A computed tomography vertebral segmentation dataset with anatomical variations and multi-vendor scanner data. Sci Data 2021; 8:284. [PMID: 34711848 PMCID: PMC8553749 DOI: 10.1038/s41597-021-01060-0] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Received: 03/18/2021] [Accepted: 08/27/2021] [Indexed: 01/17/2023] Open
Abstract
With the advent of deep learning algorithms, fully automated radiological image analysis is within reach. In spine imaging, several atlas- and shape-based as well as deep learning segmentation algorithms have been proposed, allowing for subsequent automated analysis of morphology and pathology. The first “Large Scale Vertebrae Segmentation Challenge” (VerSe 2019) showed that these perform well on normal anatomy, but fail in variants not frequently present in the training dataset. Building on that experience, we report on the largely increased VerSe 2020 dataset and results from the second iteration of the VerSe challenge (MICCAI 2020, Lima, Peru). VerSe 2020 comprises annotated spine computed tomography (CT) images from 300 subjects with 4142 fully visualized and annotated vertebrae, collected across multiple centres from four different scanner manufacturers, enriched with cases that exhibit anatomical variants such as enumeration abnormalities (n = 77) and transitional vertebrae (n = 161). Metadata includes vertebral labelling information, voxel-level segmentation masks obtained with a human-machine hybrid algorithm and anatomical ratings, to enable the development and benchmarking of robust and accurate segmentation algorithms. Study metadata: Measurement(s): vertebra; Technology Type(s): computed tomography; Factor Type(s): imaging centre, scanner manufacturer; Sample Characteristic (Organism): Homo sapiens.
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.14716968
24
Ibanez V, Gunz S, Erne S, Rawdon EJ, Ampanozi G, Franckenberg S, Sieberth T, Affolter R, Ebert LC, Dobay A. RiFNet: Automated rib fracture detection in postmortem computed tomography. Forensic Sci Med Pathol 2021; 18:20-29. [PMID: 34709561 PMCID: PMC8921053 DOI: 10.1007/s12024-021-00431-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Accepted: 09/20/2021] [Indexed: 12/31/2022]
Abstract
Imaging techniques are widely used for medical diagnostics. In some cases, a lack of medical practitioners who can manually analyze the images can lead to a bottleneck. Consequently, we developed a custom-made convolutional neural network (RiFNet = Rib Fracture Network) that can detect rib fractures in postmortem computed tomography (PMCT). In a retrospective cohort study, we retrieved PMCT data from 195 postmortem cases with rib fractures from July 2017 to April 2018 from our database. The computed tomography data were prepared using a plugin in the commercial imaging software Syngo.via, whereby the rib cage was unfolded on a single in-plane image reformation. Out of the 195 cases, a total of 585 images were extracted and divided into two groups labeled "with" and "without" fractures. These two groups were subsequently divided into training, validation, and test datasets to assess the performance of RiFNet. In addition, we explored the possibility of applying transfer learning techniques on our dataset by choosing two independent noncommercial off-the-shelf convolutional neural network architectures (ResNet50 V2 and Inception V3) and compared their performances with that of RiFNet. When using pre-trained convolutional neural networks, we achieved an F1 score of 0.64 with Inception V3 and an F1 score of 0.61 with ResNet50 V2. We obtained an average F1 score of 0.91 ± 0.04 with RiFNet. RiFNet is efficient in detecting rib fractures on postmortem computed tomography. Transfer learning techniques are not necessarily well adapted to make classifications in postmortem computed tomography.
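The F1 scores used to compare RiFNet with the transfer-learning baselines are the harmonic mean of precision and recall. A one-function sketch (the counts in the example are illustrative, not the study's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 8 fractures found, 2 spurious detections, 2 missed
score = f1_score(tp=8, fp=2, fn=2)
```

Because it ignores true negatives, F1 is a natural choice when the "no fracture" class dominates, as in unfolded rib images.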
Affiliation(s)
- Victor Ibanez: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Samuel Gunz: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Svenja Erne: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Eric J Rawdon: Department of Mathematics, University of St. Thomas, St. Paul, Minnesota, 55105-1079, USA
- Garyfalia Ampanozi: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Sabine Franckenberg: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Till Sieberth: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Raffael Affolter: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Lars C Ebert: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
- Akos Dobay: Zurich Institute of Forensic Medicine, University of Zurich, Winterthurerstrasse 190/52, CH-8057, Zurich, Switzerland
25
Hameed BMZ, Prerepa G, Patil V, Shekhar P, Zahid Raza S, Karimi H, Paul R, Naik N, Modi S, Vigneswaran G, Prasad Rai B, Chłosta P, Somani BK. Engineering and clinical use of artificial intelligence (AI) with machine learning and data science advancements: radiology leading the way for future. Ther Adv Urol 2021; 13:17562872211044880. [PMID: 34567272 PMCID: PMC8458681 DOI: 10.1177/17562872211044880] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 02/18/2021] [Accepted: 08/21/2021] [Indexed: 12/29/2022] Open
Abstract
Over the years, many clinical and engineering methods have been adapted for testing and screening for the presence of diseases. The most commonly used methods for diagnosis and analysis are computed tomography (CT) and X-ray imaging. Manual interpretation of these images is the current gold standard but can be subject to human error, is tedious, and is time-consuming. To improve efficiency and productivity, incorporating machine learning (ML) and deep learning (DL) algorithms could expedite the process. This article aims to review the role of artificial intelligence (AI) and its contribution to data science, as well as various learning algorithms in radiology. We analyze and explore the potential applications of AI in image interpretation and radiological advances. Furthermore, we discuss the usage and methodology implemented, the future of these concepts in radiology, and their limitations and challenges.
Affiliation(s)
- B M Zeeshan Hameed: Department of Urology, Father Muller Medical College, Mangalore, Karnataka, India
- Gayathri Prerepa: Department of Electronics and Communication, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Vathsala Patil: Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Pranav Shekhar: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Syed Zahid Raza: Department of Urology, Dr. B.R. Ambedkar Medical College, Bengaluru, Karnataka, India
- Hadis Karimi: Manipal College of Pharmaceutical Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Rahul Paul: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nithesh Naik: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Sachin Modi: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Ganesh Vigneswaran: Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Bhavan Prasad Rai: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Piotr Chłosta: Department of Urology, Jagiellonian University in Kraków, Kraków, Poland
- Bhaskar K Somani: International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
26
Voter A, Larson M, Garrett J, Yu JP. Diagnostic Accuracy and Failure Mode Analysis of a Deep Learning Algorithm for the Detection of Cervical Spine Fractures. AJNR Am J Neuroradiol 2021; 42:1550-1556. [PMID: 34117018 PMCID: PMC8367597 DOI: 10.3174/ajnr.a7179] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Received: 12/09/2020] [Accepted: 03/14/2021] [Indexed: 01/16/2023]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence decision support systems are a rapidly growing class of tools to help manage ever-increasing imaging volumes. The aim of this study was to evaluate the performance of an artificial intelligence decision support system, Aidoc, for the detection of cervical spinal fractures on noncontrast cervical spine CT scans and to conduct a failure mode analysis to identify areas of poor performance. MATERIALS AND METHODS This retrospective study included 1904 emergent noncontrast cervical spine CT scans of adult patients (60 [SD, 22] years, 50.3% men). The presence of cervical spinal fracture was determined by Aidoc and an attending neuroradiologist; discrepancies were independently adjudicated. Algorithm performance was assessed by calculation of the diagnostic accuracy, and a failure mode analysis was performed. RESULTS Aidoc and the neuroradiologist's interpretation were concordant in 91.5% of cases. Aidoc correctly identified 67 of 122 fractures (54.9%) with 106 false-positive flagged studies. Diagnostic performance was calculated as the following: sensitivity, 54.9% (95% CI, 45.7%-63.9%); specificity, 94.1% (95% CI, 92.9%-95.1%); positive predictive value, 38.7% (95% CI, 33.1%-44.7%); and negative predictive value, 96.8% (95% CI, 96.2%-97.4%). Worsened performance was observed in the detection of chronic fractures; differences in diagnostic performance were not altered by study indication or patient characteristics. CONCLUSIONS We observed poor diagnostic accuracy of an artificial intelligence decision support system for the detection of cervical spine fractures. Many similar algorithms have also received little or no external validation, and this study raises concerns about their generalizability, utility, and rapid pace of deployment. Further rigorous evaluations are needed to understand the weaknesses of these tools before widespread implementation.
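The figures reported for this algorithm are mutually consistent, and the full 2 × 2 table can be recovered from them. A quick arithmetic check (derived here from the abstract's counts, not taken from the paper):

```python
total_scans, true_fractures = 1904, 122
tp, fp = 67, 106                         # detected fractures; false-positive flags
fn = true_fractures - tp                 # 55 missed fractures
tn = total_scans - true_fractures - fp   # 1676 correctly cleared studies

sensitivity = tp / (tp + fn)   # ~0.549, reported 54.9%
specificity = tn / (tn + fp)   # ~0.941, reported 94.1%
ppv = tp / (tp + fp)           # ~0.387, reported 38.7%
npv = tn / (tn + fn)           # ~0.968, reported 96.8%
```

The low PPV despite high specificity illustrates the base-rate effect: with only 122 fractures in 1904 scans, even a 5.9% false-positive rate produces more false alarms than true detections.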
Affiliation(s)
- A.F. Voter: School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin
- M.E. Larson: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- J.W. Garrett: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- J.-P.J. Yu: Department of Radiology, University of Wisconsin-Madison; Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison; Department of Psychiatry, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
27
Abstract
Artificial intelligence is an exciting and growing field in medicine to assist in the proper diagnosis of patients. Although the use of artificial intelligence in orthopedics is currently limited, its utility in other fields has been extremely valuable and could be useful in orthopedics, especially spine care. Automated systems have the ability to analyze complex patterns and images, which will allow for enhanced analysis of imaging. Although the potential impact of artificial intelligence integration into spine care is promising, there are several limitations that must be overcome. Our goal is to review current advances in the application of machine learning in orthopedics and to discuss its potential application to spine care in the clinical setting, where there is a need for the development of automated systems.
28
Musa Aguiar P, Zarantonello P, Aparisi Gómez MP. Differentiation Between Osteoporotic And Neoplastic Vertebral Fractures: State Of The Art And Future Perspectives. Curr Med Imaging 2021; 18:187-207. [PMID: 33845727 DOI: 10.2174/1573405617666210412142758] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/13/2020] [Revised: 02/25/2021] [Accepted: 02/26/2021] [Indexed: 11/22/2022]
Abstract
Vertebral fractures are a common condition, occurring in the context of osteoporosis and malignancy. These entities affect patients in the same age range; clinical features may be indistinct and symptoms non-existent, thus presenting challenges to diagnosis. In this article, we review the use and accuracy of the different imaging modalities available to characterize vertebral fracture etiology, from well-established classical techniques to the role of new and advanced imaging techniques and the prospective use of artificial intelligence. We also address the role of imaging in treatment. In the context of osteoporosis, the importance of opportunistic diagnosis is highlighted. In the near future, the use of automated computer-aided diagnostic algorithms applied to different imaging techniques may prove useful as an aid to diagnosis.
Affiliation(s)
- Paula Musa Aguiar: Serdil, Clinica de Radiologia e Diagnóstico por Imagem, R. São Luís, 96 - Santana, Porto Alegre - RS, 90620-170, Brazil
- Paola Zarantonello: Department of Paediatric Orthopedics and Traumatology, IRCCS Istituto Ortopedico Rizzoli, Via G. C. Pupilli 1, 40136 Bologna, Italy
29
Merali ZA, Colak E, Wilson JR. Applications of Machine Learning to Imaging of Spinal Disorders: Current Status and Future Directions. Global Spine J 2021; 11:23S-29S. [PMID: 33890805 PMCID: PMC8076811 DOI: 10.1177/2192568220961353] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Indexed: 12/31/2022] Open
Abstract
STUDY DESIGN Narrative review. OBJECTIVES We aim to describe current progress in the application of artificial intelligence and machine learning technology to provide automated analysis of imaging in patients with spinal disorders. METHODS A literature search utilizing the PubMed database was performed. Relevant studies from all evidence levels have been included. RESULTS Within spine surgery, artificial intelligence and machine learning technologies have achieved near-human performance in narrow image classification tasks on specific datasets in spinal degenerative disease, spinal deformity, spine trauma, and spine oncology. CONCLUSION Although substantial challenges remain to be overcome, it is clear that artificial intelligence and machine learning technology will influence the practice of spine surgery in the future.
Affiliation(s)
- Zamir A. Merali: Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Errol Colak: Department of Medical Imaging, University of Toronto, St. Michael’s Hospital, 30 Bond St, Toronto, ON, M5B 1W8, Canada
- Jefferson R. Wilson: Department of Surgery, University of Toronto, Toronto, Ontario, Canada; Department of Neurosurgery, St. Michael’s Hospital, Toronto, Ontario, Canada
30
Weikert T, Noordtzij LA, Bremerich J, Stieltjes B, Parmar V, Cyriac J, Sommer G, Sauter AW. Assessment of a Deep Learning Algorithm for the Detection of Rib Fractures on Whole-Body Trauma Computed Tomography. Korean J Radiol 2020; 21:891-899. [PMID: 32524789 PMCID: PMC7289702 DOI: 10.3348/kjr.2019.0653] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Received: 09/01/2019] [Revised: 02/12/2020] [Accepted: 02/19/2020] [Indexed: 12/03/2022] Open
Abstract
Objective To assess the diagnostic performance of a deep learning-based algorithm for automated detection of acute and chronic rib fractures on whole-body trauma CT. Materials and Methods We retrospectively identified all whole-body trauma CT scans referred from the emergency department of our hospital from January to December 2018 (n = 511). Scans were categorized as positive (n = 159) or negative (n = 352) for rib fractures according to the clinically approved written CT reports, which served as the index test. The bone kernel series (1.5-mm slice thickness) served as an input for a detection prototype algorithm trained to detect both acute and chronic rib fractures based on a deep convolutional neural network. It had previously been trained on an independent sample from eight other institutions (n = 11455). Results All CTs except one were successfully processed (510/511). The algorithm achieved a sensitivity of 87.4% and specificity of 91.5% on a per-examination level [per CT scan: rib fracture(s): yes/no]. There were 0.16 false-positives per examination (= 81/510). On a per-finding level, there were 587 true-positive findings (sensitivity: 65.7%) and 307 false-negatives. Furthermore, 97 true rib fractures were detected that were not mentioned in the written CT reports. A major factor associated with correct detection was displacement. Conclusion We found good performance of a deep learning-based prototype algorithm detecting rib fractures on trauma CT on a per-examination level at a low rate of false-positives per case. A potential area for clinical application is its use as a screening tool to avoid false-negative radiology reports.
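The distinction the abstract draws between per-examination and per-finding performance follows from simple counts. A consistency sketch (the 139 true-positive examinations are inferred from the reported 87.4% sensitivity over 159 positive scans; they are not stated explicitly in the abstract, and the figures assume the single unprocessed scan was negative):

```python
exams_processed = 510          # 511 referred, 1 failed processing
positive_exams = 159
tp_exams = 139                 # inferred: 139/159 ~ 87.4% sensitivity
fp_exams = 81                  # abstract: 81 exams falsely flagged

per_exam_sensitivity = tp_exams / positive_exams       # ~0.874
false_positives_per_exam = fp_exams / exams_processed  # ~0.16

# Per-finding level: every individual rib fracture counts separately
tp_findings, fn_findings = 587, 307
per_finding_sensitivity = tp_findings / (tp_findings + fn_findings)  # ~0.657
```

The gap between 87.4% per examination and 65.7% per finding is expected: a scan counts as a true positive if any of its fractures is found, so multi-fracture scans inflate the per-exam figure relative to the per-finding one.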
Affiliation(s)
- Thomas Weikert, Luca Andre Noordtzij, Jens Bremerich, Bram Stieltjes, Victor Parmar, Joshy Cyriac, Gregor Sommer, Alexander Walter Sauter: Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
31
Draelos RL, Dov D, Mazurowski MA, Lo JY, Henao R, Rubin GD, Carin L. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal 2020; 67:101857. [PMID: 33129142 DOI: 10.1016/j.media.2020.101857] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Received: 02/17/2020] [Revised: 09/15/2020] [Accepted: 09/18/2020] [Indexed: 12/11/2022]
Abstract
Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
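The label-extraction step pairs each abnormality with keyword rules applied to report text, including negation handling. The published method is considerably richer; a toy sketch of the idea (the patterns and negation cues below are invented for illustration, not taken from the paper):

```python
import re

# Hypothetical rule set; the published extractor uses a much richer one.
LABEL_PATTERNS = {
    "pleural_effusion": r"pleural effusion",
    "pneumothorax": r"pneumothorax",
    "cardiomegaly": r"cardiomegaly|enlarged (heart|cardiac silhouette)",
}
NEGATION = r"\bno\b|\bwithout\b|negative for"

def extract_labels(report):
    """Binary abnormality labels from a free-text report, sentence by sentence."""
    labels = dict.fromkeys(LABEL_PATTERNS, 0)
    for sentence in re.split(r"[.\n]", report.lower()):
        for name, pattern in LABEL_PATTERNS.items():
            if re.search(pattern, sentence) and not re.search(NEGATION, sentence):
                labels[name] = 1
    return labels

labels = extract_labels("No pleural effusion. Moderate cardiomegaly is present.")
```

Sentence-level negation checks like this are what keep "no pleural effusion" from being counted as a positive label.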
Affiliation(s)
- Rachel Lea Draelos: Computer Science Department and School of Medicine, Duke University, Durham, North Carolina, United States of America
- David Dov: Electrical and Computer Engineering Department, Edmund T. Pratt Jr. School of Engineering, Duke University, Durham, North Carolina, United States of America
- Maciej A Mazurowski: Electrical and Computer Engineering, Radiology, and Biostatistics and Bioinformatics Departments, Duke University, Durham, North Carolina, United States of America
- Joseph Y Lo: Electrical and Computer Engineering, Radiology, and Biomedical Engineering Departments, Duke University, Durham, North Carolina, United States of America
- Ricardo Henao: Electrical and Computer Engineering and Biostatistics and Bioinformatics Departments, Duke University, Durham, North Carolina, United States of America
- Geoffrey D Rubin: Radiology Department, Duke University, Durham, North Carolina, United States of America
- Lawrence Carin: Computer Science, Electrical and Computer Engineering, and Statistical Science Departments, Duke University, Durham, North Carolina, United States of America
32
Blum A, Gillet R, Rauch A, Urbaneja A, Biouichi H, Dodin G, Germain E, Lombard C, Jaquet P, Louis M, Simon L, Gondim Teixeira P. 3D reconstructions, 4D imaging and postprocessing with CT in musculoskeletal disorders: Past, present and future. Diagn Interv Imaging 2020; 101:693-705. [PMID: 33036947 DOI: 10.1016/j.diii.2020.09.008] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Received: 08/04/2020] [Revised: 09/12/2020] [Accepted: 09/15/2020] [Indexed: 12/30/2022]
Abstract
Three-dimensional (3D) imaging and postprocessing are common tasks used daily in many disciplines. The purpose of this article is to review the new postprocessing tools available. Although 3D imaging can be applied to all anatomical regions and used with all imaging techniques, its most varied and relevant applications are found with computed tomography (CT) data in musculoskeletal imaging. These new applications include global illumination rendering (GIR), unfolded rib reformations, subtracted CT angiography for bone analysis, dynamic studies, temporal subtraction, and image fusion. In all of these tasks, registration and segmentation are two basic processes that affect the quality of the results. GIR simulates the complete interaction of photons with the scanned object, providing photorealistic volume rendering. Reformations that unfold the rib cage allow more accurate and faster diagnosis of rib lesions. Dynamic CT can be applied to cinematic joint evaluations as well as to perfusion and angiographic studies. Finally, more traditional techniques, such as minimum intensity projection, might find new applications for bone evaluation with the advent of ultra-high-resolution CT scanners. These tools can be used synergistically to provide morphologic, topographic and functional information and increase the versatility of CT.
Affiliation(s)
- A Blum, R Gillet, A Rauch, A Urbaneja, H Biouichi, G Dodin, E Germain, C Lombard, P Jaquet, M Louis, L Simon, P Gondim Teixeira
- Guilloz Imaging Department, CHRU of Nancy, 54000 Nancy, France (all authors); Unité INSERM U1254 Imagerie Adaptative Diagnostique et Interventionnelle (IADI), CHRU of Nancy, 54511 Vandœuvre-lès-Nancy, France (A. Blum, P. Gondim Teixeira)

33
Boissady E, de La Comble A, Zhu X, Hespel AM. Artificial intelligence evaluating primary thoracic lesions has an overall lower error rate compared to veterinarians or veterinarians in conjunction with the artificial intelligence. Vet Radiol Ultrasound 2020; 61:619-627. [PMID: 32996208] [DOI: 10.1111/vru.12912]
Abstract
To date, deep learning technologies have provided powerful decision support systems to radiologists in human medicine. The aims of this retrospective, exploratory study were to develop and describe an artificial intelligence able to screen thoracic radiographs for primary thoracic lesions in feline and canine patients. Three deep learning networks using three different pretraining strategies to predict 15 types of primary thoracic lesions were created (including tracheal collapse, left atrial enlargement, alveolar pattern, pneumothorax, and pulmonary mass). Upon completion of pretraining, the algorithms were provided with over 22 000 thoracic veterinary radiographs for specific training. All radiographs had a report created by a board-certified veterinary radiologist, used as the gold standard. The performances of all three networks were compared to one another. An additional 120 radiographs were then evaluated by three types of observers: the best-performing network, veterinarians, and veterinarians aided by the network. The error rates for each of the observers were calculated overall and for each of the 15 labels, and were compared using McNemar's test. The overall error rate of the network was significantly better than that of the veterinarians or the veterinarians aided by the network (10.7% vs 16.8% vs 17.2%, P = .001). The network's error rate was also significantly better for detecting cardiac enlargement and bronchial pattern. The current network only provides help in detecting various lesion types and does not provide a diagnosis. Based on its overall very good performance, it could be used as an aid to general practitioners while awaiting the radiologist's report.
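The paired error-rate comparison described in this abstract uses McNemar's test, which depends only on the discordant pairs (cases where the two observers disagree). A minimal sketch of the continuity-corrected McNemar statistic follows; the counts used are hypothetical illustrations, not values from the study.

```python
def mcnemar_statistic(b: int, c: int) -> float:
    """Continuity-corrected McNemar chi-square statistic.

    b and c are the discordant counts from paired observers:
    cases the first observer got wrong but the second got right,
    and vice versa. Concordant cases do not enter the statistic.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)


# Hypothetical discordant counts, for illustration only;
# the statistic is compared against a chi-square with 1 df.
stat = mcnemar_statistic(25, 9)
```

The statistic grows with the imbalance between the two discordant counts, which is why overall error rates alone (as quoted in the abstract) are not sufficient to run the test.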
Affiliation(s)
- Xiaojuan Zhu
- Office of Information Technology, The University of Tennessee, Knoxville, Tennessee, USA
- Adrien-Maxence Hespel
- Department of Small Animal Clinical Science, University of Tennessee, Knoxville, Tennessee, USA

34
Löffler MT, Sekuboyina A, Jacob A, Grau AL, Scharr A, El Husseini M, Kallweit M, Zimmer C, Baum T, Kirschke JS. A Vertebral Segmentation Dataset with Fracture Grading. Radiol Artif Intell 2020; 2:e190138. [PMID: 33937831] [PMCID: PMC8082364] [DOI: 10.1148/ryai.2020190138]
Abstract
Published under a CC BY 4.0 license. Supplemental material is available for this article.
Affiliation(s)
- Maximilian T. Löffler, Anjany Sekuboyina, Alina Jacob, Anna-Lena Grau, Andreas Scharr, Malek El Husseini, Mareike Kallweit, Claus Zimmer, Thomas Baum, Jan S. Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str 22, Munich 81675, Germany (all authors); Department of Informatics, Technical University of Munich, Munich, Germany (A. Sekuboyina)

35
Using a Dual-Input Convolutional Neural Network for Automated Detection of Pediatric Supracondylar Fracture on Conventional Radiography. Invest Radiol 2020; 55:101-110. [DOI: 10.1097/rli.0000000000000615]
36
Burns JE, Yao J, Summers RM. Artificial Intelligence in Musculoskeletal Imaging: A Paradigm Shift. J Bone Miner Res 2020; 35:28-35. [PMID: 31398274] [DOI: 10.1002/jbmr.3849]
Abstract
Artificial intelligence is upending many of our assumptions about the ability of computers to detect and diagnose diseases on medical images. Deep learning, a recent innovation in artificial intelligence, has shown the ability to interpret medical images with sensitivities and specificities at or near that of skilled clinicians for some applications. In this review, we summarize the history of artificial intelligence, present some recent research advances, and speculate about the potential revolutionary clinical impact of the latest computer techniques for bone and muscle imaging. © 2019 American Society for Bone and Mineral Research. Published 2019. This article is a U.S. Government work and is in the public domain in the USA.
Affiliation(s)
- Joseph E Burns
- Department of Radiological Sciences, University of California-Irvine School of Medicine, Orange, CA, USA
- Jianhua Yao
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences Department, Clinical Center, National Institutes of Health, Bethesda, MD, USA
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences Department, Clinical Center, National Institutes of Health, Bethesda, MD, USA

37
Evaluation of an AI-Based Detection Software for Acute Findings in Abdominal Computed Tomography Scans: Toward an Automated Work List Prioritization of Routine CT Examinations. Invest Radiol 2019; 54:55-59. [PMID: 30199417] [DOI: 10.1097/rli.0000000000000509]
Abstract
OBJECTIVE The aim of this study was to test the diagnostic performance of a deep learning-based triage system for the detection of acute findings in abdominal computed tomography (CT) examinations. MATERIALS AND METHODS Using a RIS/PACS (Radiology Information System/Picture Archiving and Communication System) search engine, we obtained 100 consecutive abdominal CTs with at least one of the following findings: free gas, free fluid, or fat stranding, and 100 control cases with absence of these findings. The CT data were analyzed using a convolutional neural network algorithm previously trained for detection of these findings on an independent sample. The results were validated on a web-based feedback system by a radiologist with 1 year of experience in abdominal imaging, without prior knowledge of image findings, through both visual confirmation and comparison with the clinically approved written report as the standard of reference. All cases were included in the final analysis except those in which the whole dataset could not be processed by the detection software. Measures of diagnostic accuracy were then calculated. RESULTS A total of 194 cases were included in the analysis; 6 were excluded because of technical problems during the extraction of the DICOM datasets from the local PACS. Overall, the algorithm achieved 93% sensitivity (91/98, 7 false-negative) and 97% specificity (93/96, 3 false-positive) in the detection of acute abdominal findings. Intra-abdominal free gas was detected with 92% sensitivity (54/59) and 93% specificity (39/42), free fluid with 85% sensitivity (68/80) and 95% specificity (20/21), and fat stranding with 81% sensitivity (42/50) and 98% specificity (48/49). False-positive results were due to streak artifacts, partial volume effects, and misidentification of a diverticulum (each n = 1).
CONCLUSIONS The algorithm's autonomous detection of acute pathological abdominal findings demonstrated high diagnostic performance, enabling the radiology workflow to be guided toward prioritization of abdominal CT examinations with acute conditions.
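The sensitivity and specificity percentages reported in this abstract follow directly from the stated detection counts; a minimal sketch (the function names are illustrative, not from the paper):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of actually positive cases the algorithm flagged."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """Fraction of actually negative cases the algorithm cleared."""
    return tn / (tn + fp)


# Overall counts reported for acute abdominal findings:
sens = sensitivity(91, 7)   # 91 of 98 positives detected
spec = specificity(93, 3)   # 93 of 96 negatives cleared
```

Rounding `sens` and `spec` to whole percentages reproduces the 93%/97% figures quoted above, and the same two functions reproduce each per-finding pair (e.g. 54/59 and 39/42 for free gas).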
38
Vogl TJ, Eichler K, Marzi I, Wutzler S, Zacharowski K, Frellessen C. [Imaging techniques in modern trauma diagnostics]. Med Klin Intensivmed Notfmed 2019; 112:643-657. [PMID: 28936574] [DOI: 10.1007/s00063-017-0359-9]
Abstract
Modern trauma room management requires interdisciplinary teamwork and synchronous communication between a team of anaesthetists, surgeons and radiologists. As the length of stay in the trauma room influences the morbidity and mortality of a severely injured person, optimizing time is one of the main targets. With the direct involvement of modern imaging techniques, the injuries caused by trauma should be detected within a very short period of time in order to enable priority-orientated treatment. Radiology influences the structure and process quality, management and development of trauma room algorithms regarding the use of imaging techniques. In individual cases, interventional therapy methods can be added. Based on current data and on the Frankfurt experience, the current concepts of trauma diagnostics are presented.
Affiliation(s)
- T J Vogl
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland
- K Eichler
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland
- I Marzi
- Zentrum der Chirurgie, Klinik für Unfall-, Hand- und Wiederherstellungschirurgie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- S Wutzler
- Zentrum der Chirurgie, Klinik für Unfall-, Hand- und Wiederherstellungschirurgie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- K Zacharowski
- Klinik für Anästhesiologie, Intensivmedizin und Schmerztherapie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- C Frellessen
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland

39
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502] [PMCID: PMC6531364] [DOI: 10.1016/j.compbiomed.2019.02.017]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA

40
Deng D, Lian Z, Cui W, Liang H, Xiao L, Yao G. Function of low back muscle exercise: Preventive effect of refracture analysis of postoperative vertebral fractures. Orthopade 2019; 48:337-342. [PMID: 29704016] [DOI: 10.1007/s00132-018-3577-9]
Abstract
BACKGROUND Low back muscle exercise reportedly influences the risk of osteoporotic vertebral fractures, but the exact relationship between low back muscle exercise and the incidence of vertebral refractures remains unclear. OBJECTIVE To investigate the ability of exercises that strengthen the low back muscles to prevent vertebral refracture after surgery, through clinical analysis of a vertebral fracture risk-reduction program. METHODS In total, 152 patients with vertebral fractures who had undergone percutaneous vertebroplasty (PVP) and anti-osteoporosis treatment were randomly divided into observation and control groups. The observation group performed exercises to strengthen the back muscles after surgery. The clinical efficacy and incidence of refractures were compared between groups. RESULTS The observation group had reduced physical dysfunction and pain following surgery. After 3 months, vertebral body height had significantly decreased in the control group (P < 0.05) but not in the observation group (P > 0.05). In the observation and control groups, the incidence of vertebral refractures was 9.2% (7/76) and 17.1% (13/76), respectively (P < 0.05). CONCLUSION Postoperative exercise to strengthen the back muscles can improve physical function, relieve pain and promote the recovery of vertebral height; it can also help maintain bone density, thereby significantly reducing the risk of refracture. This approach is safe and effective and can help improve quality of life in patients with vertebral fractures.
Affiliation(s)
- DeLi Deng
- Southern Medical University, 1063 South Road of Jinxishatai, 510515, Guangzhou, China
- Department of Orthopedics, Panyu Central Hospital, 8 Fuyu East Road, Southbridge Street, 511400, Panyu, Guangzhou, China
- Zhen Lian
- Department of Orthopedics, The Second Affiliated Hospital, Shantou University Medical College, The Dong Xia Bei Road, 515041, Shantou, Guangdong, China
- Shantou University Medical College, 515000, Shantou, Guangdong, China
- WenFei Cui
- Department of Orthopedics, Panyu Central Hospital, 8 Fuyu East Road, Southbridge Street, 511400, Panyu, Guangzhou, China
- HeSheng Liang
- Department of Orthopedics, Panyu Central Hospital, 8 Fuyu East Road, Southbridge Street, 511400, Panyu, Guangzhou, China
- LiJun Xiao
- Department of Orthopedics, Panyu Central Hospital, 8 Fuyu East Road, Southbridge Street, 511400, Panyu, Guangzhou, China
- Guanfeng Yao
- Department of Orthopedics, The Second Affiliated Hospital, Shantou University Medical College, The Dong Xia Bei Road, 515041, Shantou, Guangdong, China

41
Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, Geis JR, Pandharipande PV, Brink JA, Dreyer KJ. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018; 288:318-328. [PMID: 29944078] [DOI: 10.1148/radiol.2018171820]
Abstract
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
Affiliation(s)
- Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E Samir, Oleg S Pianykh, Pari V Pandharipande, James A Brink, Keith J Dreyer
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Boston, Mass 02114
- J Raymond Geis
- Department of Radiology, University of Colorado School of Medicine, Aurora, Colo

42
Computer-aided detection in musculoskeletal projection radiography: A systematic review. Radiography (Lond) 2018; 24:165-174. [DOI: 10.1016/j.radi.2017.11.002]
43
Teomete U, Tulum G, Ergin T, Cuce F, Koksal M, Dandin O, Osman O. Automated computer-aided diagnosis of splenic lesions due to abdominal trauma. Hippokratia 2018; 22:80-85. [PMID: 31217680] [PMCID: PMC6548527]
Abstract
BACKGROUND Computer-aided detection in the setting of trauma presents unique challenges due to variations in shape and attenuation of the injured organs based on the timing and severity of the injury. We developed and validated an automated computer-aided diagnosis algorithm to detect splenic lesions such as laceration, contusion, subcapsular hematoma, perisplenic hematoma, and active extravasation using computed tomography (CT) images in patients sustaining blunt or penetrating abdominal trauma. METHODS We categorized the splenic pathologies into three groups: contusion/laceration, hematoma, and active extravasation. We first analyzed the spleen and perisplenic region by estimating the mean value and standard deviation of the spleen. We determined adaptive threshold values based on the histogram of the area and detected the lesions after morphological operations and volumetric comparisons. RESULTS The overall performance of the three computer-aided diagnosis (CAD) algorithms is an accuracy of 0.80, sensitivity of 0.95, specificity of 0.67, and a diagnostic odds ratio (DOR) of 40 with a 95 % confidence interval (CI): 14 to 117. The CAD of perisplenic hematoma had the highest diagnosis rates with an accuracy of 0.90, a sensitivity of 0.95, specificity of 0.80, and DOR of 76 with a 95 % CI: 13 to 442. CONCLUSIONS We developed a new algorithm to detect post-traumatic splenic lesions automatically and with high accuracy. Our method could potentially lead to the automated diagnosis of all traumatic abdominal pathologies. HIPPOKRATIA 2018, 22(2): 80-85.
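The diagnostic odds ratio (DOR) quoted in this abstract can be recovered from the reported sensitivity and specificity; a minimal sketch (the function name is illustrative, and the result matches the published DOR of 40 only up to rounding of the published figures):

```python
def diagnostic_odds_ratio(sens: float, spec: float) -> float:
    """DOR = (TP/FN) / (FP/TN), expressed via sensitivity and specificity."""
    return (sens / (1.0 - sens)) * (spec / (1.0 - spec))


# Overall performance reported for the three CAD algorithms:
dor = diagnostic_odds_ratio(0.95, 0.67)  # roughly 38.6 from the rounded inputs
```

The small gap between ~38.6 and the reported 40 reflects that the study computed the DOR from raw counts rather than from the two-decimal sensitivity and specificity values.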
Affiliation(s)
- U Teomete
- Department of Radiology, Sparrow Health System, Michigan, USA
- G Tulum
- Department of Electrical and Electronics Eng, Istanbul Arel University, Istanbul, Turkey
- T Ergin
- Department of Radiology, Gulhane Research and Training Hospital, Ankara, Turkey
- F Cuce
- Department of Radiology, Gulhane Research and Training Hospital, Ankara, Turkey
- M Koksal
- Department of Radiology, Ankara Numune Training and Research Hospital, Ankara, Turkey
- O Dandin
- Department of General Surgery, Gulhane Research and Training Hospital, Ankara, Turkey
- O Osman
- Department of Electrical and Electronics Eng, Istanbul Arel University, Istanbul, Turkey

44
Vogl TJ, Eichler K, Marzi I, Wutzler S, Zacharowski K, Frellessen C. [Imaging techniques in modern trauma diagnostics]. Unfallchirurg 2018; 120:417-431. [PMID: 28455618] [DOI: 10.1007/s00113-017-0352-z]
Abstract
Modern trauma room management requires interdisciplinary teamwork and synchronous communication between a team of anaesthetists, surgeons and radiologists. As the length of stay in the trauma room influences the morbidity and mortality of a severely injured person, optimizing time is one of the main targets. With the direct involvement of modern imaging techniques, the injuries caused by trauma should be detected within a very short period of time in order to enable priority-orientated treatment. Radiology influences the structure and process quality, management and development of trauma room algorithms regarding the use of imaging techniques. In individual cases, interventional therapy methods can be added. Based on current data and on the Frankfurt experience, the current concepts of trauma diagnostics are presented.
Affiliation(s)
- T J Vogl
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland
- K Eichler
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland
- I Marzi
- Zentrum der Chirurgie, Klinik für Unfall-, Hand- und Wiederherstellungschirurgie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- S Wutzler
- Zentrum der Chirurgie, Klinik für Unfall-, Hand- und Wiederherstellungschirurgie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- K Zacharowski
- Klinik für Anästhesiologie, Intensivmedizin und Schmerztherapie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Frankfurt, Deutschland
- C Frellessen
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsklinikum Frankfurt, Johann Wolfgang Goethe-Universität, Theodor-Stern-Kai 7, 60590, Frankfurt, Deutschland

45
[Imaging techniques in modern trauma room diagnostics]. Notf Rett Med 2017. [DOI: 10.1007/s10049-017-0376-5]
46

47
Reducing Errors From Cognitive Biases Through Quality Improvement Projects. J Am Coll Radiol 2017; 14:852-853. [PMID: 28143751] [DOI: 10.1016/j.jacr.2016.10.027]
48
Abstract
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessment of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors is one area where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews this progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation.
49
Yao J, Burns JE, Forsberg D, Seitel A, Rasoulian A, Abolmaesumi P, Hammernik K, Urschler M, Ibragimov B, Korez R, Vrtovec T, Castro-Mateos I, Pozo JM, Frangi AF, Summers RM, Li S. A multi-center milestone study of clinical vertebral CT segmentation. Comput Med Imaging Graph 2016; 49:16-28. [PMID: 26878138] [DOI: 10.1016/j.compmedimag.2015.12.006]
Abstract
A multi-center milestone study of clinical vertebra segmentation is presented in this paper. Vertebra segmentation is a fundamental step for spinal image analysis and intervention. The first half of the study was conducted in the spine segmentation challenge at the 2014 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop on Computational Spine Imaging (CSI 2014). The objective was to evaluate the performance of several state-of-the-art vertebra segmentation algorithms on computed tomography (CT) scans using ten training and five testing datasets, all healthy cases; the second half of the study was conducted after the challenge, where an additional five abnormal cases were used for testing to evaluate performance on abnormal cases. Dice coefficients and absolute surface distances were used as evaluation metrics. Segmentation of each vertebra as a single geometric unit, as well as separate segmentation of vertebra substructures, was evaluated. Five teams participated in the comparative study. The top performers in the study achieved Dice coefficients of 0.93 in the upper thoracic, 0.95 in the lower thoracic, and 0.96 in the lumbar spine for healthy cases, and 0.88 in the upper thoracic, 0.89 in the lower thoracic, and 0.92 in the lumbar spine for osteoporotic and fractured cases. The strengths and weaknesses of each method, as well as future suggestions for improvement, are discussed. This is the first multi-center comparative study of vertebra segmentation methods, which will provide an up-to-date performance milestone for the fast-growing field of spinal image analysis and intervention.
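For readers unfamiliar with the evaluation metric named in this abstract, the Dice coefficient between two binary segmentation masks can be sketched in a few lines. This is a generic illustration of the standard definition, 2|A∩B| / (|A| + |B|), not code from the study itself; the function name and toy masks are illustrative.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 3x3 "segmentations": 3 voxels each, 2 voxels overlap
a = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
b = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 0]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

A Dice score of 1.0 means perfect overlap and 0.0 means none, which is why the 0.93-0.96 values reported above indicate close agreement with the reference segmentations.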
Affiliation(s)
- Jianhua Yao
- Imaging Biomarkers and Computer-Aided Detection Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
| | - Joseph E Burns
- Department of Radiological Sciences, University of California, Irvine, CA 92868, USA
| | - Daniel Forsberg
- Sectra, Linköping, Sweden & Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| | - Alexander Seitel
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Abtin Rasoulian
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Kerstin Hammernik
- Institute for Computer Graphics and Vision, BioTechMed, Graz University of Technology, Graz, Austria
| | - Martin Urschler
- Ludwig Boltzmann Institute for Clinical Forensic Imaging, Graz, Austria
| | - Bulat Ibragimov
- University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
| | - Robert Korez
- University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
| | - Tomaž Vrtovec
- University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
| | - Isaac Castro-Mateos
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
| | - Jose M Pozo
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
| | - Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), Department of Mechanical Engineering, University of Sheffield, Sheffield, UK
| | - Ronald M Summers
- Imaging Biomarkers and Computer-Aided Detection Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
| | - Shuo Li
- GE Healthcare & University of Western Ontario, London, ON, Canada.
| |
|