1. Yunde A, Maki S, Furuya T, Okimatsu S, Inoue T, Miura M, Shiratani Y, Nagashima Y, Maruyama J, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Conversion of T2-Weighted Magnetic Resonance Images of Cervical Spine Trauma to Short T1 Inversion Recovery (STIR) Images by Generative Adversarial Network. Cureus 2024; 16:e60381. [PMID: 38883049; PMCID: PMC11178942; DOI: 10.7759/cureus.60381]
Abstract
Introduction: The short T1 inversion recovery (STIR) sequence is advantageous for visualizing ligamentous injuries, but it is missing in some cases. The purpose of this study was to generate synthetic STIR images from MRI T2-weighted images (T2WI) of patients with cervical spine trauma using a generative adversarial network (GAN). Methods: A total of 969 pairs of T2WI and STIR images were extracted from 79 patients with cervical spine trauma. The synthesis model was trained 100 times, and its performance was evaluated with five-fold cross-validation. Results: For quantitative validation, the structural similarity (SSIM) score was 0.519±0.1 and the peak signal-to-noise ratio (PSNR) was 19.37±1.9 dB. For qualitative validation, reading synthetic STIR images generated by the GAN alongside T2WI substantially improved sensitivity for detecting interspinous ligament injuries compared with assessments relying on T2WI alone. Conclusion: The GAN model can generate synthetic STIR images from T2WI of cervical spine trauma using image-to-image translation, and combining the synthetic STIR images with T2WI improves sensitivity for detecting interspinous ligament injuries compared with T2WI alone.
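The SSIM and PSNR values reported in this abstract are the standard quantitative metrics for image-synthesis studies. As a hedged illustration (not the authors' code), PSNR and a simplified whole-image SSIM can be computed with NumPy; production pipelines would normally use a windowed SSIM such as scikit-image's `structural_similarity`.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Simplified SSIM computed over the whole image (no sliding window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images the global SSIM is exactly 1 and the PSNR is infinite; a constant intensity offset of 10 on an 8-bit range yields a PSNR of about 28 dB.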
Affiliation(s)
- Atsushi Yunde, Satoshi Maki, Takeo Furuya, Sho Okimatsu, Takaki Inoue, Masataka Miura, Yuki Shiratani, Yuki Nagashima, Juntaro Maruyama, Yasuhiro Shiga, Kazuhide Inage, Yawara Eguchi, Sumihisa Orita, Seiji Ohtori: Department of Orthopaedic Surgery, Chiba University Graduate School of Medicine, Chiba, Japan
2. Gitto S, Serpi F, Albano D, Risoleo G, Fusco S, Messina C, Sconfienza LM. AI applications in musculoskeletal imaging: a narrative review. Eur Radiol Exp 2024; 8:22. [PMID: 38355767; PMCID: PMC10866817; DOI: 10.1186/s41747-024-00422-8]
Abstract
This narrative review focuses on clinical applications of artificial intelligence (AI) in musculoskeletal imaging. A range of musculoskeletal disorders is discussed using a clinically based approach, including trauma, bone age estimation, osteoarthritis, bone and soft-tissue tumors, and orthopedic implant-related pathology. Several AI algorithms have been applied to fracture detection and classification, which are potentially helpful tools for radiologists and clinicians. In bone age assessment, AI methods have been applied to assist radiologists by automating the workflow, thus reducing workload and inter-observer variability. AI may aid radiologists in identifying and grading abnormal findings of osteoarthritis as well as predicting the onset or progression of this disease. Either alone or combined with radiomics, AI algorithms may improve diagnosis and outcome prediction of bone and soft-tissue tumors. Finally, information regarding appropriate positioning of orthopedic implants and related complications may be obtained using AI algorithms. In conclusion, rather than replacing radiologists, AI should instead help them optimize workflow, augment diagnostic performance, and keep up with an ever-increasing workload.
Relevance statement: This narrative review provides an overview of AI applications in musculoskeletal imaging. As the number of AI technologies continues to increase, it will be crucial for radiologists to play a role in their selection and application as well as to fully understand their potential value in clinical practice.
Key points:
- AI may potentially assist musculoskeletal radiologists in several interpretative tasks.
- AI applications to trauma, age estimation, osteoarthritis, tumors, and orthopedic implants are discussed.
- AI should help radiologists optimize workflow and augment diagnostic performance.
Affiliation(s)
- Salvatore Gitto: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, 20157 Milan, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Francesca Serpi: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, 20157 Milan, Italy
- Domenico Albano: IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Dipartimento di Scienze Biomediche, Chirurgiche ed Odontoiatriche, Università degli Studi di Milano, Milan, Italy
- Giovanni Risoleo: Scuola di Specializzazione in Radiodiagnostica, Università degli Studi di Milano, Milan, Italy
- Stefano Fusco: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, 20157 Milan, Italy
- Carmelo Messina: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, 20157 Milan, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Luca Maria Sconfienza: Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Cristina Belgioioso 173, 20157 Milan, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
3. Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518; DOI: 10.1016/j.ejrad.2024.111313]
Abstract
Purpose: In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, generative adversarial networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain may not be fully addressed in broader reviews of GANs in general medical imaging; a focused review can highlight the tailored solutions and innovations that GANs bring to the field. Methods: An extensive literature search covering 2017 until July 2023 was conducted using the major search engines and identified studies that used GANs in spinal imaging. Results: The implementations include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time. The generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also used to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Finally, GANs can be applied to convert CT to MR images, with the potential to provide near-MR images from CT without an MRI scan. Conclusions: GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
Affiliation(s)
- Konstantinos Vrettos: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis: Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis: Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
4. Graf R, Schmitt J, Schlaeger S, Möller HK, Sideri-Lampretsa V, Sekuboyina A, Krieg SM, Wiestler B, Menze B, Rueckert D, Kirschke JS. Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation. Eur Radiol Exp 2023; 7:70. [PMID: 37957426; PMCID: PMC10643734; DOI: 10.1186/s41747-023-00385-2]
Abstract
Background: Automated segmentation of spinal magnetic resonance imaging (MRI) plays a vital role both scientifically and clinically. However, accurately delineating posterior spine structures is challenging. Methods: This retrospective study, approved by the ethical committee, involved translating T1-weighted and T2-weighted images into computed tomography (CT) images in a total of 263 pairs of CT/MR series. Landmark-based registration was performed to align image pairs. We compared two-dimensional (2D) paired methods (Pix2Pix, denoising diffusion implicit models (DDIM) in image mode and in noise mode) and unpaired methods (SynDiff, contrastive unpaired translation) for image-to-image translation, using peak signal-to-noise ratio as the quality measure. A publicly available segmentation network segmented the synthesized CT datasets, and Dice similarity coefficients (DSC) were evaluated on in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were extended to three-dimensional (3D) Pix2Pix and DDIM. Results: The 2D paired methods and SynDiff exhibited similar translation performance and DSC on paired data. DDIM image mode achieved the highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated similar DSC (0.77). For craniocaudal axis rotations, at least two landmarks per vertebra were required for registration. The 3D translation outperformed the 2D approach, resulting in improved DSC (0.80) and anatomically accurate segmentations with higher spatial resolution than that of the original MRI series. Conclusions: Registration with two landmarks per vertebra enabled paired image-to-image translation from MRI to CT and outperformed all unpaired approaches. The 3D techniques provided anatomically correct segmentations, avoiding underprediction of small structures like the spinous process. Relevance statement: This study addresses the unresolved issue of translating spinal MRI to CT, making CT-based tools usable for MRI data. It generates whole-spine segmentation, previously unavailable in MRI, a prerequisite for biomechanical modeling and feature extraction for clinical applications.
Key points:
- Unpaired image translation falls short in converting spine MRI to CT effectively.
- Paired translation needs registration with at least two landmarks per vertebra.
- Paired image-to-image translation enables segmentation transfer to other domains.
- 3D translation enables super-resolution from MRI to CT.
- 3D translation prevents underprediction of small structures.
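The Dice similarity coefficient (DSC) used to score the segmentations in this study has a compact definition, 2|A∩B| / (|A|+|B|). A minimal NumPy sketch (illustrative, not the study's pipeline):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

Two masks covering 8 pixels each with a 4-pixel overlap score 2·4/16 = 0.5; identical masks score 1.0.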
Affiliation(s)
- Robert Graf, Joachim Schmitt, Sarah Schlaeger, Hendrik Kristian Möller, Benedikt Wiestler, Jan Stefan Kirschke: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Vasiliki Sideri-Lampretsa: Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Anjany Sekuboyina: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Sandro Manuel Krieg: Department of Neurosurgery, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Bjoern Menze: Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Daniel Rueckert: Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Visual Information Processing, Imperial College London, London, UK
5. Zhou J, Guo H, Chen H. [Deep learning method for magnetic resonance imaging fluid-attenuated inversion recovery image synthesis]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2023; 40:903-911. [PMID: 37879919; PMCID: PMC10600433; DOI: 10.7507/1001-5515.202302012]
Abstract
Magnetic resonance imaging (MRI) can obtain multi-modal images with different contrasts, which provides rich information for clinical diagnosis. However, some contrast images are not scanned, or the quality of the acquired images cannot meet diagnostic requirements, because of difficulties with patient cooperation or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images. In recent years, deep learning has been widely used in the field of MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: a feature encoder first encodes the features of multiple unimodal images separately, a feature fusion module then fuses the features of the different modalities, and the network finally generates the target modal image. The similarity measure between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the k-space domain. After experimental validation and quantitative comparison, the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce the MRI scanning time of the patient and solve the clinical problem of FLAIR images that are missing or of insufficient quality for diagnosis.
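The combined loss described in this abstract pairs a spatial-domain term with a k-space (Fourier-domain) term. A hedged NumPy sketch of such a combination follows; the fixed weight here is illustrative, whereas the paper adapts the weighting dynamically during training, and its exact formulation may differ.

```python
import numpy as np

def combined_loss(pred: np.ndarray, target: np.ndarray, w_spatial: float = 0.5) -> float:
    """L1 loss in the image domain plus L1 loss between k-space magnitudes.

    w_spatial balances the two terms; the paper's dynamic weighting
    scheme is omitted in this sketch.
    """
    spatial = np.mean(np.abs(pred - target))
    # 2D FFT takes each image into k-space; compare magnitude spectra
    k_pred = np.fft.fft2(pred)
    k_target = np.fft.fft2(target)
    kspace = np.mean(np.abs(np.abs(k_pred) - np.abs(k_target)))
    return w_spatial * spatial + (1.0 - w_spatial) * kspace
```

Penalizing k-space magnitude differences alongside pixel differences encourages the network to match both local intensities and global frequency content of the target contrast.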
Affiliation(s)
- Jianing Zhou (周家柠): School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China
- Hongyu Guo (郭红宇): School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China; Neusoft Medical System Co. Ltd, Shenyang 110167, P. R. China
- Hong Chen (陈红): School of Electrical Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China
6. Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. Front Radiol 2023; 3:1242902. [PMID: 37609456; PMCID: PMC10440743; DOI: 10.3389/fradi.2023.1242902]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad: The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States; Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
7. Feng Y, Chandio BQ, Thomopoulos SI, Chattopadhyay T, Thompson PM. Variational Autoencoders for Generating Synthetic Tractography-Based Bundle Templates in a Low-Data Setting. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-6. [PMID: 38083771; DOI: 10.1109/embc40787.2023.10340009]
Abstract
White matter tracts generated from whole-brain tractography are often processed using automatic segmentation methods with standard atlases. Atlases are generated from hundreds of subjects, which makes them time-consuming to create and difficult to apply to all populations. In this study, we extended our prior work on using a deep generative model, a convolutional variational autoencoder, to map complex and data-intensive streamlines to a low-dimensional latent space given a limited sample of 50 subjects from the ADNI3 dataset, and generated synthetic population-specific bundle templates using kernel density estimation (KDE) on the streamline embeddings. We conducted a quantitative shape analysis by calculating bundle shape metrics and found that our bundle templates capture the shape distribution of the bundles better than the atlas data used in the original segmentation, which was derived from young healthy adults. We further demonstrated the use of our framework for direct bundle segmentation from whole-brain tractograms.
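The template-generation step described here (fit a KDE over latent streamline embeddings, draw samples, then decode them) can be sketched in NumPy. This is an illustrative stand-in, not the authors' code; the trained VAE encoder/decoder are assumed to exist outside this snippet, and sampling from a Gaussian KDE reduces to picking a data point at random and perturbing it with Gaussian noise.

```python
import numpy as np

def kde_sample(embeddings: np.ndarray, n_samples: int,
               bandwidth: float = 0.1, rng=None) -> np.ndarray:
    """Draw samples from a Gaussian KDE fitted to latent embeddings.

    embeddings: (n_points, latent_dim) array, e.g. streamline embeddings
    produced by a trained VAE encoder. Each KDE sample is a randomly
    chosen embedding plus Gaussian noise with std = bandwidth.
    """
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, embeddings.shape[0], size=n_samples)
    noise = rng.normal(0.0, bandwidth, size=(n_samples, embeddings.shape[1]))
    return embeddings[idx] + noise

# A synthetic bundle template would then be obtained by passing these
# latent samples through the trained VAE decoder (not shown here).
```

The bandwidth controls how far synthetic samples may stray from the observed embeddings, trading template diversity against fidelity to the input bundles.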
8. Zhu J, Chen X, Liu Y, Yang B, Wei R, Qin S, Yang Z, Hu Z, Dai J, Men K. Improving accelerated 3D imaging in MRI-guided radiotherapy for prostate cancer using a deep learning method. Radiat Oncol 2023; 18:108. [PMID: 37393282; DOI: 10.1186/s13014-023-02306-4]
Abstract
Purpose: The aim of this study was to improve image quality for high-speed MR imaging using a deep learning method for online adaptive radiotherapy of prostate cancer, and to evaluate its benefits for image registration. Methods: Sixty pairs of 1.5 T MR images acquired with an MR-linac were included. The data comprised low-speed, high-quality (LSHQ) and high-speed, low-quality (HSLQ) MR images. We proposed a CycleGAN, based on a data augmentation technique, to learn the mapping between the HSLQ and LSHQ images and then generate synthetic LSHQ (synLSHQ) images from the HSLQ images. Five-fold cross-validation was employed to test the CycleGAN model. The normalized mean absolute error (nMAE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and edge keeping index (EKI) were calculated to determine image quality. The Jacobian determinant value (JDV), Dice similarity coefficient (DSC), and mean distance to agreement (MDA) were used to analyze deformable registration. Results: Compared with the LSHQ images, the proposed synLSHQ achieved comparable image quality while reducing imaging time by approximately 66%. Compared with the HSLQ images, the synLSHQ had better image quality, with improvements of 57%, 3.4%, 26.9%, and 3.6% in nMAE, SSIM, PSNR, and EKI, respectively. Furthermore, the synLSHQ enhanced registration accuracy, with a superior mean JDV (6%) and preferable DSC and MDA values compared with the HSLQ. Conclusion: The proposed method can generate high-quality images from high-speed scanning sequences. As a result, it shows potential to shorten scan time while ensuring the accuracy of radiotherapy.
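CycleGAN, the model used in this study, is defined by its cycle-consistency loss: translating an image to the other domain and back should recover the input. A minimal NumPy illustration of that loss with placeholder generator callables (assumed names, not the study's code):

```python
import numpy as np

def cycle_consistency_loss(x_hslq, x_lshq, g_to_lshq, g_to_hslq):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should reconstruct x.

    g_to_lshq / g_to_hslq stand in for the two trained generators
    (high-speed -> low-speed direction and back); here they are
    arbitrary callables on NumPy arrays.
    """
    forward = np.mean(np.abs(g_to_hslq(g_to_lshq(x_hslq)) - x_hslq))
    backward = np.mean(np.abs(g_to_lshq(g_to_hslq(x_lshq)) - x_lshq))
    return forward + backward
```

Any pair of generators that are exact inverses of each other (e.g. doubling and halving) drives this loss to zero, which is what lets CycleGAN train without pixel-aligned image pairs.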
Affiliation(s)
- Ji Zhu, Xinyuan Chen, Bining Yang, Ran Wei, Shirui Qin, Zhuanbo Yang, Zhihui Hu, Jianrong Dai, Kuo Men: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yuxiang Liu: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China; School of Physics and Technology, Wuhan University, Wuhan 430072, China
9. Feng Y, Chandio BQ, Thomopoulos SI, Chattopadhyay T, Thompson PM. Variational Autoencoders for Generating Synthetic Tractography-Based Bundle Templates in a Low-Data Setting. bioRxiv [Preprint] 2023:2023.02.24.529954. [PMID: 36909490; PMCID: PMC10002615; DOI: 10.1101/2023.02.24.529954]
Abstract
White matter tracts generated from whole-brain tractography are often processed using automatic segmentation methods with standard atlases. Atlases are generated from hundreds of subjects, which makes them time-consuming to create and difficult to apply to all populations. In this study, we extended our prior work on using a deep generative model, a convolutional variational autoencoder, to map complex and data-intensive streamlines to a low-dimensional latent space given a limited sample of 50 subjects from the ADNI3 dataset, and generated synthetic population-specific bundle templates using kernel density estimation (KDE) on the streamline embeddings. We conducted a quantitative shape analysis by calculating bundle shape metrics and found that our bundle templates capture the shape distribution of the bundles better than the atlas data used in the original segmentation, which was derived from young healthy adults. We further demonstrated the use of our framework for direct bundle segmentation from whole-brain tractograms.
10. A practical guide to the development and deployment of deep learning models for the orthopedic surgeon: part II. Knee Surg Sports Traumatol Arthrosc 2023; 31:1635-1643. [PMID: 36773057; DOI: 10.1007/s00167-023-07338-7]
Abstract
Deep learning has the potential to be one of the most transformative technologies to impact orthopedic surgery. Substantial innovation in this area has occurred over the past 5 years, but clinically meaningful advancements remain limited by a disconnect between clinical and technical experts. That is, it is likely that few orthopedic surgeons possess both the clinical knowledge necessary to identify orthopedic problems, and the technical knowledge needed to implement deep learning-based solutions. To maximize the utilization of rapidly advancing technologies derived from deep learning models, orthopedic surgeons should understand the steps needed to design, organize, implement, and evaluate a deep learning project and its workflow. Equipping surgeons with this knowledge is the objective of this three-part editorial review. Part I described the processes involved in defining the problem, team building, data acquisition, curation, labeling, and establishing the ground truth. Building on that, this review (Part II) provides guidance on pre-processing and augmenting the data, making use of open-source libraries/toolkits, and selecting the required hardware to implement the pipeline. Special considerations regarding model training and evaluation unique to deep learning models relative to "shallow" machine learning models are also reviewed. Finally, guidance pertaining to the clinical deployment of deep learning models in the real world is provided. As in Part I, the focus is on applications of deep learning for computer vision and imaging.
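Among the pre-processing and data-augmentation steps this editorial reviews, simple label-preserving geometric transforms are the most common starting point for imaging data. A hedged NumPy sketch (illustrative only, not taken from the editorial):

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip and rotate a 2D image by multiples of 90 degrees.

    Geometric augmentations like these expand the effective training
    set without requiring any new annotation effort.
    """
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)  # horizontal flip half the time
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # 0, 90, 180, or 270 degrees
    return out
```

In practice, augmentation pipelines from libraries such as torchvision or MONAI would be used, but the principle is the same: every transform must leave the diagnostic label unchanged.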
11. Moassefi M, Faghani S, Khosravi B, Rouzrokh P, Erickson BJ. Artificial Intelligence in Radiology: Overview of Application Types, Design, and Challenges. Semin Roentgenol 2023; 58:170-177. [PMID: 37087137; DOI: 10.1053/j.ro.2023.01.005]
12. Charles YP, Lamas V, Ntilikina Y. Artificial intelligence and treatment algorithms in spine surgery. Orthop Traumatol Surg Res 2023; 109:103456. [PMID: 36302452; DOI: 10.1016/j.otsr.2022.103456]
Abstract
Artificial intelligence (AI) is a set of theories and techniques in which machines simulate human intelligence with complex computer programs. The various machine learning (ML) methods are a subtype of AI. They originate from computer science and use algorithms, established by analyzing a database, to accomplish certain tasks. Among these methods are decision trees and random forests, support vector machines, and artificial neural networks. Convolutional neural networks were inspired by the visual cortex; they process combinations of information and are used in image and voice recognition. Deep learning (DL) groups together a set of ML methods and is useful for modeling complex relationships with a high degree of abstraction by using multiple layers of artificial neurons. ML techniques have a growing role in spine surgery. The main applications are the segmentation of intraoperative images for surgical navigation or for robotic pedicle screw placement, and the interpretation of intervertebral disc images or full-spine radiographs, which can be automated using ML algorithms. ML techniques can also aid surgical decision-making in complex fields such as the preoperative evaluation of adult spinal deformity. ML algorithms "learn" from large clinical databases. They make it possible to establish the intraoperative risk level and to predict how postoperative functional scores will change over time as a function of the patient profile. These applications open a new path relative to standard statistical analyses, making it possible to explore more complex relationships with multiple indirect interactions. In the future, AI algorithms could have a greater role in clinical research, in evaluating clinical and surgical practices, and in conducting health economics analyses.
Affiliation(s)
- Yann Philippe Charles
- Service de chirurgie du rachis, hôpitaux universitaires de Strasbourg, université de Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
- Vincent Lamas
- Service de chirurgie du rachis, hôpitaux universitaires de Strasbourg, université de Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
- Yves Ntilikina
- Service de chirurgie du rachis, hôpitaux universitaires de Strasbourg, université de Strasbourg, 1, avenue Molière, 67200 Strasbourg, France
13
Abstract
This review summarizes the existing techniques and methods used to generate synthetic contrasts from magnetic resonance imaging data focusing on musculoskeletal magnetic resonance imaging. To that end, the different approaches were categorized into 3 different methodological groups: mathematical image transformation, physics-based, and data-driven approaches. Each group is characterized, followed by examples and a brief overview of their clinical validation, if present. Finally, we will discuss the advantages, disadvantages, and caveats of synthetic contrasts, focusing on the preservation of image information, validation, and aspects of the clinical workflow.
14
Baur D, Kroboth K, Heyde CE, Voelker A. Convolutional Neural Networks in Spinal Magnetic Resonance Imaging: A Systematic Review. World Neurosurg 2022; 166:60-70. PMID: 35863650. DOI: 10.1016/j.wneu.2022.07.041.
Abstract
OBJECTIVE Convolutional neural networks (CNNs) are increasingly used in the medical field, especially for image recognition in high-resolution, large-volume data sets. This study represents the current state of research on the application of CNNs to image segmentation and pathology detection in spine magnetic resonance imaging. METHODS For this systematic literature review, the authors performed a systematic initial search of the PubMed/Medline and Web of Science (Core Collection) databases for eligible investigations. The authors limited the search to observational studies. Outcome parameters were analyzed according to the inclusion criteria and assigned to 3 groups: 1) segmentation of anatomical structures, 2) segmentation and evaluation of pathologic structures, and 3) specific implementation of CNNs. RESULTS Twenty-four retrospectively designed articles met the inclusion criteria. Publication dates ranged from 2017 to 2021. In total, 14,065 patients with 113,110 analyzed images were included. Most authors trained their network with a training-to-testing ratio of 80/20, while all but 2 articles used 5- to 10-fold cross-validation. Nine articles compared their performance results with those of other neural networks and algorithms, and all 24 articles described outcomes as positive. CONCLUSIONS State-of-the-art CNNs can detect and segment specific anatomical landmarks and pathologies across a wide range, with skill comparable to that of radiologists and experienced clinicians. With rapidly evolving network architectures and growing medical image databases, the future is likely to bring further development and refinement of these capable networks. However, the aid of automated segmentation and classification by neural networks cannot and should not be expected to replace clinical experts.
Affiliation(s)
- David Baur
- Department of Orthopedics, Trauma and Plastic Surgery, University Hospital Leipzig, Leipzig, Germany
- Katharina Kroboth
- Department of Orthopedics, Trauma and Plastic Surgery, University Hospital Leipzig, Leipzig, Germany
- Christoph-Eckhard Heyde
- Department of Orthopedics, Trauma and Plastic Surgery, University Hospital Leipzig, Leipzig, Germany
- Anna Voelker
- Department of Orthopedics, Trauma and Plastic Surgery, University Hospital Leipzig, Leipzig, Germany
15
Cui Y, Zhu J, Duan Z, Liao Z, Wang S, Liu W. Artificial Intelligence in Spinal Imaging: Current Status and Future Directions. Int J Environ Res Public Health 2022; 19:11708. PMID: 36141981. PMCID: PMC9517575. DOI: 10.3390/ijerph191811708.
Abstract
Spinal maladies are among the most common causes of pain and disability worldwide. Imaging represents an important diagnostic procedure in spinal care. Imaging investigations can provide information and insights that are not visible through ordinary visual inspection. Multiscale in vivo interrogation has the potential to improve the assessment and monitoring of pathologies thanks to the convergence of imaging, artificial intelligence (AI), and radiomic techniques. AI is revolutionizing computer vision, autonomous driving, natural language processing, and speech recognition. These revolutionary technologies are already impacting radiology, diagnostics, and other fields, where automated solutions can increase precision and reproducibility. In the first section of this narrative review, we provide a brief explanation of the many approaches currently being developed, with a particular emphasis on those employed in spinal imaging studies. The previously documented uses of AI for challenges involving spinal imaging, including imaging appropriateness and protocoling, image acquisition and reconstruction, image presentation, image interpretation, and quantitative image analysis, are then detailed. Finally, the future applications of AI to imaging of the spine are discussed. AI has the potential to significantly affect every step in spinal imaging. AI can make images of the spine more useful to patients and doctors by improving image quality, imaging efficiency, and diagnostic accuracy.
Affiliation(s)
- Yangyang Cui
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Jia Zhu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Zhili Duan
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Zhenhua Liao
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Song Wang
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
- Weiqiang Liu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
16
Scalco E, Rizzo G, Mastropietro A. The stability of oncologic MRI radiomic features and the potential role of deep learning: a review. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac60b9.
Abstract
The use of MRI radiomic models for the diagnosis, prognosis, and treatment response prediction of tumors has been increasingly reported in the literature. However, their widespread adoption in clinics is hampered by issues related to feature stability. In the MRI radiomic workflow, the main factors that affect radiomic feature computation lie in the image acquisition and reconstruction phase, in the image pre-processing steps, and in the segmentation of the region of interest from which radiomic indices are extracted. Deep neural networks (DNNs), having shown their potential in the medical image processing and analysis field, are an attractive strategy to partially overcome the issues related to radiomic stability and mitigate their impact. In fact, DNN approaches can be prospectively integrated into the MRI radiomic workflow to improve image quality, obtain accurate and reproducible segmentations, and generate standardized images. In this review, DNN methods that can be included in the image processing steps of the radiomic workflow are described and discussed, in light of a detailed analysis of the literature on MRI radiomic reliability.
17
Danilov GV, Shifrin MA, Kotik KV, Ishankulov TA, Orlov YN, Kulikov AS, Potapov AA. Artificial Intelligence Technologies in Neurosurgery: a Systematic Literature Review Using Topic Modeling. Part II: Research Objectives and Perspectives. Sovrem Tekhnologii Med 2021; 12:111-118. PMID: 34796024. PMCID: PMC8596229. DOI: 10.17691/stm2020.12.6.12.
Abstract
The current increase in the number of publications on the use of artificial intelligence (AI) technologies in neurosurgery indicates a new trend in clinical neuroscience. The aim of the study was to conduct a systematic literature review to highlight the main directions and trends in the use of AI in neurosurgery.
Affiliation(s)
- G V Danilov
- Scientific Board Secretary and Head of the Laboratory of Biomedical Informatics and Artificial Intelligence, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- M A Shifrin
- Scientific Consultant, Laboratory of Biomedical Informatics and Artificial Intelligence, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- K V Kotik
- Physics Engineer, Laboratory of Biomedical Informatics and Artificial Intelligence, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- T A Ishankulov
- Engineer, Laboratory of Biomedical Informatics and Artificial Intelligence, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- Yu N Orlov
- Head of the Department of Computational Physics and Kinetic Equations, Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, 4 Miusskaya Sq., Moscow, 125047, Russia
- A S Kulikov
- Staff Anesthesiologist, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
- A A Potapov
- Professor, Academician of the Russian Academy of Sciences, Chief Scientific Supervisor, N.N. Burdenko National Medical Research Center for Neurosurgery, Ministry of Health of the Russian Federation, 16, 4 Tverskaya-Yamskaya St., Moscow, 125047, Russia
18
Nishiyama D, Iwasaki H, Taniguchi T, Fukui D, Yamanaka M, Harada T, Yamada H. Deep generative models for automated muscle segmentation in computed tomography scanning. PLoS One 2021; 16:e0257371. PMID: 34506602. PMCID: PMC8432798. DOI: 10.1371/journal.pone.0257371.
Abstract
Accurate gluteus medius (GMd) volume evaluation may aid in the analysis of muscular atrophy states and help gain an improved understanding of patient recovery via rehabilitation. However, the segmentation of muscle regions in GMd images for cubic muscle volume assessment is time-consuming and labor-intensive. This study automated GMd-region segmentation from the computed tomography (CT) images of patients diagnosed with hip osteoarthritis using deep learning and evaluated the segmentation accuracy. To this end, 5250 augmented pairs of training data were obtained from five participants, and a conditional generative adversarial network was used to identify the relationships between the image pairs. Using the preserved test datasets, the results of automatic segmentation with the trained deep learning model were compared to those of manual segmentation in terms of the dice similarity coefficient (DSC), volume similarity (VS), and shape similarity (MS). As observed, the average DSC values for automatic and manual segmentations were 0.748 and 0.812, respectively, with a significant difference (p < 0.0001); the average VS values were 0.247 and 0.203, respectively, with no significant difference (p = 0.069); and the average MS values were 1.394 and 1.156, respectively, with no significant difference (p = 0.308). The GMd volumes obtained by automatic and manual segmentation were 246.2 cm3 and 282.9 cm3, respectively. The noninferiority of the DSC obtained by automatic segmentation was verified against that obtained by manual segmentation. Accordingly, the proposed GAN-based automatic GMd-segmentation technique is confirmed to be noninferior to manual segmentation. Therefore, the findings of this research confirm that the proposed method not only reduces time and effort but also facilitates accurate assessment of the cubic muscle volume.
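The overlap metrics this study reports for automatic versus manual masks can be computed directly from binary segmentations. A minimal sketch, assuming one common formulation of the Dice similarity coefficient and volume similarity (the study's exact definitions may differ); the toy masks and values are illustrative:

```python
# Sketch (assumed formulas, not the authors' code): Dice similarity
# coefficient (DSC) and a volume-similarity score for boolean masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """One common form: VS = 1 - ||A| - |B|| / (|A| + |B|)."""
    va, vb = int(a.astype(bool).sum()), int(b.astype(bool).sum())
    return 1.0 - abs(va - vb) / (va + vb) if (va + vb) else 1.0

# Toy 3D masks: a manual segmentation and an automatic one missing one voxel.
manual = np.zeros((4, 4, 4), dtype=bool)
manual[1:3, 1:3, 1:3] = True   # 8 voxels
auto = manual.copy()
auto[2, 2, 2] = False          # 7 voxels

dsc = dice(auto, manual)              # 2*7 / (7+8) = 14/15
vs = volume_similarity(auto, manual)  # 1 - 1/15 = 14/15
```

Shape similarity (reported as MS in the abstract) is typically a surface-based measure and needs mesh or distance-map machinery beyond this sketch.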
Affiliation(s)
- Daisuke Nishiyama
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Hiroshi Iwasaki
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Takaya Taniguchi
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Daisuke Fukui
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Manabu Yamanaka
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Teiji Harada
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
- Hiroshi Yamada
- Department of Orthopedic Surgery, Wakayama Medical University, Wakayama, Japan
19
Stephens ME, O'Neal CM, Westrup AM, Muhammad FY, McKenzie DM, Fagg AH, Smith ZA. Utility of machine learning algorithms in degenerative cervical and lumbar spine disease: a systematic review. Neurosurg Rev 2021; 45:965-978. PMID: 34490539. DOI: 10.1007/s10143-021-01624-z.
Abstract
Machine learning is a rapidly evolving field that offers physicians an innovative and comprehensive mechanism to examine various aspects of patient data. Cervical and lumbar degenerative spine disorders are common age-related disease processes whose management can benefit from machine learning through careful patient selection and intervention. The aim of this study is to examine the current applications of machine learning in cervical and lumbar degenerative spine disease. A systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A search of PubMed, Embase, Medline, and Cochrane was conducted through May 31st, 2020, using the following terms: "artificial intelligence" OR "machine learning" AND "neurosurgery" AND "spine." Studies were included if original research on machine learning was utilized in patient care for degenerative spine disease, including radiographic machine learning applications. Studies focusing on robotic applications in neurosurgery, navigation, or stereotactic radiosurgery were excluded. The literature search identified 296 papers, of which 35 articles met inclusion criteria. There were nine studies involving cervical degenerative spine disease and 26 studies on lumbar degenerative spine disease. The majority of studies for both the cervical and lumbar spine utilized machine learning for the prediction of postoperative outcomes, with 5 (55.6%) and 15 (61.5%) studies, respectively. Machine learning applications focusing on the degenerative lumbar spine greatly outnumber the current volume of cervical spine studies. The current research in the lumbar spine also demonstrates more advanced clinical applications of radiographic, diagnostic, and predictive machine learning models.
Affiliation(s)
- Mark E Stephens
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
- Christen M O'Neal
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
- Alison M Westrup
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
- Fauziyya Y Muhammad
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
- Daniel M McKenzie
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
- Andrew H Fagg
- School of Computer Science, University of Oklahoma, Norman, OK, USA
- Zachary A Smith
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, 1000 N Lincoln Blvd, Suite 4000, Oklahoma City, OK, 73104, USA
20
Sveinsson B, Chaudhari AS, Zhu B, Koonjoo N, Torriani M, Gold GE, Rosen MS. Synthesizing Quantitative T2 Maps in Right Lateral Knee Femoral Condyles from Multicontrast Anatomic Data with a Conditional Generative Adversarial Network. Radiol Artif Intell 2021; 3:e200122. PMID: 34617020. PMCID: PMC8489449. DOI: 10.1148/ryai.2021200122.
Abstract
PURPOSE To develop a proof-of-concept convolutional neural network (CNN) to synthesize T2 maps in right lateral femoral condyle articular cartilage from anatomic MR images by using a conditional generative adversarial network (cGAN). MATERIALS AND METHODS In this retrospective study, anatomic images (from turbo spin-echo and double-echo in steady-state scans) of the right knee of 4621 patients included in the 2004-2006 Osteoarthritis Initiative were used as input to a cGAN-based CNN, and a predicted CNN T2 map was generated as output. These patients included men and women of all ethnicities, aged 45-79 years, with or at high risk for knee osteoarthritis incidence or progression, who were recruited at four separate centers in the United States. These data were split into 3703 (80%) for training, 462 (10%) for validation, and 456 (10%) for testing. Linear regression analysis was performed between the multiecho spin-echo (MESE) and CNN T2 in the test dataset. A more detailed analysis was performed in 30 randomly selected patients by means of evaluation by two musculoskeletal radiologists and quantification of cartilage subregions. Radiologist assessments were compared by using two-sided t tests. RESULTS The readers were moderately accurate in distinguishing CNN T2 from MESE T2, with one reader having random-chance categorization. CNN T2 values were correlated with the MESE values in the subregions of 30 patients and in the bulk analysis of all patients, with best-fit line slopes between 0.55 and 0.83. CONCLUSION With use of a neural network-based cGAN approach, it is feasible to synthesize T2 maps in femoral cartilage from anatomic MRI sequences, giving good agreement with MESE scans. See also the commentary by Yi and Fritz in this issue. Keywords: Cartilage Imaging, Knee, Experimental Investigations, Quantification, Vision, Application Domain, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. © RSNA, 2021.
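The slope-based agreement check described above can be sketched with synthetic numbers (not study data); the T2 range, noise level, and true slope below are assumptions chosen only to illustrate the fitting step:

```python
# Sketch: linear regression between reference MESE T2 values and
# network-synthesized T2 values, yielding a best-fit slope and correlation.
# All numbers are synthetic, not from the study.
import numpy as np

rng = np.random.default_rng(0)
mese_t2 = rng.uniform(20, 80, size=200)              # reference T2 values (ms)
cnn_t2 = 0.7 * mese_t2 + 5 + rng.normal(0, 3, 200)   # synthesized values, true slope 0.7

# np.polyfit returns coefficients highest degree first: (slope, intercept).
slope, intercept = np.polyfit(mese_t2, cnn_t2, deg=1)
r = np.corrcoef(mese_t2, cnn_t2)[0, 1]               # Pearson correlation
```

A slope below 1 with high correlation, as in the study's 0.55-0.83 range, indicates systematic compression of the synthesized values rather than random disagreement.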
Affiliation(s)
- Bragi Sveinsson, Akshay S. Chaudhari, Bo Zhu, Neha Koonjoo, Martin Torriani, Garry E. Gold, Matthew S. Rosen
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Suite 2301, Boston, MA 02129 (B.S., B.Z., N.K., M.S.R.); Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Mass (M.T.); Department of Radiology, Stanford University, Stanford, Calif (A.S.C., G.E.G.); and Department of Physics, Harvard University, Cambridge, Mass (M.S.R.)
21
Generating Virtual Short Tau Inversion Recovery (STIR) Images from T1- and T2-Weighted Images Using a Conditional Generative Adversarial Network in Spine Imaging. Diagnostics (Basel) 2021; 11:1542. PMID: 34573884. PMCID: PMC8467788. DOI: 10.3390/diagnostics11091542.
Abstract
Short tau inversion recovery (STIR) sequences are frequently used in magnetic resonance imaging (MRI) of the spine. However, STIR sequences require a significant amount of scanning time. The purpose of the present study was to generate virtual STIR (vSTIR) images from non-contrast, non-fat-suppressed T1- and T2-weighted images using a conditional generative adversarial network (cGAN). The training dataset comprised 612 studies from 514 patients, and the validation dataset comprised 141 studies from 133 patients. For validation, 100 original STIR and respective vSTIR series were presented to six senior radiologists (blinded to the STIR type) in independent A/B-testing sessions. Additionally, for 141 real or vSTIR sequences, the testers were required to produce a structured report of 15 different findings. In the A/B-test, most testers could not reliably identify the real STIR (mean error of testers 1-6: 41%; 44%; 58%; 48%; 39%; 45%). In the evaluation of the structured reports, vSTIR was equivalent to real STIR in 13 of 15 categories. For the number of STIR-hyperintense vertebral bodies (p = 0.08) and the diagnosis of bone metastases (p = 0.055), equivalence narrowly missed statistical significance. By virtually generating STIR images of diagnostic quality from T1- and T2-weighted images using a cGAN, one can shorten examination times and increase throughput.
22
Shin Y, Yang J, Lee YH. Deep Generative Adversarial Networks: Applications in Musculoskeletal Imaging. Radiol Artif Intell 2021; 3:e200157. PMID: 34136816. PMCID: PMC8204145. DOI: 10.1148/ryai.2021200157.
Abstract
In recent years, deep learning techniques have been applied in musculoskeletal radiology to increase the diagnostic potential of acquired images. Generative adversarial networks (GANs), which are deep neural networks that can generate or transform images, have the potential to aid in faster imaging by generating images with a high level of realism across multiple contrast and modalities from existing imaging protocols. This review introduces the key architectures of GANs as well as their technical background and challenges. Key research trends are highlighted, including: (a) reconstruction of high-resolution MRI; (b) image synthesis with different modalities and contrasts; (c) image enhancement that efficiently preserves high-frequency information suitable for human interpretation; (d) pixel-level segmentation with annotation sharing between domains; and (e) applications to different musculoskeletal anatomies. In addition, an overview is provided of the key issues wherein clinical applicability is challenging to capture with conventional performance metrics and expert evaluation. When clinically validated, GANs have the potential to improve musculoskeletal imaging. Keywords: Adults and Pediatrics, Computer Aided Diagnosis (CAD), Computer Applications-General (Informatics), Informatics, Skeletal-Appendicular, Skeletal-Axial, Soft Tissues/Skin © RSNA, 2021.
Affiliation(s)
- YiRang Shin, Jaemoon Yang, Young Han Lee
- From the Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, 250 Seongsanno, Seodaemun-gu, Seoul 220-701, Republic of Korea (Y.S., J.Y., Y.H.L.); Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Republic of Korea (J.Y.); and Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Republic of Korea (J.Y.)
23
Artificial intelligence applications in medical imaging: A review of the medical physics research in Italy. Phys Med 2021; 83:221-241. DOI: 10.1016/j.ejmp.2021.04.010.
24
Chianca V, Cuocolo R, Gitto S, Albano D, Merli I, Badalyan J, Cortese MC, Messina C, Luzzati A, Parafioriti A, Galbusera F, Brunetti A, Sconfienza LM. Radiomic Machine Learning Classifiers in Spine Bone Tumors: A Multi-Software, Multi-Scanner Study. Eur J Radiol 2021; 137:109586. PMID: 33610852. DOI: 10.1016/j.ejrad.2021.109586.
Abstract
PURPOSE Spinal lesion differential diagnosis remains challenging, even with MRI. Radiomics and machine learning (ML) have proven useful even in the absence of a standardized data mining pipeline. We aimed to assess ML diagnostic performance in spinal lesion differential diagnosis, employing radiomic data extracted by different software. METHODS Patients undergoing MRI for a vertebral lesion were retrospectively analyzed (n = 146, 67 males, 79 females; mean age 63 ± 16 years, range 8-89 years) and constituted the training (n = 100) and internal test (n = 46) cohorts. Part of the latter had additional prior exams, which constituted a multi-scanner, external test cohort (n = 35). Lesions were labeled as benign or malignant (2-label classification), and as benign, primary malignant, or metastasis (3-label classification) for the classification analyses. Features extracted via the 3D Slicer heterogeneityCAD module (hCAD) and PyRadiomics were independently used to compare different combinations of feature selection methods and ML classifiers (n = 19). RESULTS In total, 90 and 1548 features were extracted by hCAD and PyRadiomics, respectively. The best feature selection method-ML algorithm combination was selected by 10 iterations of 10-fold cross-validation in the training data. For the 2-label classification, ML obtained 94% accuracy in the internal test cohort using hCAD data, and 86% in the external one. For the 3-label classification, PyRadiomics data allowed for 80% and 69% accuracy in the internal and external test sets, respectively. CONCLUSIONS MRI radiomics combined with ML may be useful in spinal lesion assessment. More robust pre-processing led to better consistency despite scanner and protocol heterogeneity.
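Pairing a feature selection method with an ML classifier and scoring the pair by repeated cross-validation, as done here on the training data, can be sketched as follows. The data, selector, classifier, and all parameters below are assumptions for illustration, not the study's pipeline:

```python
# Sketch: one feature-selection + classifier combination evaluated by
# 10 iterations of 10-fold cross-validation on hypothetical radiomic data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 90))   # 100 lesions x 90 radiomic features (synthetic)
# Synthetic benign/malignant labels driven by two informative features.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 100) > 0).astype(int)

# Selection is fitted inside each fold via the pipeline, avoiding leakage.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),    # keep 10 most informative features
    ("clf", LogisticRegression(max_iter=1000)),
])

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv).mean()  # mean accuracy over 100 folds
```

In the study, many such selector-classifier pairs were compared this way and only the best combination was carried to the internal and external test cohorts.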
Affiliation(s)
- Vito Chianca
- Clinica di Radiologia EOC, Istituto di Imaging della Svizzera Italiana (IIMSI), Lugano, Switzerland; Ospedale Evangelico Betania, Napoli, Italy
- Renato Cuocolo
- Dipartimento di Scienze Biomediche Avanzate, Università degli Studi di Napoli "Federico II", Napoli, Italy; Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Università degli Studi di Napoli "Federico II", Napoli, Italy
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Milano, Italy
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Milano, Italy; Sezione di Scienze Radiologiche, Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Università degli Studi di Palermo, Italy
- Ilaria Merli
- UOC Radiodiagnostica, Presidio San Carlo Borromeo, ASST Santi Paolo e Carlo, Milano, Italy
- Julietta Badalyan
- International Medical School, University of Milan and Russian National Research Medical University, Milano, Italy
- Maria Cristina Cortese
- Istituto di Radiologia, Fondazione Policlinico A. Gemelli IRCCS - Università Cattolica Sacro Cuore, Roma, Italy
- Carmelo Messina
- IRCCS Istituto Ortopedico Galeazzi, Milano, Italy; Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Milano, Italy
- Arturo Brunetti
- Dipartimento di Scienze Biomediche Avanzate, Università degli Studi di Napoli "Federico II", Napoli, Italy
- Luca Maria Sconfienza
- IRCCS Istituto Ortopedico Galeazzi, Milano, Italy; Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Milano, Italy
|
25
|
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 100] [Impact Index Per Article: 33.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based methods for inter- and intra-modality image synthesis, listing and highlighting the proposed methods, study designs, and reported performances, together with related clinical applications, for representative studies. We then summarize and discuss the challenges across the reviewed studies.
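Synthesis studies like those reviewed here are typically validated quantitatively with structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) between synthetic and ground-truth images. The following is a numpy-only sketch of both metrics, using a simplified single-window SSIM (production code would use a sliding-window implementation such as scikit-image's); the random "images" below are placeholders, not data from any reviewed study.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Global (single-window) SSIM with the standard stabilizing constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
real = rng.random((64, 64))  # ground-truth slice, intensities scaled to [0, 1]
# Simulated synthesis output: the ground truth plus small Gaussian error
synthetic = np.clip(real + rng.normal(scale=0.05, size=real.shape), 0.0, 1.0)

print(f"SSIM={global_ssim(real, synthetic):.3f}  PSNR={psnr(real, synthetic):.2f} dB")
```

An identical image pair yields SSIM of exactly 1, and smaller synthesis error drives PSNR up, which is why both are reported together: SSIM captures structural agreement while PSNR captures pixelwise error magnitude.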
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
26
|
Miyoshi T, Higaki A, Kawakami H, Yamaguchi O. Automated interpretation of the coronary angioscopy with deep convolutional neural networks. Open Heart 2020; 7:e001177. [PMID: 32404485 PMCID: PMC7228653 DOI: 10.1136/openhrt-2019-001177] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Revised: 02/28/2020] [Accepted: 04/16/2020] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND Coronary angioscopy (CAS) is a useful modality for assessing atherosclerotic changes, but interpretation of the images requires expert knowledge. Deep convolutional neural networks (DCNN) can be used for diagnostic prediction and image synthesis. METHODS 107 images from 47 patients, who underwent CAS in our hospital between 2014 and 2017, and 864 images, selected from 142 MEDLINE-indexed articles published between 2000 and 2019, were analysed. First, we developed a prediction model for the angioscopic findings. Next, we built a generative adversarial network (GAN) model to simulate CAS images. Finally, we attempted to control the output images according to the angioscopic findings with a conditional GAN architecture. RESULTS For both the yellow colour (YC) grade and the neointimal coverage (NC) grade, we observed strong correlations between the true grades and the predicted values (YC grade, average r=0.80±0.02, p<0.001; NC grade, average r=0.73±0.02, p<0.001). The binary classification model for red thrombus yielded an F1-score of 0.71±0.03, and the area under the receiver operating characteristic curve was 0.91±0.02. The standard GAN model could generate realistic CAS images (average Inception score=3.57±0.06). GAN-based data augmentation improved the performance of the prediction models. In the conditional GAN model, there were significant correlations between the given values and the expert's diagnosis for the YC grade but not for the NC grade. CONCLUSION DCNN is useful in both predictive and generative modelling and can help develop a diagnostic support system for CAS.
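The binary red-thrombus classifier above is scored with an F1-score and the area under the ROC curve, a standard pairing for imbalanced medical classification. A minimal scikit-learn sketch of that evaluation follows; the labels and predicted probabilities are synthetic stand-ins, not the authors' CAS data.

```python
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)  # thrombus present (1) / absent (0)

# Simulated DCNN output probabilities, loosely correlated with the true labels
y_prob = np.clip(0.7 * y_true + 0.3 * rng.random(200), 0.0, 1.0)

# F1 needs hard labels, so threshold the probabilities; AUC uses the raw scores
y_pred = (y_prob >= 0.5).astype(int)

f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_prob)
print(f"F1={f1:.2f}  AUC={auc:.2f}")
```

Reporting both matters: F1 depends on the chosen decision threshold, while AUC summarizes ranking quality across all thresholds, which is why a model can show a modest F1 (0.71 here) alongside a high AUC (0.91).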
Affiliation(s)
- Toru Miyoshi
- Department of Cardiology, Ehime Prefectural Imabari Hospital, Imabari, Japan
- Department of Cardiology, Pulmonology, Hypertension and Nephrology, Ehime University Graduate School of Medicine, Toon, Japan
- Akinori Higaki
- Department of Cardiology, Pulmonology, Hypertension and Nephrology, Ehime University Graduate School of Medicine, Toon, Japan
- Hypertension and Vascular Research Unit, Lady Davis Institute for Medical Research, Montreal, Quebec, Canada
- Hideo Kawakami
- Department of Cardiology, Ehime Prefectural Imabari Hospital, Imabari, Japan
- Osamu Yamaguchi
- Department of Cardiology, Pulmonology, Hypertension and Nephrology, Ehime University Graduate School of Medicine, Toon, Japan
|
27
|
Chea P, Mandell JC. Current applications and future directions of deep learning in musculoskeletal radiology. Skeletal Radiol 2020; 49:183-197. [PMID: 31377836 DOI: 10.1007/s00256-019-03284-z] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/05/2019] [Revised: 07/11/2019] [Accepted: 07/15/2019] [Indexed: 02/02/2023]
Abstract
Deep learning with convolutional neural networks (CNN) is a rapidly advancing subset of artificial intelligence that is ideally suited to solving image-based problems. There are an increasing number of musculoskeletal applications of deep learning, which can be conceptually divided into the categories of lesion detection, classification, segmentation, and non-interpretive tasks. Numerous examples of deep learning achieving expert-level performance in specific tasks in all four categories have been demonstrated in the past few years, although comprehensive interpretation of imaging examinations has not yet been achieved. It is important for the practicing musculoskeletal radiologist to understand the current scope of deep learning as it relates to musculoskeletal radiology. Interest in deep learning from researchers, radiology leadership, and industry continues to increase, and it is likely that these developments will impact the daily practice of musculoskeletal radiology in the near future.
Affiliation(s)
- Pauley Chea
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jacob C Mandell
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
|
28
|
Abstract
Radiomics deals with the statistical analysis of radiologic image data. In this article, radiomics is introduced and some of its applications are presented. In particular, an example is used to demonstrate how pathology and radiology can work together for better diagnoses. There is no denying that artificial intelligence will find its place in radiology (and pathology), and deep learning in particular will see increasing application. Its impact on clinical routine, however, is longer term and probably gradual: AI will initially be used only in the form of specialized tools supporting everyday clinical practice, until methods and programs improve to the point that it can also take on more general diagnoses. Even then, it will not replace pathologists and radiologists in the long term, but rather turn them into "information specialists" who interpret the results obtained and integrate them into the clinical context.
Affiliation(s)
- A Demircioğlu
- Institut für Diagnostische und Interventionelle Radiologie und Neuroradiologie, Universitätsklinikum Essen, Hufelandstr. 55, 45147, Essen, Germany
|
29
|
Harada GK, Siyaji ZK, Younis S, Louie PK, Samartzis D, An HS. Imaging in Spine Surgery: Current Concepts and Future Directions. Spine Surg Relat Res 2019; 4:99-110. [PMID: 32405554 PMCID: PMC7217684 DOI: 10.22603/ssrr.2020-0011] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Accepted: 10/03/2019] [Indexed: 12/23/2022] Open
Abstract
OBJECTIVE To review and highlight the historical and recent advances of imaging in spine surgery and to discuss current applications and future directions. METHODS A PubMed review of the current literature was performed on all relevant articles that examined historical and recent imaging techniques used in spine surgery. Studies were examined for their thoroughness in describing the various modalities and their applications in current and future management. RESULTS We reviewed 97 articles that discussed past, present, and future applications for imaging in spine surgery. Although most historical approaches relied heavily upon basic radiography, more recent advances have begun to expand upon advanced modalities, including the integration of more sophisticated equipment and artificial intelligence. CONCLUSIONS Since the days of conventional radiography, various modalities have emerged and become integral components of the spinal surgeon's diagnostic armamentarium. As such, it behooves the practitioner to remain informed on current trends and potential developments in spinal imaging, as rapid adoption and interpretation of new techniques may make significant differences in patient management and outcomes. Future directions will likely become increasingly sophisticated as the implementation of machine learning and artificial intelligence becomes more commonplace in clinical practice.
Affiliation(s)
- Garrett K Harada
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
- Zakariah K Siyaji
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
- Sadaf Younis
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
- Philip K Louie
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
- Dino Samartzis
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
- Howard S An
- Department of Orthopaedic Surgery, Division of Spine Surgery, Rush University Medical Center, Chicago, USA
- International Spine Research and Innovation Initiative, Rush University Medical Center, Chicago, USA
|