1. Salari E, Wang J, Wynne JF, Chang C, Wu Y, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: A review. J Appl Clin Med Phys 2024; 25:e14500. [PMID: 39194360] [PMCID: PMC11540048] [DOI: 10.1002/acm2.14500]
Abstract
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Artificial intelligence (AI) has recently demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking for radiotherapy and provides a literature summary on the topic. We will also discuss the limitations of these AI-based studies and propose potential improvements.
Affiliation(s)
- Elahheh Salari: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jing Wang: Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Chih-Wei Chang: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yizhou Wu: School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
2. Regmi M, Liu W, Liu S, Dai Y, Xiong Y, Yang J, Yang C. The evolution and integration of technology in spinal neurosurgery: A scoping review. J Clin Neurosci 2024; 129:110853. [PMID: 39348790] [DOI: 10.1016/j.jocn.2024.110853]
Abstract
Spinal disorders pose a significant global health challenge, affecting nearly 5% of the population and incurring substantial socioeconomic costs. Over time, spinal neurosurgery has evolved from basic 19th-century techniques to today's minimally invasive procedures. The recent integration of technologies such as robotic assistance and advanced imaging has not only improved precision but also reshaped treatment paradigms. This review explores key innovations in imaging, biomaterials, and emerging fields such as AI, examining how they address long-standing challenges in spinal care, including enhancing surgical accuracy and promoting tissue regeneration. Are we at the threshold of a new era in healthcare technology, or are these innovations merely enhancements that may not fundamentally advance clinical care? We aim to answer this question by offering a concise introduction to each technology and discussing in depth its status and challenges, providing readers with a clearer understanding of its actual potential to revolutionize surgical practices.
Affiliation(s)
- Moksada Regmi: State Key Laboratory of Vascular Homeostasis and Remodeling, Department of Neurosurgery, Peking University Third Hospital, Peking University, Beijing 100191, China; Center for Precision Neurosurgery and Oncology of Peking University Health Science Center, Peking University, Beijing 100191, China; Peking University Health Science Center, Beijing 100191, China; Henan Academy of Innovations in Medical Science (AIMS), Zhengzhou 450003, China
- Weihai Liu, Shikun Liu, Yuwei Dai, Ying Xiong, Jun Yang: State Key Laboratory of Vascular Homeostasis and Remodeling, Department of Neurosurgery, Peking University Third Hospital, Peking University, Beijing 100191, China; Center for Precision Neurosurgery and Oncology of Peking University Health Science Center, Peking University, Beijing 100191, China
- Chenlong Yang: State Key Laboratory of Vascular Homeostasis and Remodeling, Department of Neurosurgery, Peking University Third Hospital, Peking University, Beijing 100191, China; Center for Precision Neurosurgery and Oncology of Peking University Health Science Center, Peking University, Beijing 100191, China; Henan Academy of Innovations in Medical Science (AIMS), Zhengzhou 450003, China
3. Hewson EA, Dillon O, Poulsen PR, Booth JT, Keall PJ. Six-degrees-of-freedom pelvic bone monitoring on 2D kV intrafraction images to enable multi-target tracking for locally advanced prostate cancer. Med Phys 2024. [PMID: 39441205] [DOI: 10.1002/mp.17465]
Abstract
BACKGROUND Patients with locally advanced prostate cancer require the prostate and pelvic lymph nodes to be irradiated simultaneously during radiation therapy treatment. However, relative motion between treatment targets decreases dosimetric conformity. Current treatment methods mitigate this error by having large treatment margins and often prioritize the prostate at patient setup at the cost of lymph node coverage. PURPOSE Treatment accuracy can be improved through real-time multi-target adaptation which requires simultaneous motion monitoring of both the prostate and lymph node targets. This study developed and evaluated an intrafraction pelvic bone motion monitoring method as a surrogate for pelvic lymph node displacement to be combined with prostate motion monitoring to enable multi-target six-degrees-of-freedom (6DoF) tracking using 2D kV projections acquired during treatment. MATERIAL AND METHODS A method to monitor pelvic bone translation and rotation was developed and retrospectively applied to images from 20 patients treated in the TROG 15.01 Stereotactic Prostate Ablative Radiotherapy with Kilovoltage Intrafraction Monitoring (KIM) trial. The pelvic motion monitoring method performed template matching to calculate the 6DoF position of the pelvis from 2D kV images. The method first generated a library of digitally reconstructed radiographs (DRRs) for a range of imaging angles and pelvic rotations. The normalized 2D cross-correlations were then calculated for each incoming kV image and a subset of DRRs and the DRR with the maximum correlation coefficient was used to estimate the pelvis translation and rotation. Translation of the pelvis in the unresolved direction was calculated using a 3D Gaussian probability estimation method. Prostate motion was measured using the KIM marker tracking method. The pelvic motion monitoring method was compared to the ground truth obtained from a 6DoF rigid registration of the CBCT and CT. 
RESULTS The geometric errors of the pelvic motion monitoring method demonstrated sub-mm and sub-degree accuracy and precision in the translational directions (T_LR, T_SI, T_AP) and rotational directions (R_LR, R_SI, R_AP). The 3D relative displacement between the prostate and pelvic bones exceeded 2, 3, 5, and 7 mm for approximately 66%, 44%, 12%, and 7% of the images, respectively. CONCLUSIONS Accurate intrafraction pelvic bone motion monitoring in 6DoF was demonstrated on 2D kV images, providing a necessary tool for real-time multi-target motion-adapted treatment.
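The DRR-library matching step described in this abstract (precompute DRRs over a grid of candidate poses, score each incoming kV frame by normalized cross-correlation, take the best-scoring pose) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the image size, the one-parameter pose grid, and the use of in-plane shifts as stand-ins for real DRRs are all assumptions made for the demo.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def match_pose(kv_image, drr_library):
    """Return the (pose, score) of the library DRR best matching the kV image.

    drr_library: dict mapping a pose tuple to a precomputed DRR array.
    """
    best_pose, best_score = None, -np.inf
    for pose, drr in drr_library.items():
        s = ncc(kv_image, drr)
        if s > best_score:
            best_pose, best_score = pose, s
    return best_pose, best_score

# Toy demo: "DRRs" are shifted copies of a base pattern; the "acquired"
# kV image is the 2-pixel-shifted copy plus a little noise.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
library = {(r,): np.roll(base, r, axis=0) for r in range(-3, 4)}
kv = np.roll(base, 2, axis=0) + 0.05 * rng.random((64, 64))
pose, score = match_pose(kv, library)
```

In practice the library would span imaging angles and pelvic rotations rather than a single shift axis, and the unresolved translation would need a separate estimate (the authors use a 3D Gaussian probability method for that).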
Affiliation(s)
- Emily A Hewson: Image X Institute, Sydney School of Health Sciences, The University of Sydney, Sydney, Australia
- Owen Dillon: Image X Institute, Sydney School of Health Sciences, The University of Sydney, Sydney, Australia
- Per R Poulsen: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Jeremy T Booth: Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, Australia; School of Physics, The University of Sydney, Sydney, Australia
- Paul J Keall: Image X Institute, Sydney School of Health Sciences, The University of Sydney, Sydney, Australia
4. Martín-Noguerol T, Oñate Miranda M, Amrhein TJ, Paulano-Godino F, Xiberta P, Vilanova JC, Luna A. The role of artificial intelligence in the assessment of the spine and spinal cord. Eur J Radiol 2023; 161:110726. [PMID: 36758280] [DOI: 10.1016/j.ejrad.2023.110726]
Abstract
Artificial intelligence (AI) application development is underway in all areas of radiology, and many promising tools focus on the spine and spinal cord. In the past decade, multiple spine AI algorithms have been created based on radiographs, computed tomography, and magnetic resonance imaging. These algorithms have wide-ranging purposes, including automatic labeling of vertebral levels, automated description of disc degenerative changes, detection and classification of spine trauma, identification of osseous lesions, and assessment of cord pathology. The overarching goals for these algorithms include improved patient throughput, reduced radiologist workload, and improved diagnostic accuracy. Achieving these goals requires several prerequisite tasks, such as automatic image segmentation and the facilitation of image acquisition and postprocessing. In this narrative review, we discuss some of the important imaging AI solutions that have been developed for the assessment of the spine and spinal cord. We focus on their practical applications and briefly discuss some key requirements for the successful integration of these tools into practice. The potential impact of AI in the imaging assessment of the spine and cord is vast and promises broad-reaching improvements for clinicians, radiologists, and patients alike.
Affiliation(s)
- Marta Oñate Miranda: Department of Radiology, Centre Hospitalier Universitaire de Sherbrooke, Sherbrooke, Quebec, Canada
- Timothy J Amrhein: Department of Radiology, Duke University Medical Center, Durham, USA
- Pau Xiberta: Graphics and Imaging Laboratory (GILAB), University of Girona, 17003 Girona, Spain
- Joan C Vilanova: Department of Radiology, Clinica Girona, Diagnostic Imaging Institute (IDI), University of Girona, 17002 Girona, Spain
- Antonio Luna: MRI Unit, Radiology Department, HT medica, Carmelo Torres n°2, 23007 Jaén, Spain
5. Petragallo R, Bertram P, Halvorsen P, Iftimia I, Low DA, Morin O, Narayanasamy G, Saenz DL, Sukumar KN, Valdes G, Weinstein L, Wells MC, Ziemer BP, Lamb JM. Development and multi-institutional validation of a convolutional neural network to detect vertebral body mis-alignments in 2D x-ray setup images. Med Phys 2023; 50:2662-2671. [PMID: 36908243] [DOI: 10.1002/mp.16359]
Abstract
BACKGROUND Misalignment to the incorrect vertebral body remains a rare but serious patient safety risk in image-guided radiotherapy (IGRT). PURPOSE Our group has proposed that an automated image-review algorithm be inserted into the IGRT process as an interlock to detect off-by-one vertebral body errors. This study presents the development and multi-institutional validation of a convolutional neural network (CNN)-based approach for such an algorithm using patient image data from a planar stereoscopic x-ray IGRT system. METHODS X-rays and digitally reconstructed radiographs (DRRs) were collected from 429 spine radiotherapy patients (1592 treatment fractions) treated at six institutions using a stereoscopic x-ray image guidance system. Clinically applied, physician-approved alignments were used as true-negative, "no-error" cases. "Off-by-one vertebral body" errors were simulated by translating DRRs along the spinal column using a semi-automated method. A leave-one-institution-out approach was used to estimate model accuracy on data from unseen institutions as follows: all of the images from five of the institutions were used to train a CNN model from scratch using a fixed network architecture and hyperparameters. The size of this training set ranged from 5700 to 9372 images, depending on which five institutions contributed data. The training set was randomized and split 75/25 into the final training and validation sets. X-ray/DRR image pairs and the associated binary labels of "no-error" or "shift" were used as the model input. Model accuracy was evaluated using images from the sixth institution, which were left out of the training phase entirely. This test set ranged from 180 to 3852 images, again depending on which institution had been left out of the training phase.
The trained model was used to classify the images from the test set as either "no-error" or "shifted", and the model predictions were compared to the ground-truth labels to assess accuracy. This process was repeated until each institution's images had been used as the testing dataset. RESULTS When the six models were used to classify unseen image pairs from the institution left out during training, the resulting receiver operating characteristic area under the curve values ranged from 0.976 to 0.998. With the specificity fixed at 99%, the corresponding sensitivities ranged from 61.9% to 99.2% (mean: 77.6%). With the specificity fixed at 95%, sensitivities ranged from 85.5% to 99.8% (mean: 92.9%). CONCLUSION This study demonstrated that the CNN-based vertebral body misalignment model is robust when applied to previously unseen test data from an outside institution, indicating that this proposed additional safeguard against misalignment is feasible.
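The reported operating points (sensitivity at a fixed 99% or 95% specificity) come from sweeping the classifier's decision threshold over the scores of the negative class. A minimal sketch of that calculation, assuming a scalar "shift" score per image pair and synthetic Gaussian score distributions in place of real model outputs:

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, target_spec=0.99):
    """Set the threshold so the negatives ('no-error' cases) reach at least
    `target_spec` specificity, then report sensitivity on the positives."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Scores above this quantile of the negatives are classified "shift".
    thr = np.quantile(scores[labels == 0], target_spec)
    pred = scores > thr
    specificity = float(np.mean(~pred[labels == 0]))
    sensitivity = float(np.mean(pred[labels == 1]))
    return thr, specificity, sensitivity

# Toy demo: negatives score near 0, positives near 3 (arbitrary units).
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 10000), rng.normal(3, 1, 10000)])
labels = np.concatenate([np.zeros(10000, int), np.ones(10000, int)])
thr, spec, sens = sensitivity_at_specificity(scores, labels, 0.99)
```

With well-separated score distributions the sensitivity at 99% specificity stays high; the study's 61.9%-99.2% spread across institutions reflects how much that separation varied on unseen data.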
Affiliation(s)
- Rachel Petragallo: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
- Per Halvorsen: Department of Radiation Oncology, Beth Israel - Lahey Health, Burlington, Massachusetts, USA
- Ileana Iftimia: Department of Radiation Oncology, Beth Israel - Lahey Health, Burlington, Massachusetts, USA
- Daniel A Low: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
- Olivier Morin: Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Ganesh Narayanasamy: Department of Radiation Oncology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Daniel L Saenz: Department of Radiation Oncology, University of Texas HSC SA, San Antonio, Texas, USA
- Kevinraj N Sukumar: Department of Radiation Oncology, Piedmont Healthcare, Atlanta, Georgia, USA
- Gilmer Valdes: Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Lauren Weinstein: Department of Radiation Oncology, Kaiser Permanente, South San Francisco, California, USA
- Michelle C Wells: Department of Radiation Oncology, Piedmont Healthcare, Atlanta, Georgia, USA
- Benjamin P Ziemer: Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- James M Lamb: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
6. Cai W, Fan Q, Li F, He X, Zhang P, Cervino L, Li X, Li T. Markerless motion tracking with simultaneous MV and kV imaging in spine SBRT treatment - a feasibility study. Phys Med Biol 2023; 68. [PMID: 36549010] [PMCID: PMC9944511] [DOI: 10.1088/1361-6560/acae16]
Abstract
Objective. Motion tracking with simultaneous MV-kV imaging has distinct advantages over single kV systems. This research is a feasibility study of utilizing this technique for spine stereotactic body radiotherapy (SBRT) through phantom and patient studies. Approach. A clinical spine SBRT plan was developed using 6xFFF beams and nine sliding-window IMRT fields. The plan was delivered to a chest phantom on a linear accelerator. Simultaneous MV-kV image pairs were acquired during beam delivery. kV images were triggered at predefined intervals, and synthetic MV images showing enlarged MLC apertures were created by combining multiple raw MV frames with corrections for scattering and intensity variation. Digitally reconstructed radiograph (DRR) templates were generated using high-resolution CBCT reconstructions (isotropic voxel size (0.243 mm)³) as the reference for 2D-2D matching. 3D shifts were calculated by triangulation of the kV-to-DRR and MV-to-DRR registrations. To evaluate tracking accuracy, detected shifts were compared to known phantom shifts introduced before treatment. The patient study included a T-spine patient and an L-spine patient. Patient datasets were retrospectively analyzed to demonstrate performance in clinical settings. Main results. The treatment plan was delivered to the phantom in five scenarios: no shift; a 2 mm shift in one of the longitudinal, lateral, and vertical directions; and a 2 mm shift in all three directions. The calculated 3D shifts agreed well with the actual couch shifts, and overall the uncertainty of 3D detection is estimated to be 0.3 mm. The patient study revealed that, with clinical patient image quality, the calculated 3D motion agreed with the post-treatment cone beam CT. It is feasible to automate both kV-to-DRR and MV-to-DRR registrations using a mutual information-based method, and the difference from manual registration is generally less than 0.3 mm. Significance. The MV-kV imaging-based markerless motion tracking technique was validated through a feasibility study. It is a step forward toward effective motion tracking and accurate delivery for spinal SBRT.
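The triangulation step above (combining a kV-to-DRR and an MV-to-DRR 2D registration into one 3D shift) reduces to a small linear least-squares problem once each view's projection geometry is fixed. The sketch below assumes idealized orthogonal parallel-beam views; a real linac's kV/MV geometry is divergent and angle-dependent, so these projection matrices are illustrative only.

```python
import numpy as np

def triangulate_shift(P1, d1, P2, d2):
    """Recover a 3D shift from its 2D projections in two imaging views.

    P1, P2 : (2, 3) projection matrices of the two views (assumed geometry).
    d1, d2 : measured 2D shifts in each view's detector plane.
    Stacks both views and solves P @ t = d by least squares.
    """
    P = np.vstack([P1, P2])           # (4, 3) stacked geometry
    d = np.concatenate([d1, d2])      # (4,) stacked 2D measurements
    t, *_ = np.linalg.lstsq(P, d, rcond=None)
    return t

# Toy demo with two orthogonal views: view 1 resolves (x, z), view 2 (y, z).
P1 = np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
P2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
true_t = np.array([2.0, -1.0, 0.5])   # mm
t_est = triangulate_shift(P1, P1 @ true_t, P2, P2 @ true_t)
```

Because the two views share the z (superior-inferior) axis, that component is measured twice and averaged by the least-squares solve, which is one reason dual-view tracking is more robust than a single kV view.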
Affiliation(s)
- Weixing Cai, Qiyong Fan, Feifei Li, Xiuxiu He, Pengpeng Zhang, Laura Cervino, Xiang Li, Tianfang Li: Memorial Sloan Kettering Cancer Center, Department of Medical Physics, 1275 York Avenue, New York, NY 10065, United States of America
7. Deep learning approaches for automatic localization in medical images. Comput Intell Neurosci 2022; 2022:6347307. [PMID: 35814554] [PMCID: PMC9259335] [DOI: 10.1155/2022/6347307]
Abstract
Recent revolutionary advances in deep learning (DL) have fueled several breakthrough achievements in various complicated computer vision tasks. The remarkable successes started in 2012, when deep neural networks (DNNs) outperformed shallow machine learning models on a number of significant benchmarks. Significant advances were made in computer vision by conducting very complex image interpretation tasks with outstanding accuracy. These achievements have shown great promise in a wide variety of fields, especially in medical image analysis, by creating opportunities to diagnose and treat diseases earlier. In recent years, the application of DNNs to object localization has gained the attention of researchers due to its success over conventional methods. As this has become a very broad and rapidly growing field, this study presents a short review of DNN implementations for medical images and validates their efficacy on benchmarks. This is the first review to focus on object localization using DNNs in medical images. The key aim was to summarize recent studies based on DNNs for medical image localization and to highlight research gaps that can provide worthwhile ideas to shape future research on object localization tasks. It starts with an overview of the importance of medical image analysis and existing technology in this space. The discussion then proceeds to the dominant DNNs utilized in the current literature. Finally, we conclude by discussing the challenges associated with the application of DNNs to medical image localization, which can drive further studies in identifying potential future developments in this field.
8. Qu B, Cao J, Qian C, Wu J, Lin J, Wang L, Ou-Yang L, Chen Y, Yan L, Hong Q, Zheng G, Qu X. Current development and prospects of deep learning in spine image analysis: a literature review. Quant Imaging Med Surg 2022; 12:3454-3479. [PMID: 35655825] [PMCID: PMC9131328] [DOI: 10.21037/qims-21-939]
Abstract
BACKGROUND AND OBJECTIVE As the spine is pivotal in the support and protection of human bodies, much attention is given to the understanding of spinal diseases. Quick, accurate, and automatic analysis of a spine image greatly enhances the efficiency with which spine conditions can be diagnosed. Deep learning (DL) is a representative artificial intelligence technology that has made encouraging progress in the last 6 years. However, it is still difficult for clinicians and technicians to fully understand this rapidly evolving field due to the diversity of applications, network structures, and evaluation criteria. This study aimed to provide clinicians and technicians with a comprehensive understanding of the development and prospects of DL spine image analysis by reviewing published literature. METHODS A systematic literature search was conducted in the PubMed and Web of Science databases using the keywords "deep learning" and "spine". Date ranges used to conduct the search were from 1 January, 2015 to 20 March, 2021. A total of 79 English articles were reviewed. KEY CONTENT AND FINDINGS The DL technology has been applied extensively to the segmentation, detection, diagnosis, and quantitative evaluation of spine images. It uses static or dynamic image information, as well as local or non-local information. The high accuracy of analysis is comparable to that achieved manually by doctors. However, further exploration is needed in terms of data sharing, functional information, and network interpretability. CONCLUSIONS The DL technique is a powerful method for spine image analysis. We believe that, with the joint efforts of researchers and clinicians, intelligent, interpretable, and reliable DL spine analysis methods will be widely applied in clinical practice in the future.
Affiliation(s)
- Biao Qu, Gaofeng Zheng: Department of Instrumental and Electrical Engineering, Xiamen University, Xiamen, China
- Jianpeng Cao, Chen Qian, Jinyu Wu, Xiaobo Qu: Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Jianzhong Lin: Department of Radiology, Zhongshan Hospital of Xiamen University, Xiamen, China
- Liansheng Wang: Department of Computer Science, School of Informatics, Xiamen University, Xiamen, China
- Lin Ou-Yang: Department of Medical Imaging of Southeast Hospital, Medical College of Xiamen University, Zhangzhou, China
- Yongfa Chen: Department of Pediatric Orthopedic Surgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Liyue Yan: Department of Information & Computational Mathematics, Xiamen University, Xiamen, China
- Qing Hong: Biomedical Intelligent Cloud R&D Center, China Mobile Group, Xiamen, China
9. Wang Y. Research on intelligent target tracking algorithm based on MDNet under artificial intelligence. Comput Intell Neurosci 2022; 2022:1550543. [PMID: 35498174] [PMCID: PMC9042603] [DOI: 10.1155/2022/1550543]
Abstract
Target tracking is an important subject in computer vision that has developed rapidly over the past ten years, and its applications have grown steadily broader. In the process, it has moved from simple experimental tracking environments to complex real-world scenes that raise additional challenges. The rapid development of deep learning has promoted progress in computer vision research, of which target tracking is an important foundation, carrying it from academia into industry. This paper introduces a target tracking method based on MDNet. Starting from the attention mechanism, two attention mechanisms are added to extract and integrate better features. Case partitioning is used to reduce the computational cost of the tracking module and to minimize the network size during tracking without degrading its results. Finally, the experiments are analyzed in detail.
Affiliation(s)
- Yu Wang: Information Engineering School, Chengyi University College, Jimei University, Xiamen 361000, China
10. Koo J, Nardella L, Degnan M, Andreozzi J, Yu HHM, Penagaricano J, Johnstone PAS, Oliver D, Ahmed K, Rosenberg SA, Wuthrick E, Diaz R, Feygelman V, Latifi K, Moros EG, Redler G. Triggered kV imaging during spine SBRT for intrafraction motion management. Technol Cancer Res Treat 2021; 20:15330338211063033. [PMID: 34855577] [PMCID: PMC8649431] [DOI: 10.1177/15330338211063033]
Abstract
Purpose: To monitor intrafraction motion during spine stereotactic body radiotherapy(SBRT) treatment delivery with readily available technology, we implemented triggered kV imaging using the on-board imager(OBI) of a modern medical linear accelerator with an advanced imaging package. Methods: Triggered kV imaging for intrafraction motion management was tested with an anthropomorphic phantom and simulated spine SBRT treatments to the thoracic and lumbar spine. The vertebral bodies and spinous processes were contoured as the image guided radiotherapy(IGRT) structures specific to this technique. Upon each triggered kV image acquisition, 2D projections of the IGRT structures were automatically calculated and updated at arbitrary angles for display on the kV images. Various shifts/rotations were introduced in x, y, z, pitch, and yaw. Gantry-angle-based triggering was set to acquire kV images every 45°. A group of physicists/physicians(n = 10) participated in a survey to evaluate clinical efficiency and accuracy of clinical decisions on images containing various phantom shifts. This method was implemented clinically for treatment of 42 patients(94 fractions) with 15 second time-based triggering. Result: Phantom images revealed that IGRT structure accuracy and therefore utility of projected contours during triggered imaging improved with smaller CT slice thickness. Contouring vertebra superior and inferior to the treatment site was necessary to detect clinically relevant phantom rotation. From the survey, detectability was proportional to the shift size in all shift directions and inversely related to the CT slice thickness. Clinical implementation helped evaluate robustness of patient immobilization. Based on visual inspection of projected IGRT contours on planar kV images, appreciable intrafraction motion was detected in eleven fractions(11.7%). 
Discussion: The feasibility of triggered imaging for spine SBRT intrafraction motion management has been demonstrated in phantom experiments and in implementation for patient treatments. This technique allows efficient, non-invasive monitoring of patient position using the OBI and patient anatomy as a direct visual guide.
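The abstract above describes two triggering schemes: gantry-angle-based acquisition every 45° (phantom study) and 15-second time-based triggering (patient treatments). A minimal sketch of that triggering logic, assuming a hypothetical scheduler interface (the class and parameter names are illustrative, not a vendor API):

```python
# Hypothetical sketch of the two kV triggering schemes described in the abstract:
# angle-based (fire each time the gantry rotates another full step) and
# time-based (fire every fixed interval). Thresholds mirror the paper's 45 deg / 15 s.

class TriggeredKVScheduler:
    def __init__(self, angle_step_deg=45.0, time_step_s=15.0):
        self.angle_step = angle_step_deg
        self.time_step = time_step_s
        self.last_trigger_angle = None
        self.last_trigger_time = None

    def should_trigger(self, gantry_angle_deg, elapsed_s):
        """Return True when either trigger condition fires; update internal state."""
        fire = False
        # Angle-based: fire on the first sample and whenever the gantry has
        # rotated another full step since the last acquisition.
        if self.last_trigger_angle is None or \
           abs(gantry_angle_deg - self.last_trigger_angle) >= self.angle_step:
            fire = True
        # Time-based: fire when the configured interval has elapsed.
        if self.last_trigger_time is None or \
           elapsed_s - self.last_trigger_time >= self.time_step:
            fire = True
        if fire:
            self.last_trigger_angle = gantry_angle_deg
            self.last_trigger_time = elapsed_s
        return fire
```

In practice either condition can be used alone, as in the paper (angle-based for the phantom work, time-based clinically); combining them here simply illustrates both rules in one place.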
Collapse
Affiliation(s)
- Jihye Koo
- University of South Florida, 33620, USA; H. Lee Moffitt Cancer Center, 33612, USA
| | - Michael Degnan
- The Ohio State University, Columbus, OH 43210, USA
| | - Gage Redler
- H. Lee Moffitt Cancer Center, 33612, USA
| |
Collapse
|
11
|
Chen Z, Lin L, Wu C, Li C, Xu R, Sun Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun (Lond) 2021; 41:1100-1115. [PMID: 34613667 PMCID: PMC8626610 DOI: 10.1002/cac2.12215]
Abstract
Over the past decade, artificial intelligence (AI) has contributed substantially to the resolution of various medical problems, including cancer. Deep learning (DL), a subfield of AI, is characterized by its ability to perform automated feature extraction and has great power in the assimilation and evaluation of large amounts of complicated data. On the basis of a large quantity of medical data and novel computational technologies, AI, especially DL, has been applied in various aspects of oncology research and has the potential to enhance cancer diagnosis and treatment. These applications range from early cancer detection, diagnosis, classification and grading, molecular characterization of tumors, and prediction of patient outcomes and treatment responses to personalized treatment, automated radiotherapy workflows, novel anti-cancer drug discovery, and clinical trials. In this review, we introduce the general principles of AI, summarize the major areas of its application for cancer diagnosis and treatment, and discuss its future directions and remaining challenges. As the adoption of AI in clinical use increases, we anticipate the arrival of AI-powered cancer care.
Collapse
Affiliation(s)
- Zi‐Hang Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Zhongshan School of Medicine, Sun Yat‐sen University, Guangzhou, Guangdong 510080, P. R. China
| | - Li Lin
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
| | - Chen‐Fei Wu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
| | - Chao‐Feng Li
- Artificial Intelligence Laboratory, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
| | - Rui‐Hua Xu
- Department of Medical Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
| | - Ying Sun
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
| |
Collapse
|
12
|
He X, Cai W, Li F, Fan Q, Zhang P, Cuaron JJ, Cerviño LI, Li X, Li T. Decompose kV projection using neural network for improved motion tracking in paraspinal SBRT. Med Phys 2021; 48:7590-7601. [PMID: 34655442 DOI: 10.1002/mp.15295]
Abstract
PURPOSE On-treatment kV images have been used to track patient motion. One challenge of markerless motion tracking in paraspinal SBRT is the reduced contrast when the X-ray beam must pass through a large portion of the patient's body, for example, from the lateral direction. In addition, because the spine overlaps with the surrounding moving organs in the X-ray images, auto-registration can lead to errors. This work aims to automatically extract the spine component from conventional 2D X-ray images to achieve more robust and more accurate motion management. METHODS A ResNet generative adversarial network (ResNetGAN) consisting of one generator and one discriminator was developed to learn the mapping between the 2D kV image and the reference spine digitally reconstructed radiograph (DRR). A tailored multi-channel multi-domain loss function was used to improve the quality of the decomposed spine image. The trained model took a 2D kV image as input and learned to generate the spine component of the X-ray image. The training dataset included 1347 2D kV thoracic and lumbar region X-ray images from 20 randomly selected patients, with the corresponding matched reference spine DRRs. Another 226 2D kV images from the remaining four patients were used for evaluation. The resulting decomposed spine images and the original X-ray images were registered to the reference spine DRRs to compare spine tracking accuracy. RESULTS The decomposed spine image had a mean peak signal-to-noise ratio (PSNR) of 60.08 and a mean structural similarity index measure (SSIM) of 0.99, indicating that the model retained and enhanced the spine structure information in the original 2D X-ray image. Matching the decomposed spine image to the reference spine DRR achieved submillimeter accuracy, with mean errors of 0.13 and 0.12 mm and maximum errors of 0.58 and 0.49 mm in the x- and y-directions (in the imager coordinates), respectively.
The accuracy improvement was robust across all lateral and anteroposterior X-ray beam angles. CONCLUSION We developed a deep learning-based approach to remove soft tissue from the kV image, leading to more accurate spine tracking in paraspinal SBRT.
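The abstract above evaluates decomposed spine images with PSNR and SSIM. As a sketch of how those two metrics are computed (the function names are illustrative; the SSIM here is a simplified single-window version rather than the standard sliding-window formulation used in published evaluations):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Simplified single-window SSIM (no sliding window), for illustration only."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For production use, windowed implementations such as those in scikit-image are preferable; this sketch only shows the underlying definitions.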
Collapse
Affiliation(s)
- Xiuxiu He
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Weixing Cai
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Feifei Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Qiyong Fan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - John J Cuaron
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Laura I Cerviño
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Xiang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| | - Tianfang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York, USA
| |
Collapse
|
13
|
Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021; 65:596-611. [PMID: 34288501 DOI: 10.1111/1754-9485.13285]
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to the patients. There is a growing appreciation of intrafraction target motion management in the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes in the full anatomy within the field of view. Motion prediction algorithms can be used to account for system latency, that is, the time required for the system to localise the target, process the information and act.
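The motion prediction category described above compensates for system latency by extrapolating the target position ahead in time. The simplest such predictor, shown here purely as an illustration (not a method from the review), is linear extrapolation from the two most recent position samples:

```python
# Illustrative sketch: linear-extrapolation latency compensation.
# Given two recent (time, position) samples of the target along one axis,
# predict where the target will be `latency_s` seconds after the current sample.

def predict_position(t_prev, x_prev, t_curr, x_curr, latency_s):
    """Extrapolate target position `latency_s` seconds beyond the current sample."""
    velocity = (x_curr - x_prev) / (t_curr - t_prev)  # assumes constant velocity
    return x_curr + velocity * latency_s
```

AI-based predictors replace this constant-velocity assumption with learned models of breathing and organ motion, which handle the irregular, quasi-periodic trajectories that defeat simple extrapolation.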
Collapse
Affiliation(s)
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
| | - Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
| | - Doan Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia; Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
| |
Collapse
|
14
|
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715 PMCID: PMC8184621 DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications lies, in part, with informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Collapse
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium.
| | - Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
| | - Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
| | - Siri Willems
- ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
| | - Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
| |
Collapse
|