1. Zhang J, Wang Y, Bai X, Chen M. Extracting lung contour deformation features with deep learning for internal target motion tracking: a preliminary study. Phys Med Biol 2023; 68:195009. PMID: 37586388. DOI: 10.1088/1361-6560/acf10e.
Abstract
Objective. To propose lung contour deformation features (LCDFs) as a surrogate for estimating thoracic internal target motion, and to report their performance by correlating them with the changing body using a cascade ensemble model (CEM). LCDFs, correlated to the respiration driver, are employed without patient-specific motion data sampling or additional training before treatment. Approach. LCDFs are extracted by matching lung contours via an encoder-decoder deep learning model. CEM estimates LCDFs from the currently captured body, and then uses the estimated LCDFs to track internal target motion. The accuracy of the proposed LCDFs and CEM was evaluated using 48 targets' motion data and compared with other published methods. Main results. LCDFs estimated the internal targets with a localization error of 2.6 ± 1.0 mm (average ± standard deviation). CEM reached a localization error of 4.7 ± 0.9 mm and a real-time performance of 256.9 ± 6.0 ms. With no internal anatomy knowledge, they achieved a small accuracy difference (of 0.34∼1.10 mm for LCDFs and of 0.43∼1.75 mm for CEM at the 95% confidence level) compared with a patient-specific lung biomechanical model and deformable image registration models. Significance. The results demonstrate the effectiveness of LCDFs and CEM in tracking target motion. LCDFs and CEM are non-invasive and require no patient-specific training before treatment. They show potential for broad applications.
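The localization errors quoted above are, in effect, the mean ± standard deviation of the 3D Euclidean distances between estimated and ground-truth target positions. A minimal sketch of that metric (the function name and the toy coordinates are illustrative assumptions, not values from the paper):

```python
import numpy as np

def localization_error(estimated, ground_truth):
    """Mean and standard deviation (mm) of the 3D Euclidean distance
    between estimated and ground-truth target positions.

    estimated, ground_truth: (N, 3) arrays of target centroids in mm.
    """
    diffs = np.asarray(estimated, float) - np.asarray(ground_truth, float)
    distances = np.linalg.norm(diffs, axis=1)  # per-time-point 3D error
    return distances.mean(), distances.std()

# Hypothetical positions (mm): the two estimates are off by 3.0 and 4.0 mm.
truth = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0]])
est = np.array([[3.0, 0.0, 0.0], [1.0, 2.0, 4.0]])
mean_err, std_err = localization_error(est, truth)  # 3.5 mm, 0.5 mm
```

The same function applies whether the estimates come from a surrogate model such as the LCDFs or from a full tracking pipeline such as the CEM.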
Affiliation(s)
- Jie Zhang
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, People's Republic of China
- Yajuan Wang
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, People's Republic of China
- Xue Bai
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, People's Republic of China
- Ming Chen
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, Zhejiang 310022, People's Republic of China
2. Ahmed AM, Gargett M, Madden L, Mylonas A, Chrystall D, Brown R, Briggs A, Nguyen T, Keall P, Kneebone A, Hruby G, Booth J. Evaluation of deep learning based implanted fiducial markers tracking in pancreatic cancer patients. Biomed Phys Eng Express 2023; 9. PMID: 36689758. DOI: 10.1088/2057-1976/acb550.
Abstract
Real-time target position verification during pancreas stereotactic body radiation therapy (SBRT) is important for the detection of unplanned tumour motions. Fast and accurate fiducial marker segmentation is a requirement of real-time marker-based verification. Deep learning (DL) segmentation techniques are well suited because they do not require additional learning images or prior marker information (e.g., shape, orientation). In this study, we evaluated three DL frameworks for marker tracking applied to pancreatic cancer patient data: (1) a convolutional neural network (CNN) classifier with a sliding window, (2) a pretrained you-only-look-once (YOLO) version-4 architecture, and (3) a hybrid CNN-YOLO. Intrafraction kV images collected during pancreas SBRT treatments were used as training data (44 fractions, 2017 frames). All patients had 1-4 implanted fiducial markers. Each model was evaluated on unseen kV images (42 fractions, 2517 frames). The ground truth was calculated from manual segmentation and triangulation of markers in orthogonal paired kV/MV images. The sensitivity, specificity, and area under the precision-recall curve (AUC) were calculated. In addition, the mean absolute error (MAE), root-mean-square error (RMSE), and standard error of the mean (SEM) were calculated for the centroids of the markers predicted by the models, relative to the ground truth. The sensitivity and specificity of the CNN model were 99.41% and 99.69%, respectively, and its AUC was 0.9998. The average precision of the YOLO model across recall values was 96.49%. The MAE of the three models in the left-right, superior-inferior, and anterior-posterior directions was under 0.88 ± 0.11 mm, and the RMSE was under 1.09 ± 0.12 mm. The detection times per frame on a GPU were 48.3, 22.9, and 17.1 ms for the CNN, YOLO, and CNN-YOLO, respectively.
The results demonstrate submillimeter accuracy of marker position predicted by DL models compared to the ground truth. The marker detection time was fast enough to meet the requirements for real-time application.
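The evaluation above combines frame-level detection metrics (sensitivity, specificity) with geometric centroid errors (per-axis MAE and RMSE). A minimal sketch of both computations; the function names and the toy counts/offsets are illustrative assumptions, not values from the study:

```python
import numpy as np

def detection_metrics(tp, fp, tn, fn):
    """Sensitivity and specificity from per-frame detection counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def centroid_errors(pred, truth):
    """Per-axis (e.g., LR, SI, AP) MAE and RMSE, in mm, of predicted
    marker centroids relative to the ground truth. Inputs are (N, 3)."""
    err = np.asarray(pred, float) - np.asarray(truth, float)
    mae = np.abs(err).mean(axis=0)
    rmse = np.sqrt((err ** 2).mean(axis=0))
    return mae, rmse

# Hypothetical counts: 990 hits, 10 misses, 980 correct rejections, 20 false alarms.
sens, spec = detection_metrics(tp=990, fp=20, tn=980, fn=10)  # 0.99, 0.98
# Hypothetical centroid offsets (mm) for two frames against a zero ground truth.
mae, rmse = centroid_errors([[0.5, -0.5, 1.0], [-0.5, 0.5, -1.0]],
                            [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

Reporting MAE and RMSE per anatomical axis, as the study does, exposes direction-dependent tracking errors that a single pooled number would hide.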
Affiliation(s)
- Abdella M Ahmed
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Australia
- Maegan Gargett
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Australia
- Levi Madden
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Danielle Chrystall
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, NSW, Australia
- Ryan Brown
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia
- Adam Briggs
- Shoalhaven Cancer Care Centre, Shoalhaven District Memorial Hospital, Nowra, NSW, Australia
- Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Paul Keall
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, NSW, Australia
- Andrew Kneebone
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; Northern Clinical School, Sydney Medical School, University of Sydney, NSW, Australia
- George Hruby
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; Northern Clinical School, Sydney Medical School, University of Sydney, NSW, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, NSW, Australia
3. Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. PMID: 33877878. PMCID: PMC9153705. DOI: 10.1259/bjr.20201107.
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A total of 479 studies were initially identified from the literature search, with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging, with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in the clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
4. Mueller M, Poulsen P, Hansen R, Verbakel W, Berbeco R, Ferguson D, Mori S, Ren L, Roeske JC, Wang L, Zhang P, Keall P. The markerless lung target tracking AAPM Grand Challenge (MATCH) results. Med Phys 2022; 49:1161-1180. PMID: 34913495. PMCID: PMC8828678. DOI: 10.1002/mp.15418.
Abstract
PURPOSE Lung stereotactic ablative body radiotherapy (SABR) is a radiation therapy success story, with level 1 evidence demonstrating its efficacy. To provide real-time respiratory motion management for lung SABR, several commercial and preclinical markerless lung target tracking (MLTT) approaches have been developed. However, these approaches have yet to be benchmarked using a common measurement methodology. This knowledge gap motivated the MArkerless lung target Tracking CHallenge (MATCH). The aim was to localize lung targets accurately and precisely in a retrospective in silico study and a prospective experimental study. METHODS MATCH was a Grand Challenge sponsored by the American Association of Physicists in Medicine. Materials common to the in silico and experimental studies were the experimental setup, including an anthropomorphic thorax phantom with two targets within the lungs, and a lung SABR planning protocol. The phantom was moved rigidly along patient-measured lung target motion traces, which also served as the ground-truth motion. In the retrospective in silico study, a volumetric modulated arc therapy treatment was simulated, and participants received a dataset consisting of treatment planning data and intra-treatment kilovoltage (kV) and megavoltage (MV) images for four blinded lung motion traces, which they used with their MLTT approach to localize the moving target. In the experimental study, participants received the phantom experiment setup and five patient-measured lung motion traces, and used their MLTT approach to localize the moving target during an experimental SABR phantom treatment. The challenge was open to any participant, and participants could complete either one or both parts of the challenge.
For both the in silico and experimental studies the MLTT results were analyzed and ranked using the prospectively defined metric of the percentage of the tracked target position being within 2 mm of the ground truth. RESULTS A total of 30 institutions registered and 15 result submissions were received, four for the in silico study and 11 for the experimental study. The participating MLTT approaches were: Accuray CyberKnife (2), Accuray Radixact (2), BrainLab Vero, C-RAD, and preclinical MLTT (5) on a conventional linear accelerator (Varian TrueBeam). For the in silico study the percentage of the 3D tracking error within 2 mm ranged from 50% to 92%. For the experimental study, the percentage of the 3D tracking error within 2 mm ranged from 39% to 96%. CONCLUSIONS A common methodology for measuring the accuracy of MLTT approaches has been developed and used to benchmark preclinical and commercial approaches retrospectively and prospectively. Several MLTT approaches were able to track the target with sub-millimeter accuracy and precision. The study outcome paves the way for broader clinical implementation of MLTT. MATCH is live, with datasets and analysis software being available online at https://www.aapm.org/GrandChallenge/MATCH/ to support future research.
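The prospectively defined ranking metric above — the percentage of tracked target positions whose 3D error is within 2 mm of the ground truth — is simple to compute. A minimal sketch (the function name and the toy traces are illustrative, not MATCH data):

```python
import numpy as np

def pct_within_tolerance(tracked, truth, tol_mm=2.0):
    """Percentage of time points where the 3D tracking error is
    within tol_mm of the ground-truth target position.

    tracked, truth: (N, 3) arrays of positions in mm.
    """
    err = np.linalg.norm(np.asarray(tracked, float) - np.asarray(truth, float),
                         axis=1)
    return 100.0 * float(np.mean(err <= tol_mm))

# Hypothetical 4-point trace (mm): per-point errors of 1.0, 3.0, 0.0, 1.5 mm,
# so three of four points fall within the 2 mm tolerance.
truth = np.zeros((4, 3))
tracked = np.array([[1.0, 0.0, 0.0],
                    [0.0, 3.0, 0.0],
                    [0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.5]])
score = pct_within_tolerance(tracked, truth)  # 75.0
```

A tolerance-based percentage like this rewards both accuracy and precision: a tracker with a small bias but large jitter, or vice versa, is penalised equally.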
Affiliation(s)
- Marco Mueller
- Corresponding author; Room 221, ACRF Image X Institute, 1 Central Ave, Eveleigh NSW 2015, Australia; +61 2 8627 1106
- Per Poulsen
- Danish Center for Particle Therapy and Department of Oncology, Aarhus University Hospital, Aarhus 8200, Denmark
- Rune Hansen
- Department of Medical Physics, Aarhus University Hospital, Aarhus 8200, Denmark
- Wilko Verbakel
- Amsterdam University Medical Centers, location VUmc, Amsterdam 1081 HV, Netherlands
- Ross Berbeco
- Department of Radiation Oncology, Brigham and Women’s Hospital, Dana Farber Cancer Institute and Harvard Medical School, Boston, MA 02215, USA
- Shinichiro Mori
- Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba 263-0024, Japan
- Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- John C. Roeske
- Department of Radiation Oncology, Loyola University Medical Center, Maywood, IL 60153, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Paul Keall
- ACRF Image X Institute, The University of Sydney, Sydney, NSW 2015, Australia
5. Momin S, Lei Y, Tian Z, Wang T, Roper J, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Lung tumor segmentation in 4D CT images using motion convolutional neural networks. Med Phys 2021; 48:7141-7153. PMID: 34469001. DOI: 10.1002/mp.15204.
Abstract
PURPOSE Manual delineation on all breathing phases of lung cancer 4D CT image datasets can be challenging, exhausting, and prone to subjective errors because of both the large number of images in the datasets and variations in the spatial location of tumors secondary to respiratory motion. The purpose of this work is to present a new deep learning-based framework for fast and accurate segmentation of lung tumors on 4D CT image sets. METHODS The proposed DL framework leverages a motion region-based convolutional neural network (R-CNN). Through integration of global and local motion estimation network architectures, the network can learn both major and minor changes caused by tumor motion. Our network design first extracts tumor motion information by feeding 4D CT images with consecutive phases into an integrated backbone network architecture, locating volumes-of-interest (VOIs) via a regional proposal network and removing irrelevant information via a regional convolutional neural network. Extracted motion information is then advanced into the subsequent global and local motion head network architecture to predict corresponding deformation vector fields (DVFs) and further adjust tumor VOIs. Binary masks of tumors are then segmented within adjusted VOIs via a mask head. A self-attention strategy is incorporated in the mask head network to remove any noisy features that might impact segmentation performance. We performed two sets of experiments. In the first experiment, we performed five-fold cross-validation on 20 4D CT datasets, each consisting of 10 breathing phases (i.e., 200 3D image volumes in total). Network performance was also evaluated on an additional 200 unseen 3D image volumes from 20 hold-out 4D CT datasets. In the second experiment, we trained another model with the 40 patients' 4D CT datasets from experiment 1 and evaluated it on an additional nine unseen patients' 4D CT datasets.
The Dice similarity coefficient (DSC), center of mass distance (CMD), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and volume difference (VD) between the manual and segmented tumor contours were computed to evaluate tumor detection and segmentation accuracy. The performance of our method was quantitatively evaluated against four alternatives (VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy) across all evaluation metrics through paired t-tests. RESULTS The proposed fully automated DL method yielded good overall agreement with the ground truth in contoured tumor volume and segmentation accuracy. Our model yielded significantly better values of the evaluation metrics (p < 0.05) than all four competing methods in both experiments. On the hold-out datasets of experiments 1 and 2, our method yielded DSCs of 0.86 and 0.90, compared with 0.82 and 0.87 for VoxelMorph, 0.75 and 0.83 for U-Net, 0.81 and 0.89 for the network without global and local networks, and 0.81 and 0.89 for the network without the attention gate strategy. The tumor VD between the ground truth and our method was the smallest, at 0.50, compared with 0.99, 1.01, 0.92, and 0.93 for VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. CONCLUSIONS Our proposed DL framework for tumor segmentation on lung cancer 4D CT datasets demonstrates significant promise for fully automated delineation. The promising results of this work provide impetus for its integration into the 4D CT treatment planning workflow to improve the accuracy and efficiency of lung radiotherapy.
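Of the metrics listed above, the Dice similarity coefficient is the headline overlap measure. A minimal sketch of its computation on binary masks (the toy 1D masks are for illustration only, not study data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())

# Toy masks of 4 foreground voxels each, overlapping in 2:
# DSC = 2*2 / (4+4) = 0.5
a = [1, 1, 1, 1, 0, 0]
b = [0, 0, 1, 1, 1, 1]
d = dice(a, b)
```

In a 4D CT workflow the same function would be applied per breathing phase, with the manual contour of that phase as `truth`; surface-distance metrics such as HD95 and MSD complement the DSC because overlap alone is insensitive to boundary errors on large tumors.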
Affiliation(s)
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Zhen Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
6. Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. PMID: 34313006. DOI: 10.1111/1754-9485.13286.
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparison of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- Genesiscare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
7. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021; 65:596-611. PMID: 34288501. DOI: 10.1111/1754-9485.13285.
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to the patients. There is a growing appreciation of intrafraction target motion management by the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes in the full anatomy within the field of view. Motion prediction algorithms can be used to account for the latencies due to the time for the system to localise, process and act.
Affiliation(s)
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia; Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
- Doan Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia; School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia; Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia