1
Büttgen LE, Werner R, Gauer T. Stability analysis of patient-specific 4DCT- and 4DCBCT-based correspondence models. Med Phys 2024; 51:5890-5900. PMID: 39032078; DOI: 10.1002/mp.17304.
Abstract
BACKGROUND Surrogate-based motion compensation in stereotactic body radiation therapy (SBRT) strongly relies on a constant relationship between an external breathing signal and the internal tumor motion over the course of treatment, that is, a stable patient-specific correspondence model. PURPOSE This study aims to develop methods for analyzing the stability of correspondence models by integrating planning 4DCT and pretreatment 4D cone-beam computed tomography (4DCBCT) data and assessing the relation to patient-specific clinical parameters. METHODS For correspondence modeling, a regression-based approach is applied, correlating patient-specific internal motion (vector fields computed by deformable image registration) and external breathing signals (recorded by Varian's RPM and RGSC system). To analyze correspondence model stability, two complementary methods are proposed. (1) Target volume-based analysis: 4DCBCT-based correspondence models predict clinical target volumes (gross tumor volume [GTV] and internal target volume [ITV]) within the planning 4DCT, which are evaluated by overlap and distance measures (Dice similarity coefficient [DSC]/average symmetric surface distance [ASSD]). (2) System matrix-based analysis: 4DCBCT-based regression models are compared to 4DCT-based models using mean squared difference (MSD) and principal component analysis of the system matrices. Stability analysis results are correlated with clinical parameters. Both methods are applied to a dataset of 214 pretreatment 4DCBCT scans (Varian TrueBeam) from a cohort of 46 lung tumor patients treated with ITV-based SBRT (planning 4DCTs acquired with Siemens AS Open and SOMATOM go.OPEN Pro CT scanners). RESULTS Consistent results across the two complementary analysis approaches (Spearman correlation coefficient of 0.6/0.7 between system matrix-based MSD and GTV-based DSC/ASSD) were observed. Analysis showed that stability was not predominant, with 114/214 fraction-wise models not surpassing a threshold of DSC > 0.7 for the GTV, and only 14/46 patients demonstrating a DSC > 0.7 in all fractions. Model stability did not degrade over the course of treatment. The mean GTV-based DSC is 0.59 ± 0.26 (mean ASSD of 2.83 ± 3.37) and the respective ITV-based DSC is 0.69 ± 0.20 (mean ASSD of 2.35 ± 1.81). The clinical parameters showed a strong correlation between smaller tumor motion ranges and increased stability. CONCLUSIONS The proposed methods identify patients with unstable correspondence models prior to each treatment fraction, serving as direct indicators for the necessity of replanning and adaptive treatment approaches to account for internal-external motion variations throughout the course of treatment.
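A minimal numpy sketch of the regression idea described above (correlating an external surrogate signal with internal motion vectors via least squares) and of the Dice overlap used for evaluation; all array names, shapes, and surrogate features are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

# Hypothetical data: breathing phases x surrogate features (e.g., amplitude and its
# temporal derivative), and internal motion vectors sampled at voxels of interest.
S = np.random.rand(10, 2)          # surrogate signal, shape (n_phases, n_features)
V = np.random.rand(10, 3000)       # internal motion, shape (n_phases, 3*n_voxels)

# Least-squares regression (correspondence model): V ~ S_aug @ B
S_aug = np.hstack([S, np.ones((S.shape[0], 1))])   # add intercept column
B, *_ = np.linalg.lstsq(S_aug, V, rcond=None)      # system matrix, (n_features+1, 3*n_voxels)

# Predict internal motion for a new surrogate measurement (2 features + intercept)
s_new = np.array([[0.4, -0.1, 1.0]])
v_pred = s_new @ B

# Dice similarity coefficient between a predicted and a reference binary target volume
def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A system matrix like B is what the abstract's system matrix-based analysis would compare between 4DCT- and 4DCBCT-derived models, for instance via a mean squared difference of the matrix entries.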
Affiliation(s)
- Laura Esther Büttgen
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- René Werner
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tobias Gauer
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
2
Zou J, Song Y, Liu L, Aviles-Rivero AI, Qin J. Unsupervised lung CT image registration via stochastic decomposition of deformation fields. Comput Med Imaging Graph 2024; 115:102397. PMID: 38735104; DOI: 10.1016/j.compmedimag.2024.102397.
Abstract
We address the problem of lung CT image registration, which underpins various diagnoses and treatments for lung diseases. The crux of the problem is the large deformation that the lungs undergo during respiration. This physiological process imposes several challenges from a learning point of view. In this paper, we propose a novel training scheme, called stochastic decomposition, which enables deep networks to effectively learn such a difficult deformation field during lung CT image registration. The key idea is to stochastically decompose the deformation field and to supervise the registration with synthetic data that exhibit the corresponding appearance discrepancy. The stochastic decomposition allows for revealing all possible decompositions of the deformation field. At the learning level, these decompositions can be seen as a prior that reduces the ill-posedness of the registration and thereby boosts performance. We demonstrate the effectiveness of our framework on lung CT data. We show, through extensive numerical and visual results, that our technique outperforms existing methods.
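The exact decomposition and training scheme are not specified in the abstract; the toy sketch below only illustrates the general idea of stochastically splitting a displacement field into two components under a simple additive (small-displacement) assumption, not the authors' method.

```python
import numpy as np

# Hypothetical dense displacement field on a small 3D grid: shape (3, D, H, W)
rng = np.random.default_rng(0)
u = rng.normal(size=(3, 8, 16, 16))

# Stochastic split: a random per-voxel weight field alpha in [0, 1] assigns a
# fraction of the total displacement to the first component.
alpha = rng.uniform(0.0, 1.0, size=u.shape[1:])
u1 = alpha * u          # first sub-deformation
u2 = (1.0 - alpha) * u  # second sub-deformation

# Under the small-displacement (additive) approximation the two parts recover the
# original field; a full treatment would compose the fields by warping instead.
assert np.allclose(u1 + u2, u)
```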
Affiliation(s)
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
- Youyi Song
- Department of Data Science, School of Science, China Pharmaceutical University, Nanjing, 210009, China
- Lihao Liu
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB3 0WA, UK
- Angelica I Aviles-Rivero
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB3 0WA, UK
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
3
Madesta F, Sentker T, Gauer T, Werner R. Deep learning-based conditional inpainting for restoration of artifact-affected 4D CT images. Med Phys 2024; 51:3437-3454. PMID: 38055336; DOI: 10.1002/mp.16851.
Abstract
BACKGROUND 4D CT imaging is an essential component of radiotherapy of thoracic and abdominal tumors. 4D CT images are, however, often affected by artifacts that compromise treatment planning quality and image information reliability. PURPOSE In this work, deep learning (DL)-based conditional inpainting is proposed to restore anatomically correct image information of artifact-affected areas. METHODS The restoration approach consists of a two-stage process: DL-based detection of common interpolation (INT) and double structure (DS) artifacts, followed by conditional inpainting applied to the artifact areas. In this context, conditional refers to a guidance of the inpainting process by patient-specific image data to ensure anatomically reliable results. The study is based on 65 in-house 4D CT images of lung cancer patients (48 with only slight artifacts, 17 with pronounced artifacts) and two publicly available 4D CT data sets that serve as independent external test sets. RESULTS Automated artifact detection revealed a ROC-AUC of 0.99 for INT and of 0.97 for DS artifacts (in-house data). The proposed inpainting method decreased the average root mean squared error (RMSE) by 52 % (INT) and 59 % (DS) for the in-house data. For the external test data sets, the RMSE improvement is similar (50 % and 59 %, respectively). Applied to 4D CT data with pronounced artifacts (not part of the training set), 72 % of the detectable artifacts were removed. CONCLUSIONS The results highlight the potential of DL-based inpainting for restoration of artifact-affected 4D CT data. Compared to recent 4D CT inpainting and restoration approaches, the proposed methodology illustrates the advantages of exploiting patient-specific prior image information.
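As a minimal sketch of the reported evaluation metric, the RMSE restricted to a detected artifact region could be computed as follows; array and mask names are placeholders, and the authors' exact evaluation protocol may differ.

```python
import numpy as np

def masked_rmse(restored: np.ndarray, reference: np.ndarray, mask: np.ndarray) -> float:
    """Root mean squared error (e.g., in HU) evaluated only inside the artifact mask."""
    diff = (restored - reference)[mask.astype(bool)]
    return float(np.sqrt(np.mean(diff ** 2)))

# Relative improvement over the uncorrected, artifact-affected image (as the
# percentages in the abstract are reported); all inputs here are placeholders:
# improvement = 1.0 - masked_rmse(inpainted, ref, mask) / masked_rmse(artifact_img, ref, mask)
```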
Affiliation(s)
- Frederic Madesta
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Thilo Sentker
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tobias Gauer
- Department of Radiotherapy and Radiation Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- René Werner
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
4
Zhu F, Wang S, Li D, Li Q. Similarity attention-based CNN for robust 3D medical image registration. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.104403.
5
Wilms M, Bannister JJ, Mouches P, MacDonald ME, Rajashekar D, Langner S, Forkert ND. Invertible Modeling of Bidirectional Relationships in Neuroimaging With Normalizing Flows: Application to Brain Aging. IEEE Trans Med Imaging 2022; 41:2331-2347. PMID: 35324436; DOI: 10.1109/tmi.2022.3161947.
Abstract
Many machine learning tasks in neuroimaging aim at modeling complex relationships between a brain's morphology as seen in structural MR images and clinical scores and variables of interest. A frequently modeled process is healthy brain aging for which many image-based brain age estimation or age-conditioned brain morphology template generation approaches exist. While age estimation is a regression task, template generation is related to generative modeling. Both tasks can be seen as inverse directions of the same relationship between brain morphology and age. However, this view is rarely exploited and most existing approaches train separate models for each direction. In this paper, we propose a novel bidirectional approach that unifies score regression and generative morphology modeling and we use it to build a bidirectional brain aging model. We achieve this by defining an invertible normalizing flow architecture that learns a probability distribution of 3D brain morphology conditioned on age. The use of full 3D brain data is achieved by deriving a manifold-constrained formulation that models morphology variations within a low-dimensional subspace of diffeomorphic transformations. This modeling idea is evaluated on a database of MR scans of more than 5000 subjects. The evaluation results show that our bidirectional brain aging model (1) accurately estimates brain age, (2) is able to visually explain its decisions through attribution maps and counterfactuals, (3) generates realistic age-specific brain morphology templates, (4) supports the analysis of morphological variations, and (5) can be utilized for subject-specific brain aging simulation.
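The full manifold-constrained normalizing flow is beyond a short snippet; the sketch below only illustrates the core invertibility idea, an age-conditioned affine transform with an exact inverse, using hypothetical conditioner parameters and low-dimensional morphology coefficients.

```python
import numpy as np

def cond_params(age: float):
    """Hypothetical conditioner: maps age to per-dimension log-scale and shift."""
    t = (age - 50.0) / 50.0
    log_s = np.array([0.1, -0.2, 0.05]) * t
    b = np.array([0.5, -0.3, 0.2]) * t
    return log_s, b

def forward(x: np.ndarray, age: float) -> np.ndarray:
    """Morphology coefficients -> latent variable, conditioned on age (regression direction)."""
    log_s, b = cond_params(age)
    return (x - b) * np.exp(-log_s)

def inverse(z: np.ndarray, age: float) -> np.ndarray:
    """Latent variable -> morphology coefficients (exact inverse), e.g. to generate
    an age-specific template from z = 0 (generative direction)."""
    log_s, b = cond_params(age)
    return z * np.exp(log_s) + b

x = np.array([0.3, -1.2, 0.7])
assert np.allclose(inverse(forward(x, age=70.0), age=70.0), x)
```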
6
Kim C, Kim H, Kim SW, Goh Y, Park MJ, Kim H, Jeong C, Cho B, Choi EK, Lee SW, Yoon SM, Kim SS, Park JH, Jung J, Song SY, Kwak J. A study of quantitative indicators for slice sorting in cine-mode 4DCT. PLoS One 2022; 17:e0272639. PMID: 36026490; PMCID: PMC9417040; DOI: 10.1371/journal.pone.0272639.
Abstract
The uncertainties of four-dimensional computed tomography (4DCT), also called residual motion artefacts (RMA), induced by irregular respiratory patterns can degrade the quality of overall radiotherapy. This study aims to quantify and reduce those uncertainties. A comparative study on quantitative indicators for RMA was performed, and based on this, we proposed a new 4DCT sorting method that is applicable without disrupting the current clinical workflow. In addition to the default phase sorting strategy, both additional amplitude information from external surrogates and the quantitative metric for RMA investigated in this study were introduced. The comparison of quantitative indicators and the performance of the proposed sorting method were evaluated via 10 cases of breath-hold (BH) CT and 30 cases of 4DCT. It was confirmed that N-RMSD (normalised root-mean-square deviation) best matched the visual standards of our institute's manual sorting method and could accurately represent RMA. The performance of the proposed method in reducing 4DCT uncertainties improved by about 18.8% in the averaged value of N-RMSD compared to the default phase sorting method. To the best of our knowledge, this is the first study that evaluates RMA indicators using both BHCT and 4DCT with visual-criteria-based manual sorting and proposes an improved 4DCT sorting strategy based on them.
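The abstract does not define N-RMSD explicitly; one common normalisation of an RMSD (by the reference intensity range) is sketched below purely as an illustration of the metric family, not as the authors' definition.

```python
import numpy as np

def n_rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised RMSD between two image slabs; normalisation by the reference
    intensity range is an assumption, not the paper's definition."""
    rmsd = np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))
    rng = float(a.max() - a.min())
    return rmsd / rng if rng else 0.0

# In a slice-sorting context, candidate slabs for a given couch position could be
# ranked by n_rmsd against the neighbouring, already-sorted slab (hypothetical use).
```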
Affiliation(s)
- Changhwan Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hojae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Seoul, Republic of Korea
- Sung-woo Kim
- Department of Radiation Oncology, Asan Medical Center, Seoul, Republic of Korea
- Youngmoon Goh
- Department of Radiation Oncology, Asan Medical Center, Seoul, Republic of Korea
- Min-jae Park
- Department of Radiation Oncology, Asan Medical Center, Seoul, Republic of Korea
- Hojin Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Chiyoung Jeong
- Department of Radiation Oncology, Asan Medical Center, Seoul, Republic of Korea
- Byungchul Cho
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Eun Kyung Choi
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang-wook Lee
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Yoon
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Su Ssan Kim
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jin-hong Park
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jinhong Jung
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Si Yeol Song
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jungwon Kwak
- Department of Radiation Oncology, Asan Medical Center, Seoul, Republic of Korea
7
He Y, Wang A, Li S, Yang Y, Hao A. Nonfinite-modality data augmentation for brain image registration. Comput Biol Med 2022; 147:105780. PMID: 35772329; DOI: 10.1016/j.compbiomed.2022.105780.
Abstract
Brain image registration is fundamental for brain medical image analysis. However, the lack of paired images with diverse modalities and corresponding ground truth deformations for training hinders its development. To combat this, we propose a novel nonfinite-modality data augmentation for brain image registration. Specifically, some available whole-brain segmentation masks, including complete fine brain anatomical structures, are collected from the actual brain dataset OASIS-3. One whole-brain segmentation mask can generate many nonfinite-modality brain images by randomly merging some fine anatomical structures and subsequently sampling the intensities for each fine anatomical structure from a random Gaussian distribution. Furthermore, to obtain more realistic deformations as the ground truth, an improved 3D Variational Auto-encoder (VAE) is proposed by introducing an intensity-level reconstruction loss and a structure-level reconstruction loss. Based on the generated images and the trained improved 3D VAE, a new Synthetic Nonfinite-Modality Brain Image Dataset (SNMBID) is created. Experiments show that pre-training on SNMBID can improve the accuracy of registration. Notably, SNMBID can serve as a benchmark for evaluating other brain registration methods, and the model trained on SNMBID can be a baseline for the brain image registration task. Our code is available at https://github.com/MangoWAY/SMIBID_BrainRegistration.
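A minimal sketch of the described augmentation idea, randomly merging segmentation labels and sampling Gaussian intensities per (merged) label; the merge probability, intensity ranges, and the input loading are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_nonfinite_modality(seg: np.ndarray, merge_prob: float = 0.3) -> np.ndarray:
    """Generate a synthetic image from a whole-brain segmentation mask by randomly
    merging labels and sampling Gaussian intensities per label (illustrative only)."""
    labels = [l for l in np.unique(seg) if l != 0]
    # Randomly merge some fine structures into shared intensity groups.
    groups = {l: (rng.choice(labels) if rng.random() < merge_prob else l) for l in labels}
    img = np.zeros(seg.shape, dtype=np.float32)
    for group in set(groups.values()):
        mean, std = rng.uniform(0.0, 1.0), rng.uniform(0.02, 0.1)
        member_mask = np.isin(seg, [l for l, g in groups.items() if g == group])
        img[member_mask] = rng.normal(mean, std, size=int(member_mask.sum()))
    return img

# seg = nib.load("oasis3_subject_seg.nii.gz").get_fdata()   # hypothetical input
# synthetic = synth_nonfinite_modality(seg)
```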
Affiliation(s)
- Yuanbo He
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aoyu Wang
- ByteDance Intelligent Creation, Beijing, 100191, China
- Shuai Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
- Yikang Yang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Aimin Hao
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China
8
Wilms M, Ehrhardt J, Forkert ND. Localized Statistical Shape Models for Large-scale Problems With Few Training Data. IEEE Trans Biomed Eng 2022; 69:2947-2957. PMID: 35271438; DOI: 10.1109/tbme.2022.3158278.
Abstract
OBJECTIVE Statistical shape models have been successfully used in numerous biomedical image analysis applications where prior shape information is helpful such as organ segmentation or data augmentation when training deep learning models. However, training such models requires large data sets, which are often not available and, hence, shape models frequently fail to represent local details of unseen shapes. This work introduces a kernel-based method to alleviate this problem via so-called model localization. It is specifically designed to be used in large-scale shape modeling scenarios like deep learning data augmentation and fits seamlessly into the classical shape modeling framework. METHOD Relying on recent advances in multi-level shape model localization via distance-based covariance matrix manipulations and Grassmannian-based level fusion, this work proposes a novel and computationally efficient kernel-based localization technique. Moreover, a novel way to improve the specificity of such models via normalizing flow-based density estimation is presented. RESULTS The method is evaluated on the publicly available JSRT/SCR chest X-ray and IXI brain data sets. The results confirm the effectiveness of the kernelized formulation and also highlight the models' improved specificity when utilizing the proposed density estimation method. CONCLUSION This work shows that flexible and specific shape models from few training samples can be generated in a computationally efficient way by combining ideas from kernel theory and normalizing flows. SIGNIFICANCE The proposed method together with its publicly available implementation allows to build shape models from few training samples directly usable for applications like data augmentation.
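A rough sketch of the general localization idea, element-wise multiplication of the sample covariance with a distance-based kernel before the eigendecomposition, is given below; the Gaussian kernel, its bandwidth, and the landmark ordering are assumptions, and the snippet does not reproduce the paper's multi-level Grassmannian fusion or density estimation.

```python
import numpy as np

def localized_ssm(shapes: np.ndarray, points: np.ndarray, bandwidth: float = 30.0, n_modes: int = 10):
    """Build a localized statistical shape model.

    shapes: (n_samples, 3*n_points) aligned training shapes, ordered [x1, y1, z1, x2, ...]
    points: (n_points, 3) mean landmark positions used to define the distance kernel
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean
    cov = (X.T @ X) / (shapes.shape[0] - 1)          # (3N, 3N) sample covariance

    # Distance-based Gaussian kernel, expanded to the per-coordinate block structure.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    k = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
    k3 = np.kron(k, np.ones((3, 3)))

    cov_loc = cov * k3                               # element-wise localization
    eigval, eigvec = np.linalg.eigh(cov_loc)
    order = np.argsort(eigval)[::-1][:n_modes]
    return mean, eigvec[:, order], eigval[order]
```

Because the Schur product of two positive semi-definite matrices stays positive semi-definite, the localized covariance remains a valid covariance and the resulting modes can be used like those of a classical shape model.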
9
Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE Trans Radiat Plasma Med Sci 2022; 6:158-181. PMID: 35992632; PMCID: PMC9385128; DOI: 10.1109/trpms.2021.3107454.
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated to the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy including image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of various aspects of radiotherapy.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
- Suraj Pai
- Maastricht University Medical Centre, Netherlands
- Leonard Wee
- Maastricht University Medical Centre, Netherlands
- Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
- Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
10
Ben-Zikri YK, Helguera M, Fetzer D, Shrier DA, Aylward SR, Chittajallu D, Niethammer M, Cahill ND, Linte CA. A Feature-based Affine Registration Method for Capturing Background Lung Tissue Deformation for Ground Glass Nodule Tracking. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 10:521-539. PMID: 36465979; PMCID: PMC9718421; DOI: 10.1080/21681163.2021.1994471.
Abstract
Lung nodule tracking assessment relies on cross-sectional measurements of the largest lesion profile depicted in initial and follow-up computed tomography (CT) images. However, apparent changes in nodule size assessed via simple image-based measurements may also be compromised by the effect of the background lung tissue deformation on the ground glass nodule (GGN) between the initial and follow-up images, leading to erroneous conclusions about nodule changes due to disease. To compensate for the lung deformation and enable consistent nodule tracking, here we propose a feature-based affine registration method and study its performance vis-a-vis several other registration methods. We implement and test each registration method using both a lung- and a lesion-centered region of interest on ten patient CT datasets featuring twelve nodules, including both benign and malignant ground glass opacity (GGO) lesions containing pure GGNs, part-solid, or solid nodules. We evaluate each registration method according to the target registration error (TRE) computed across 30-50 homologous fiducial landmarks surrounding the lesions and selected by expert radiologists in both the initial and follow-up patient CT images. Our results show that the proposed feature-based affine lesion-centered registration yielded a 1.1 ± 1.2 mm TRE, while a Symmetric Normalization deformable registration yielded a 1.2 ± 1.2 mm TRE, and a least-square fit registration of the 30-50 validation fiducial landmark set yielded a 1.5 ± 1.2 mm TRE. Although the deformable registration yielded a slightly higher registration accuracy than the feature-based affine registration, the affine approach is significantly more computationally efficient, eliminates the need for ambiguous segmentation of GGNs featuring ill-defined borders, and reduces the susceptibility to artificial deformations introduced by the deformable registration, which may lead to increased similarity between the registered initial and follow-up images, over-compensating for the background lung tissue deformation and, in turn, compromising the true disease-induced nodule change assessment. We also assessed the registration qualitatively, by visual inspection of the subtraction images, and conducted a pilot pre-clinical study that showed the proposed feature-based lesion-centered affine registration effectively compensates for the background lung tissue deformation between the initial and follow-up images and also serves as a reliable baseline registration method prior to assessing lung nodule changes due to disease.
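The target registration error used for validation is straightforward to compute; a minimal sketch, assuming landmark coordinates in millimetres and an arbitrary transform callable, is given below.

```python
import numpy as np

def tre(moving_landmarks: np.ndarray, fixed_landmarks: np.ndarray, transform):
    """Mean and standard deviation of Euclidean distances (mm) between transformed
    moving landmarks and their homologous fixed landmarks."""
    mapped = np.asarray([transform(p) for p in moving_landmarks])
    d = np.linalg.norm(mapped - fixed_landmarks, axis=1)
    return float(d.mean()), float(d.std())

# Example with a hypothetical affine transform (matrix A, translation t):
# mean_tre, std_tre = tre(lm_initial, lm_followup, lambda p: A @ p + t)
```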
Affiliation(s)
- Yehuda K. Ben-Zikri
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
- María Helguera
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA; Instituto Tecnológico José Mario Molina Pasquel y Henríquez, Unidad Lagos de Moreno, Jalisco, Mexico
- David Fetzer
- Dept. of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- David A. Shrier
- Dept. of Radiology, University of Rochester Medical Center, Rochester, NY, USA
- Marc Niethammer
- Dept. of Computer Science, University of North Carolina, Chapel Hill, NC, USA
- Nathan D. Cahill
- School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY, USA
- Cristian A. Linte
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA; Dept. of Biomedical Engineering, Rochester Institute of Technology, Rochester, NY, USA (corresponding author)
11
Juan-Cruz C, Fast MF, Sonke JJ. A multivariable study of deformable image registration evaluation metrics in 4DCT of thoracic cancer patients. Phys Med Biol 2021; 66:035019. PMID: 33227717; DOI: 10.1088/1361-6560/abcd18.
Abstract
Deformable image registration (DIR) accuracy is often validated using manually identified landmarks or known deformations generated using digital or physical phantoms. In daily practice, the application of these approaches is limited since they are time-consuming or require additional equipment. An alternative is the use of metrics automatically derived from the registrations, but their interpretation is not straightforward. In this work we aim to determine the suitability of DIR-derived metrics to validate the accuracy of 4 commonly used DIR algorithms. First, we investigated the DIR accuracy using a landmark-based metric (target registration error (TRE)) and a digital phantom-based metric (known deformation recovery error (KDE)). 4DCT scans of 16 thoracic cancer patients along with corresponding pairwise anatomical landmarks (AL) locations were collected from two public databases. Digital phantoms with known deformations were generated by each DIR algorithm to test all other algorithms and compute KDE. TRE and KDE were evaluated at AL. KDE was additionally quantified in coordinates randomly sampled (RS) inside the lungs. Second, we investigated the associations of 5 DIR-derived metrics (distance discordance metric (DDM), inverse consistency error (ICE), transitivity (TE), spatial (SS) and temporal smoothness (TS)) with DIR accuracy through uni- and multivariable linear regression models. TRE values were found higher compared to KDE values and these varied depending on the phantom used. The algorithm with the best accuracy achieved average values of TRE = 1.1 mm and KDE ranging from 0.3 to 0.8 mm. DDM was the best predictor of DIR accuracy, with moderate correlations (R² < 0.61). Poor correlations were obtained at AL for algorithms with better accuracy, which improved when evaluated at RS. Only slight correlation improvement was obtained with a multivariable analysis (R² < 0.64). DDM can be a useful metric to identify inaccuracies for different DIR algorithms without employing landmarks or digital phantoms.
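Of the registration-derived metrics, the inverse consistency error is the simplest to sketch: compose the forward and backward displacement fields and measure the residual. The snippet below assumes displacement fields in voxel units on a common grid; DDM and transitivity follow the same pattern across additional registrations.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(u_fwd: np.ndarray, u_bwd: np.ndarray) -> np.ndarray:
    """Voxel-wise ICE for displacement fields of shape (3, D, H, W), in voxel units.

    ICE(x) = || u_fwd(x) + u_bwd(x + u_fwd(x)) ||
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in u_fwd.shape[1:]], indexing="ij"))
    warped_pts = grid + u_fwd                       # x + u_fwd(x)
    u_bwd_at_warped = np.stack([
        map_coordinates(u_bwd[c], warped_pts, order=1, mode="nearest")  # trilinear lookup
        for c in range(3)
    ])
    return np.linalg.norm(u_fwd + u_bwd_at_warped, axis=0)
```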
Affiliation(s)
- Celia Juan-Cruz
- The Netherlands Cancer Institute, Radiotherapy Department, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Martin F Fast
- The Netherlands Cancer Institute, Radiotherapy Department, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Jan-Jakob Sonke
- The Netherlands Cancer Institute, Radiotherapy Department, Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
12
In-vivo lung biomechanical modeling for effective tumor motion tracking in external beam radiation therapy. Comput Biol Med 2021; 130:104231. PMID: 33524903; DOI: 10.1016/j.compbiomed.2021.104231.
Abstract
Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment; however, respiratory motion presents challenges that can compromise the accuracy and/or effectiveness of radiation treatment. Respiratory motion compensation using biomechanical modeling is a common approach used to address this challenge. This study focuses on the development and validation of a lung biomechanical model that can accurately estimate the motion and deformation of the lung tumor. Towards this goal, treatment planning 4D-CT images of lung cancer patients were processed to develop patient-specific finite element (FE) models of the lung to predict the patients' tumor motion/deformation. The tumor motion/deformation was modeled for a full respiration cycle, as captured by the 4D-CT scans. Parameters driving the lung and tumor deformation model were found through an inverse problem formulation. The CT datasets pertaining to the inhalation phases of respiration were used for validating the model's accuracy. The volumetric Dice similarity coefficient between the actual and simulated gross tumor volumes (GTVs) of the patients calculated across respiration phases was found to range between 0.80 ± 0.03 and 0.92 ± 0.01. The average error in estimating the tumor's center of mass calculated across respiration phases ranged between 0.50 ± 0.10 (mm) and 1.04 ± 0.57 (mm), indicating a reasonably good accuracy of the proposed model. The proposed model demonstrates favorable accuracy for estimating the lung tumor motion/deformation, and therefore can potentially be used in radiation therapy applications for respiratory motion compensation.
13
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
14
Madesta F, Sentker T, Gauer T, Werner R. Self-contained deep learning-based boosting of 4D cone-beam CT reconstruction. Med Phys 2020; 47:5619-5631. DOI: 10.1002/mp.14441.
Affiliation(s)
- Frederic Madesta
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Thilo Sentker
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- Tobias Gauer
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
- René Werner
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Germany
15
Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys 2020; 47:1763-1774. PMID: 32017141; PMCID: PMC7165051; DOI: 10.1002/mp.14065.
Abstract
PURPOSE To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to quickly predict the deformation vector field (DVF) in a few forward predictions. We have developed an unsupervised deep learning method for 4D-CT lung DIR with excellent performances in terms of registration accuracies, robustness, and computational speed. METHODS A fast and accurate 4D-CT lung DIR method, namely LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks which are CoarseNet and FineNet. As the name suggests, CoarseNet predicts large lung motion on a coarse scale image while FineNet predicts local lung motion on a fine scale image. Both the CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF to deform the moving image. The discriminator was trained to distinguish the deformed images from the original images. CoarseNet was first trained to deform the moving images. The deformed images were then used by the FineNet for FineNet training. To increase the registration accuracy of the LungRegNet, we generated vessel-enhanced images by generating pulmonary vasculature probability maps prior to the network prediction. RESULTS We performed fivefold cross validation on ten 4D-CT datasets from our department. To compare with other methods, we also tested our method using 10 separate DIRLAB datasets that provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggested that LungRegNet has achieved better registration accuracy in terms of TRE than other deep learning-based methods available in the literature on DIRLAB datasets. Compared to conventional DIR methods, LungRegNet could generate comparable registration accuracy with TRE smaller than 2 mm. The integration of both the discriminator and pulmonary vessel enhancements into the network was crucial to obtain high registration accuracy for 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and DIRLAB datasets respectively. CONCLUSIONS An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet has outperformed its deep-learning-based peers and achieved excellent registration accuracy in terms of TRE.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
16
Rajashekar D, Wilms M, MacDonald ME, Ehrhardt J, Mouches P, Frayne R, Hill MD, Forkert ND. High-resolution T2-FLAIR and non-contrast CT brain atlas of the elderly. Sci Data 2020; 7:56. PMID: 32066734; PMCID: PMC7026039; DOI: 10.1038/s41597-020-0379-9.
Abstract
Normative brain atlases are a standard tool for neuroscience research and are, for example, used for spatial normalization of image datasets prior to voxel-based analyses of brain morphology and function. Although many different atlases are publicly available, they are usually biased with respect to an imaging modality and the age distribution. Both effects are well known to negatively impact the accuracy and reliability of the spatial normalization process using non-linear image registration methods. An important and very active neuroscience area that lacks appropriate atlases is lesion-related research in elderly populations (e.g. stroke, multiple sclerosis) for which FLAIR MRI and non-contrast CT are often the clinical imaging modalities of choice. To overcome the lack of atlases for these tasks and modalities, this paper presents high-resolution, age-specific FLAIR and non-contrast CT atlases of the elderly generated using clinical images.
Affiliation(s)
- Deepthi Rajashekar
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada.
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada.
- Matthias Wilms
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- M Ethan MacDonald
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Healthy Brain Aging Lab, University of Calgary, Calgary, AB, Canada
- Jan Ehrhardt
- Institute of Medical Informatics, University of Luebeck, Lübeck, Germany
- Pauline Mouches
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Richard Frayne
- Seaman Family MR Research Center, Foothills Medical Centre, Calgary, AB, Canada
- Calgary Image Processing and Analysis Center (CIPAC), Foothills Medical Centre, Calgary, AB, Canada
- Michael D Hill
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Nils D Forkert
- Department of Radiology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
17
Minaeizaeim H, Kumar H, Tawhai M, King C, Hoffman E, Wilsher M, Milne D, Clark A. Do pulmonary cavity shapes influence lung function? J Biomech Eng 2019; 141:2737110. PMID: 31233096; DOI: 10.1115/1.4044092.
Abstract
Distribution of lung tissue within the chest cavity is a key contributor to delivery of both blood and air to the gas exchange regions of the lung. This distribution is multifactorial with influences from parenchyma, gravity and level of inflation. We hypothesize that the manner in which the lung inflates, for example, the primarily diaphragmatic nature of normal breathing, is an important contributor to regional lung tissue distribution. To investigate this hypothesis, we present an organ-level model of lung tissue mechanics which incorporates pleural cavity change due to change in lung volume or posture. We quantify the changes using shape and density metrics in ten healthy subjects scanned supine at end-inspiratory and end-expiratory volumes and ten subjects scanned at both supine and prone end-inspiratory volumes. Comparing end-expiratory to end-inspiratory volume, we see primarily a change in the cranial-caudal dimension of the lung, reflective of movement of the diaphragm. In the diaphragmatic region there is greater regional lung expansion than in the cranial aspect, which is restricted by the chest wall. When moving from supine to prone, a restriction of the lung was observed anteriorly, resulting in a generally reduced lung volume and a redistribution of air volume posteriorly. In general, we see the highest lung tissue density heterogeneity in regions of the lung that are most inflated. Using our computational model, we quantify the impact of pleural cavity shape change on regional lung distribution, and predict the impact on regional elastic recoil pressure.
Affiliation(s)
- Hamed Minaeizaeim
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Haribalan Kumar
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Merryn Tawhai
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
- Clair King
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand; Auckland City Hospital, Auckland, New Zealand
- Eric Hoffman
- The University of Iowa, Iowa City, Iowa, United States
- Margaret Wilsher
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand; Auckland City Hospital, Auckland, New Zealand
- David Milne
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand; Auckland City Hospital, Auckland, New Zealand
- Alys Clark
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
18
Unsupervised pathology detection in medical images using conditional variational autoencoders. Int J Comput Assist Radiol Surg 2018; 14:451-461. PMID: 30542975; DOI: 10.1007/s11548-018-1898-0.
Abstract
PURPOSE Pathology detection in medical image data is an important but rather complicated task. In particular, the large variability of the pathologies is a challenge to automatic detection methods and even to machine learning methods. Supervised algorithms would usually learn the appearance of a single pathological structure based on a large annotated dataset. As such data is not usually available, especially in large amounts, in this work we pursue a different, unsupervised approach. METHODS Our method is based on learning the entire variability of healthy data and detecting pathologies by their differences to the learned norm. For this purpose, we use conditional variational autoencoders which learn the reconstruction and encoding distribution of healthy images and also have the ability to integrate certain prior knowledge about the data (condition). RESULTS Our experiments on different 2D and 3D datasets show that the approach is suitable for the detection of pathologies and delivers reasonable Dice coefficients and AUCs. The method can also estimate missing correspondences in pathological images and thus can be used as a pre-step to a registration method. Our experiments show improved registration results on pathological data when using this approach. CONCLUSIONS Overall, the presented approach is suitable for a rough pathology detection in medical images and can be successfully used as a preprocessing step for other image processing methods.
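The detection step described above reduces to thresholding per-voxel reconstruction residuals; a schematic sketch is shown below, where `reconstruct` is a placeholder for the trained conditional VAE's encode/decode pass and the threshold choice is an assumption.

```python
import numpy as np

def detect_pathology(image: np.ndarray, reconstruct, threshold: float) -> np.ndarray:
    """Binary anomaly mask from per-voxel reconstruction residuals.

    `reconstruct` stands in for the trained conditional VAE (conditioned, e.g.,
    on slice position or acquisition parameters); it should return an image of
    the same shape representing the closest "healthy" reconstruction.
    """
    residual = np.abs(image - reconstruct(image))
    return residual > threshold

# The detected regions can be compared to expert lesion masks (e.g., via a Dice
# coefficient) or masked out / down-weighted in a subsequent registration step.
```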
19
Rao F, Li WL, Yin ZP. Non-rigid point cloud registration based lung motion estimation using tangent-plane distance. PLoS One 2018; 13:e0204492. PMID: 30256830; PMCID: PMC6157875; DOI: 10.1371/journal.pone.0204492.
Abstract
Accurate estimation of the motion field in respiration-correlated 4DCT images is a precondition for the analysis of patient-specific breathing dynamics and subsequent image-supported treatment planning. However, lung motion estimation often suffers from sliding motion. In this paper, a novel lung motion estimation method based on the non-rigid registration of point clouds is proposed, and the tangent-plane distance is used as the distance term, which describes the difference between two point clouds. A local affine transformation model is used to express the non-rigid deformation of the lung motion. The final objective function is expressed in Frobenius norm form, and a matrix optimization scheme is carried out to find the optimal transformation parameters that minimize the objective function. A key advantage of our proposed method is that it alleviates the requirement that the source point cloud and the reference point cloud be in a one-to-one correspondence, a requirement that is difficult to satisfy in practical applications. Furthermore, the proposed method takes the sliding motion of the lung into consideration and improves the registration accuracy by reducing the constraint on motion along the tangent direction. Non-rigid registration experiments are carried out to validate the performance of the proposed method using POPI-model data. The results demonstrate that the proposed method outperforms the traditional method with an accuracy increase of about 20%.
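The tangent-plane distance term corresponds to the classical point-to-plane metric from ICP-style registration; a minimal sketch is given below, assuming unit surface normals are available for the reference point cloud.

```python
import numpy as np
from scipy.spatial import cKDTree

def tangent_plane_distances(source: np.ndarray, reference: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """Point-to-plane distances: project the residual of each source point onto the
    surface normal of its nearest reference point, so that sliding along the tangent
    plane is not penalised.

    source: (N, 3), reference: (M, 3), normals: (M, 3) unit normals of `reference`.
    """
    _, idx = cKDTree(reference).query(source)
    residual = source - reference[idx]
    return np.abs(np.einsum("ij,ij->i", residual, normals[idx]))

# A registration objective could sum the squares of these distances and optimise
# local affine transformation parameters, as described in the abstract.
```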
Affiliation(s)
- Fan Rao
- State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, People’s Republic of China
- Wen-long Li
- State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, People’s Republic of China
- Zhou-ping Yin
- State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, People’s Republic of China
20
Mogadas N, Sothmann T, Knopp T, Gauer T, Petersen C, Werner R. Influence of deformable image registration on 4D dose simulation for extracranial SBRT: A multi-registration framework study. Radiother Oncol 2018; 127:225-232. PMID: 29606523; DOI: 10.1016/j.radonc.2018.03.015.
Abstract
BACKGROUND AND PURPOSE To evaluate the influence of deformable image registration approaches on correspondence model-based 4D dose simulation in extracranial SBRT by means of open source deformable image registration (DIR) frameworks. MATERIAL AND METHODS Established DIR algorithms of six different open source DIR frameworks were considered and registration accuracy evaluated using freely available 4D image data. Furthermore, correspondence models (regression-based correlation of external breathing signal measurements and internal structure motion field) were built and model accuracy evaluated. Finally, the DIR algorithms were applied for motion field estimation in radiotherapy planning 4D CT data of five lung and five liver lesion patients, correspondence model formation, and model-based 4D dose simulation. Deviations between the original, statically planned and the 4D-simulated VMAT dose distributions were analyzed and correlated to DIR accuracy differences. RESULTS Registration errors varied among the DIR approaches, with lower DIR accuracy translating into lower correspondence modeling accuracy. Yet, for lung metastases, indices of 4D-simulated dose distributions widely agreed, irrespective of DIR accuracy differences. In contrast, 4D dose simulation results for liver metastases varied strongly for the different DIR approaches. CONCLUSIONS Especially in treatment areas with low image contrast (e.g. the liver), DIR-based 4D dose simulation results strongly depend on the applied DIR algorithm, rendering the resulting dose simulations and indices questionable.
Affiliation(s)
- Nik Mogadas
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
- Thilo Sothmann
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Germany; Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Germany
- Tobias Knopp
- Section for Biomedical Imaging, University Medical Center Hamburg-Eppendorf, Germany
- Tobias Gauer
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Germany
- Cordula Petersen
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Germany
- René Werner
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
21
Fischer P, Faranesh A, Pohl T, Maier A, Rogers T, Ratnayaka K, Lederman R, Hornegger J. An MR-Based Model for Cardio-Respiratory Motion Compensation of Overlays in X-Ray Fluoroscopy. IEEE Trans Med Imaging 2018; 37:47-60. PMID: 28692969; PMCID: PMC5750091; DOI: 10.1109/tmi.2017.2723545.
Abstract
In X-ray fluoroscopy, static overlays are used to visualize soft tissue. We propose a system for cardiac and respiratory motion compensation of these overlays. It consists of a 3-D motion model created from real-time magnetic resonance (MR) imaging. Multiple sagittal slices are acquired and retrospectively stacked to consistent 3-D volumes. Slice stacking considers cardiac information derived from the ECG and respiratory information extracted from the images. Additionally, temporal smoothness of the stacking is enhanced. Motion is estimated from the MR volumes using deformable 3-D/3-D registration. The motion model itself is a linear direct correspondence model using the same surrogate signals as slice stacking. In X-ray fluoroscopy, only the surrogate signals need to be extracted to apply the motion model and animate the overlay in real time. For evaluation, points are manually annotated in oblique MR slices and in contrast-enhanced X-ray images. The 2-D Euclidean distance of these points is reduced from 3.85 to 2.75 mm in MR and from 3.0 to 1.8 mm in X-ray compared with the static baseline. Furthermore, the motion-compensated overlays are shown qualitatively as images and videos.
22
Sentker T, Madesta F, Werner R. GDL-FIRE4D: Deep Learning-Based Fast 4D CT Image Registration. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. DOI: 10.1007/978-3-030-00928-1_86.
23
Sothmann T, Gauer T, Wilms M, Werner R. Correspondence model-based 4D VMAT dose simulation for analysis of local metastasis recurrence after extracranial SBRT. Phys Med Biol 2017; 62:9001-9017. DOI: 10.1088/1361-6560/aa955b.
24
Chen D, Xie H, Zhang S, Gu L. Lung respiration motion modeling: a sparse motion field presentation method using biplane x-ray images. Phys Med Biol 2017; 62:7855-7873. DOI: 10.1088/1361-6560/aa8841.
25
Cai K, Yang R, Yue H, Li L, Ou S, Liu F. Dynamic updating atlas for heart segmentation with a nonlinear field-based model. Int J Med Robot 2017; 13. PMID: 27862910; DOI: 10.1002/rcs.1785.
Abstract
BACKGROUND Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. METHODS This study proposes an atlas dynamic update algorithm using a scheme of nonlinear deformation field. The proposed method is based on the features among double-source CT (DSCT) slices. The extraction of these features will form a base to construct an average model and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. RESULTS The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). CONCLUSION Our proposed method that combines a nonlinear field-based model and dynamic updating atlas strategies can provide an effective and accurate way for whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences.
Collapse
Affiliation(s)
- Ken Cai
- School of Information Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou, 510225, China
| | - Rongqian Yang
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, 510006, China
| | - Hongwei Yue
- School of Information Engineering, Wuyi University, Jiangmen, 529020, China
| | - Lihua Li
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, 510006, China
| | - Shanxing Ou
- Department of Radiology, General Hospital of Guangzhou Military Command of PLA, Guangzhou, 510010, China
| | - Feng Liu
- School of Information Technology and Electrical Engineering, the University of Queensland, Brisbane, QLD 4072, Australia
| |
Collapse
|
26
|
Wilms M, Werner R, Yamamoto T, Handels H, Ehrhardt J. Subpopulation-based correspondence modelling for improved respiratory motion estimation in the presence of inter-fraction motion variations. Phys Med Biol 2017; 62:5823-5839. [DOI: 10.1088/1361-6560/aa70cc] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
27
|
BEM-based simulation of lung respiratory deformation for CT-guided biopsy. Int J Comput Assist Radiol Surg 2017; 12:1585-1597. [DOI: 10.1007/s11548-017-1603-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2016] [Accepted: 04/27/2017] [Indexed: 12/25/2022]
|
28
|
Sothmann T, Gauer T, Werner R. 4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters. PLoS One 2017; 12:e0172810. [PMID: 28231337 PMCID: PMC5322962 DOI: 10.1371/journal.pone.0172810] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2016] [Accepted: 02/09/2017] [Indexed: 12/14/2022] Open
Abstract
Radiotherapy of lung and liver lesions has changed from normofractionated 3D-CRT to stereotactic treatment in a single or a few fractions, often employing volumetric arc therapy (VMAT)-based techniques. Potential unintended interference of respiratory target motion and dynamically changing beam parameters during VMAT dose delivery motivates establishing 4D quality assurance (4D QA) procedures to assess the appropriateness of generated VMAT treatment plans when patient-specific motion characteristics are taken into account. Current approaches are motion phantom-based 4D QA and image-based 4D VMAT dose simulation. Whereas phantom-based 4D QA is usually restricted to a small number of measurements, the computational approaches allow simulating many motion scenarios. However, 4D VMAT dose simulation depends on various input parameters that influence the estimated doses and thereby limit simulation reliability. Thus, aiming at routine use of simulation-based 4D VMAT QA, the impact of such parameters as well as the overall accuracy of the 4D VMAT dose simulation has to be studied in detail, which is the topic of the present work. We introduce the principles of 4D VMAT dose simulation, identify influencing parameters, and assess their impact on 4D dose simulation accuracy by comparing simulated motion-affected dose distributions to corresponding dosimetric motion phantom measurements. Using an ITV-based treatment planning approach, VMAT treatment plans were generated for a motion phantom and different motion scenarios (sinusoidal motion of different period/direction; regular/irregular motion). 4D VMAT dose simulation results and dose measurements were compared by local 3%/3 mm γ-evaluation, with the measured dose distributions serving as ground truth. Overall γ-passing rates of simulations and dynamic measurements ranged from 97% to 100% (mean across all motion scenarios: 98% ± 1%); corresponding values for the comparison of repeat measurements on different days were between 98% and 100%. Parameters with major influence on 4D VMAT dose simulation accuracy were the degree of temporal discretization of the dose delivery process (the finer, the better) and the correct alignment of the assumed breathing phases at the beginning of the dose measurements and simulations. Given the high γ-passing rates between simulated motion-affected doses and dynamic measurements, we consider the simulations to provide a reliable basis for assessing VMAT motion effects that, in the sense of 4D QA of VMAT treatment plans, allows verification of target coverage in hypofractionated VMAT-based radiotherapy of moving targets. Remaining differences between measurements and simulations, however, motivate further detailed studies.
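The comparison of simulated and measured dose distributions relies on the gamma index. The following Python sketch shows a brute-force 1-D local 3%/3 mm gamma evaluation to make the measure concrete; the profile shapes and names are illustrative assumptions, and clinical implementations operate on 2-D/3-D dose grids with additional low-dose thresholds.

# Minimal 1-D sketch of a local 3%/3 mm gamma evaluation, comparing a simulated
# dose profile against a measured reference profile. Brute-force illustration,
# not a clinical QA implementation.
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=3.0, dd_frac=0.03):
    gammas = np.empty_like(ref_dose)
    for i, (r, d_r) in enumerate(zip(ref_pos, ref_dose)):
        dist = (eval_pos - r) / dta_mm                      # spatial (distance-to-agreement) term
        dose = (eval_dose - d_r) / (dd_frac * d_r)          # local dose-difference term
        gammas[i] = np.min(np.sqrt(dist ** 2 + dose ** 2))  # generalized gamma value
    return gammas

# Toy profiles: measured (reference) vs. simulated (evaluated) dose.
x = np.linspace(0.0, 50.0, 101)                 # positions in mm
measured = 2.0 * np.exp(-((x - 25.0) / 10.0) ** 2) + 0.1
simulated = 2.0 * np.exp(-((x - 25.5) / 10.0) ** 2) + 0.1
gamma = gamma_index_1d(x, measured, x, simulated)
passing_rate = float(np.mean(gamma <= 1.0))     # fraction of points with gamma <= 1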
Collapse
Affiliation(s)
- Thilo Sothmann
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Hamburg, Germany
- * E-mail:
| | - Tobias Gauer
- Department of Radiotherapy and Radio-Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Hamburg, Germany
| | - René Werner
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Hamburg, Germany
| |
Collapse
|
29
|
Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging. J Comput Assist Tomogr 2017; 40:899-906. [PMID: 27331925 DOI: 10.1097/rct.0000000000000450] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVE We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). METHODS A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and with 4D CT maps of subjects without impaired lung function (serving as reference). RESULTS Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison with 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. CONCLUSIONS The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and for comparison of patient data. The proposed atlas-based reference coordinate system provides an automated means of annotating and comparing multimodal lung image data.
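The abstract does not spell out how the ventilation maps are computed. One common surrogate, used here purely as an assumed example and not necessarily the cited method, is the Jacobian determinant of the exhale-to-inhale displacement field, whose deviation from one approximates the regional relative volume change. The following Python sketch computes this quantity voxel-wise.

# Sketch: regional ventilation estimated from the Jacobian determinant of a
# displacement field (an assumed, commonly used surrogate). Values > 1 indicate
# local volume expansion.
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """disp: displacement field of shape (3, Z, Y, X) in physical units."""
    grads = [np.gradient(disp[c], *spacing) for c in range(3)]  # dU_c / d(z, y, x)
    F = np.zeros(disp.shape[1:] + (3, 3))                       # deformation gradient F = I + dU/dX
    for c in range(3):
        for a in range(3):
            F[..., c, a] = grads[c][a]
        F[..., c, c] += 1.0
    return np.linalg.det(F)

# Toy usage: a uniform 5% stretch along z should yield det(F) close to 1.05.
z = np.arange(20, dtype=float).reshape(20, 1, 1) * np.ones((20, 20, 20))
disp = np.stack([0.05 * z, np.zeros_like(z), np.zeros_like(z)])
ventilation = jacobian_determinant(disp) - 1.0  # relative local volume change per voxel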
Collapse
|
30
|
Yi J, Yang H, Yang X, Chen G. Lung motion estimation by robust point matching and spatiotemporal tracking for 4D CT. Comput Biol Med 2016; 78:107-119. [PMID: 27684323 DOI: 10.1016/j.compbiomed.2016.09.015] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2016] [Revised: 09/10/2016] [Accepted: 09/16/2016] [Indexed: 10/21/2022]
Abstract
We propose a deformable registration approach to estimate patient-specific lung motion during free breathing from four-dimensional (4D) computed tomography (CT), based on point matching and tracking between images of different phases. First, a robust point matching (RPM) algorithm coarsely aligns the source phase image onto all other target phase images of the 4D CT. The scale-invariant feature transform (SIFT) is incorporated into the cost function to accelerate and stabilize the convergence of the point matching. Next, the temporal consistency of the estimated lung motion model is preserved by fitting the trajectories of the points across the respiratory phases using L1-norm regularization. The fitted positions of a point along its trajectory are then used as initial positions for point tracking, and spatial mean-shift iteration is employed to track the points in all phase images. The tracked positions in all phases are used to perform RPM again. These steps are repeated until the number of updated points falls below a given threshold σ. With this method, the correspondence between the source phase image and the other target phase images is established more accurately, and the trajectory fitting ensures that the estimated trajectories do not fluctuate excessively. We evaluated our method on the public DIR-Lab, POPI-model, CREATIS, and COPDgene lung datasets. The proposed method achieved satisfactory registration accuracy and preserved the topology of the deformation fields well, even for registrations with large deformations.
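The trajectory-fitting step mentioned above (fitting point trajectories across respiratory phases with L1-norm regularization) can be illustrated with a small lasso-type fit. The Python sketch below uses a polynomial basis and plain iterative soft-thresholding (ISTA); the basis choice, penalty weight, and toy data are assumptions of this illustration, not details of the cited method.

# Sketch: per-point positions across respiratory phases fitted with a low-order
# polynomial under an L1 penalty, solved with plain ISTA. Assumptions only.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_trajectory_l1(phases, positions, degree=3, lam=0.1, n_iter=500):
    """phases: (P,) in [0, 1]; positions: (P,) 1-D coordinate of one point."""
    A = np.vander(phases, degree + 1, increasing=True)    # polynomial basis
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2            # ISTA step size (1 / Lipschitz constant)
    w = np.zeros(degree + 1)
    for _ in range(n_iter):
        grad = A.T @ (A @ w - positions)                  # gradient of the least-squares term
        w = soft_threshold(w - step * grad, step * lam)   # proximal step for the L1 penalty
    return w

# Toy usage: a noisy trajectory over 10 respiratory phases.
phases = np.linspace(0.0, 1.0, 10)
positions = 5.0 * np.sin(2 * np.pi * phases) + 0.2 * np.random.default_rng(2).normal(size=10)
w = fit_trajectory_l1(phases, positions)
smoothed = np.vander(phases, len(w), increasing=True) @ w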
Collapse
Affiliation(s)
- Jianbing Yi
- National High Performance Computing Center at Shenzhen, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, China; College of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, Jiangxi, China
| | - Hao Yang
- Xi'an Electric Power College, Changle West Road 180, Xi'an, Shaanxi, China
| | - Xuan Yang
- National High Performance Computing Center at Shenzhen, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, China.
| | - Guoliang Chen
- National High Performance Computing Center at Shenzhen, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, Guangdong, China
| |
Collapse
|
31
|
|
32
|
Qin B, Shen Z, Zhou Z, Zhou J, Lv Y. Structure matching driven by joint-saliency-structure adaptive kernel regression. Appl Soft Comput 2016. [DOI: 10.1016/j.asoc.2015.10.035] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
33
|
Heinrich MP, Simpson IJ, Papież BW, Brady SM, Schnabel JA. Deformable image registration by combining uncertainty estimates from supervoxel belief propagation. Med Image Anal 2016; 27:57-71. [DOI: 10.1016/j.media.2015.09.005] [Citation(s) in RCA: 53] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Revised: 09/20/2015] [Accepted: 09/22/2015] [Indexed: 11/26/2022]
|
34
|
Fortmeier D, Wilms M, Mastmeyer A, Handels H. Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models. IEEE Trans Haptics 2015; 8:371-383. [PMID: 26087498 DOI: 10.1109/toh.2015.2445768] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real time by ray-casting-based rendering of a reference CT image warped by a time-variant displacement field, which is computed from the motion models at run time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and to the haptic simulation of inserting a virtual bendable needle. To this end, different motion models applicable in real time are presented, and the methods are integrated into a needle-puncture training simulation framework that can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates of around 2,000 Hz for the haptic simulation and interactive frame rates for volume rendering and are thus well suited for visuo-haptic rendering of virtual patients under respiratory motion.
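Two runtime operations carry the concept described above: warping the reference CT by the current displacement field for rendering, and translating a device position into the space of the reference image. The Python sketch below illustrates both using SciPy's linear interpolation; the first-order pull-back used for the device position and all names are assumptions of this sketch, not the implementation of the cited work.

# Sketch: (1) warp the reference volume by a displacement field, (2) pull a
# device position back into reference space by a first-order approximation.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(reference, disp):
    """reference: (Z, Y, X); disp: (3, Z, Y, X) displacement in voxel units."""
    grid = np.indices(reference.shape).astype(float)
    sample_at = grid + disp                       # pull-back sampling positions
    return map_coordinates(reference, sample_at, order=1, mode="nearest")

def device_to_reference(position, disp):
    """Approximate inverse mapping: subtract the displacement sampled at the
    device position (valid for small, smooth displacements)."""
    coords = np.array(position, float).reshape(3, 1)
    d = np.array([map_coordinates(disp[c], coords, order=1, mode="nearest")[0]
                  for c in range(3)])
    return np.asarray(position, float) - d

# Toy usage: a constant shift of 1.5 voxels along z.
ref = np.random.default_rng(3).normal(size=(16, 16, 16))
disp = np.zeros((3, 16, 16, 16))
disp[0] = 1.5
warped = warp_volume(ref, disp)
p_ref = device_to_reference((8.0, 8.0, 8.0), disp)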
Collapse
|
35
|
A fast algorithm to estimate inverse consistent image transformation based on corresponding landmarks. Comput Med Imaging Graph 2015; 45:84-98. [PMID: 26363254 DOI: 10.1016/j.compmedimag.2015.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2014] [Revised: 03/24/2015] [Accepted: 04/17/2015] [Indexed: 10/23/2022]
Abstract
Inverse consistency is an important property of non-rigid image transformations in medical image analysis. In this paper, a simple and efficient algorithm for estimating inverse consistent image transformations is proposed that preserves landmark correspondence and accelerates convergence. The proposed algorithm estimates the forward and backward transformations simultaneously, such that they are inverse to each other, based on the correspondence of landmarks. Instead of computing the inverse functions and the inverse consistent transformations separately, we combine the two computations, which significantly improves computational efficiency. Moreover, a radial basis function (RBF)-based transformation is adopted, which can handle deformations with local or global support. Our algorithm maps each landmark exactly to its corresponding position under the forward and backward transformations. The algorithm is further employed to estimate the forward and backward transformations in robust point matching, demonstrating its application in image registration. Experimental results on uniform grids and test images show that the proposed algorithm improves the inverse consistency of the transformations and reduces the computation time for the forward and backward transformations. The performance of the algorithm applied to robust point matching is evaluated on both brain and lung slices. Our experiments show that combining robust point matching with our algorithm improves registration accuracy while preserving the smoothness of the transformations.
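To make the RBF-based landmark transformation concrete, the Python sketch below fits Gaussian-RBF displacement interpolants in both directions and measures how far the forward-backward composition deviates from the identity on test points. The kernel choice, the small regularization term, and this post-hoc consistency check are assumptions of the sketch; the cited algorithm estimates both directions jointly rather than checking consistency afterwards.

# Sketch: Gaussian-RBF landmark transformation plus a simple forward/backward
# consistency check. Kernel, regularization, and check are assumptions.
import numpy as np

def fit_rbf(landmarks_src, landmarks_dst, sigma=30.0):
    """Fit per-dimension RBF weights so that src landmarks map onto dst landmarks."""
    d2 = ((landmarks_src[:, None, :] - landmarks_src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    weights = np.linalg.solve(K + 1e-6 * np.eye(len(K)), landmarks_dst - landmarks_src)
    return landmarks_src, weights, sigma

def apply_rbf(points, model):
    centers, weights, sigma = model
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return points + K @ weights

# Toy usage: forward model src->dst, backward model dst->src, then measure how
# far the composition deviates from the identity on random test points.
rng = np.random.default_rng(4)
src = rng.uniform(0, 100, size=(10, 2))
dst = src + rng.normal(scale=3.0, size=(10, 2))
fwd = fit_rbf(src, dst)
bwd = fit_rbf(dst, src)
test = rng.uniform(0, 100, size=(200, 2))
icc_error = np.linalg.norm(apply_rbf(apply_rbf(test, fwd), bwd) - test, axis=1).mean()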
Collapse
|
36
|
|